Sample records for optimal interpolation method

  1. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, including inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) from the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences among the interpolation methods and the uncertainties of the interpolation results are discussed, several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling, and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas.
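    The IDW variants compared above differ only in the power p. A minimal Python sketch of IDW prediction follows; the sample coordinates and concentrations are made-up stand-ins, not the study's data.

      import numpy as np

      def idw(xy, values, query, p=2):
          """Inverse distance weighting prediction at one query point."""
          d = np.linalg.norm(xy - query, axis=1)
          if np.any(d == 0):                 # query coincides with a sample
              return values[np.argmin(d)]
          w = 1.0 / d**p                     # weights decay with distance
          return w @ values / w.sum()

      pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .2]])
      conc = np.array([12., 20., 15., 30., 18.])     # made-up concentrations
      print(idw(pts, conc, np.array([0.5, 0.5]), p=2))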

  2. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on the isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
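    A hedged sketch of the L2-optimal coefficients described here: the kernel at a target temperature is reconstructed as a least-squares combination of reference kernels. A Gaussian whose width grows as sqrt(T) stands in for the true Doppler kernel, which is more involved.

      import numpy as np

      x = np.linspace(-5, 5, 401)                       # kernel support grid
      def kernel(T):                                    # width grows as sqrt(T)
          s = np.sqrt(T / 300.0)
          return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

      T_refs = [300.0, 600.0, 1200.0, 2400.0]           # reference temperatures T_j
      A = np.column_stack([kernel(T) for T in T_refs])
      target = kernel(1000.0)                           # kernel to reconstruct

      coef, *_ = np.linalg.lstsq(A, target, rcond=None) # L2-optimal coefficients
      print(coef, np.linalg.norm(A @ coef - target))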

  3. A hierarchical transition state search algorithm

    NASA Astrophysics Data System (ADS)

    del Campo, Jorge M.; Köster, Andreas M.

    2008-07-01

    A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constrain in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.

  4. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, to maximize sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can represent air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.

  5. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on the isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.

  6. Optimization and comparison of three spatial interpolation methods for electromagnetic levels in the AM band within an urban area.

    PubMed

    Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio

    2018-04-01

    A comparative study was made of three methods of interpolation after optimization of their characteristic parameters: inverse distance weighting (IDW), spline and ordinary kriging. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control point field measurements were obtained with the IDW method.
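    The parameter optimization and control-point validation described above can be emulated with a leave-one-out search over the IDW power; the field values below are synthetic, not the measured AM-band levels.

      import numpy as np

      def idw(P, v, q, p):
          d = np.linalg.norm(P - q, axis=1)
          w = 1.0 / d**p                     # d > 0 here: q is a held-out point
          return w @ v / w.sum()

      rng = np.random.default_rng(0)
      P = rng.uniform(0, 1, (40, 2))         # measurement locations
      v = np.sin(3 * P[:, 0]) + P[:, 1]      # synthetic field levels

      for p in (1, 2, 3):                    # candidate IDW powers
          err = [idw(np.delete(P, i, 0), np.delete(v, i), P[i], p) - v[i]
                 for i in range(len(v))]
          print(p, np.sqrt(np.mean(np.square(err))))   # LOO RMSE per power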

  7. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction.

    PubMed

    Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan

    2017-04-04

    Image super-resolution using a self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images while still preserving enough structure of the image, such as the texture. This paper presents a new single image super-resolution method based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via the optimal fractional-order gradient is first constructed according to the image similarity, and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom, the extra free parameter α (the fractional order), which helps optimize the implementation quality. The proposed method is able to produce rich texture detail while maintaining structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques.

  8. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

    The growing string method is a powerful tool for the systematic study of chemical reactions with theoretical methods, allowing the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures that are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in computational cost (30-50%) is achieved.

  9. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions]

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for the effectiveness of their gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
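    The OI gain-matrix update under study has the standard closed form x_a = x_b + K(y - H x_b) with K = B H^T (H B H^T + R)^(-1); a toy NumPy realization follows, with illustrative covariances and a made-up observation operator.

      import numpy as np

      n, m = 5, 2                            # state size, number of observations
      B = 0.5 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
      R = 0.1 * np.eye(m)                    # observation-error covariance
      H = np.zeros((m, n))
      H[0, 1] = H[1, 3] = 1.0                # observe state components 1 and 3

      xb = np.zeros(n)                       # background (forecast) state
      y = np.array([1.0, -0.5])              # observations

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # OI gain matrix
      xa = xb + K @ (y - H @ xb)                     # analysis state
      print(xa)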

  10. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
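    CONORBIT's surrogates are interpolating RBF models; the following self-contained sketch fits a cubic RBF with a linear polynomial tail to a handful of "expensive" samples (the quadratic objective is a stand-in, not one of the paper's test problems).

      import numpy as np

      def fit_rbf(X, f):
          """Interpolating cubic RBF with a linear polynomial tail."""
          n, d = X.shape
          Phi = np.linalg.norm(X[:, None] - X[None, :], axis=2) ** 3
          P = np.hstack([np.ones((n, 1)), X])
          A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
          c = np.linalg.solve(A, np.concatenate([f, np.zeros(d + 1)]))
          return lambda x: (np.linalg.norm(x - X, axis=1) ** 3) @ c[:n] \
              + c[n] + x @ c[n + 1:]

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, (12, 2))           # sampled design points
      f = (X ** 2).sum(axis=1)                  # "expensive" objective values
      s = fit_rbf(X, f)
      print(s(np.array([0.3, -0.2])), 0.3**2 + 0.2**2)   # surrogate vs. truth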

  11. A kriging metamodel-assisted robust optimization method based on a reverse model

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimal and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures, which require a large amount of computational effort because the robustness of each candidate solution delivered from the outer level must be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Because K-RMRO ignores the interpolation uncertainties from kriging, it may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner-level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed by the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  12. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
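    The optimal sequence of interpolation functions above is found with the Viterbi algorithm; the following generic Viterbi recursion over a toy two-state chain illustrates the mechanism (transition and emission scores are invented, not the paper's model).

      import numpy as np

      def viterbi(log_trans, log_emit):
          """log_trans: (S, S); log_emit: (T, S); returns the best state path."""
          T, S = log_emit.shape
          score = log_emit[0].copy()
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              cand = score[:, None] + log_trans   # every previous-to-current move
              back[t] = cand.argmax(axis=0)
              score = cand.max(axis=0) + log_emit[t]
          path = [int(score.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t][path[-1]]))
          return path[::-1]

      lt = np.log([[0.8, 0.2], [0.3, 0.7]])            # sticky transitions
      le = np.log([[0.9, 0.1], [0.6, 0.4], [0.1, 0.9]])
      print(viterbi(lt, le))                           # -> [0, 0, 1]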

  13. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

    Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate more CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. Compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the interpolation-induced image blur remained below that of the other interpolation methods.

  14. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the computational efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.

  15. A study on characteristics of retrospective optimal interpolation with WRF testbed

    NASA Astrophysics Data System (ADS)

    Kim, S.; Noh, N.; Lim, G.

    2012-12-01

    This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the analysis window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data at post-analysis times using a perturbation method (Errico and Raeder, 1999) without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and to investigate its capability. The computational cost of ROI can be reduced owing to the eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a single-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation clearly increases with time, which indicates that assimilation improves the forecast. The characteristics and strengths/weaknesses of the ROI method are investigated by conducting experiments with other data assimilation methods.

  16. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal and empirical Bayesian kriging, and co-kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of the uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-kriging incorporating secondary data (e.g. topography, river levels).

  17. Ortho Image and DTM Generation with Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Artificial intelligence algorithms are now being considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used to optimize image processing tasks such as edge extraction; such algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 with rational functions and 2D and 3D polynomials was tested. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for the optimization of rational functions and 2D and 3D polynomials. Considering the quality of the ground control points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the genetic algorithm and the rational function method for the Worldview-2 image was 0.930 pixels. As a further artificial intelligence optimization method, neural networks were used. Using a perceptron network on the Worldview-2 image, a result of 0.84 pixels was obtained with 4 neurons in the middle layer. The conclusion was that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than the usual ones. Finally, the artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing inverse-distance-based weighting methods leads to highly accurate estimation of heights.

  18. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  19. Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.

    PubMed

    Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar

    2014-01-01

    We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. The interval Newton's method is also used for finding sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval, and basins of attraction show the effectiveness of the method.

  20. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  1. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method. PMID:24000283

  2. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    PubMed

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.

  3. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found whereby the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  4. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices using the quasi interpolation framework.

  5. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania's South Esk Hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497

  6. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
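    A closely related mechanism can be sketched compactly: shifting a sampled kernel by a fractional grid offset entirely in the Fourier domain via the shift theorem. Note the paper itself combines an integer Fourier-domain shift with a third-order Lagrange filter; the kernel below is a toy stand-in, not the Iodine-125 dose kernel.

      import numpy as np

      k = np.exp(-np.linspace(-5, 5, 64) ** 2)       # sampled kernel stand-in
      shift = 0.37                                   # fractional grid offset
      freqs = np.fft.fftfreq(64)                     # cycles per sample
      k_shifted = np.fft.ifft(np.fft.fft(k)
                              * np.exp(-2j * np.pi * freqs * shift)).real
      print(k_shifted[:3])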

  7. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and the Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used as the starting geometry for the next geometry optimization step. The cost of searching for the minimum or transition structure on the interpolated surface and iteratively updating the Hessians is usually negligible compared with a single electronic structure gradient calculation. These interpolated potential energy surfaces are often better representations of the true potential energy surface over a broader range than the local quadratic approximation used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures, both in the gas phase and in solution, show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.

  8. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC are confronted with higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of the original frames, the distortion model for interpolated frames is established according to the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR compared with the traditional methods of FRUC cascaded with coding.

  9. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting with the linear regression model, replacing the OLS error norm with the moving least squares (MLS) error norm leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in image edge structure preservation.
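    The l2-penalized estimator at the core of RLLR has a closed form; a minimal ridge-regression sketch on synthetic data follows (the moving least squares weighting and manifold terms of the full method are omitted).

      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(30, 4))
      w_true = np.array([1.0, -2.0, 0.5, 0.0])
      y = X @ w_true + 0.1 * rng.normal(size=30)

      lam = 0.5                                  # complexity penalty weight
      w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
      print(w)                                   # coefficients shrunk toward zero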

  10. Assimilation of the AVISO Altimetry Data into the Ocean Dynamics Model with a High Spatial Resolution Using Ensemble Optimal Interpolation (EnOI)

    NASA Astrophysics Data System (ADS)

    Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.

    2018-01-01

    A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with an eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The simulation results are compared with independent temperature and salinity data from the ARGO drifters.
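    In EnOI the background covariance is built once from a static ensemble of model anomalies and then used in an OI-style update; a bare-bones NumPy sketch with an entirely synthetic state, ensemble, and observations:

      import numpy as np

      rng = np.random.default_rng(3)
      n, N, m = 50, 20, 5                  # state size, ensemble size, observations
      A = rng.normal(size=(n, N))          # static ensemble of model states
      A -= A.mean(axis=1, keepdims=True)   # anomalies about the ensemble mean
      B = A @ A.T / (N - 1)                # low-rank background covariance

      H = np.zeros((m, n))
      H[np.arange(m), np.arange(0, n, 10)] = 1.0     # observe every 10th component
      R = 0.2 * np.eye(m)                            # observation-error covariance

      xb = rng.normal(size=n)                        # background state
      y = H @ xb + rng.normal(scale=0.4, size=m)     # synthetic observations

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # EnOI gain
      xa = xb + K @ (y - H @ xb)                     # analysis state
      print(np.abs(y - H @ xb).mean(), np.abs(y - H @ xa).mean())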

  11. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log-log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
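    SciPy ships no Steffen spline, but its monotone PCHIP interpolant shares the no-overshoot, extrema-only-at-data property, so it can stand in for a log-log sketch of the scheme; the sample energies and ELF values below are invented.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      E = np.array([1., 3., 10., 40., 120., 500.])       # sampled energies (eV)
      elf = np.array([0.02, 0.3, 1.1, 0.6, 0.1, 0.01])   # sampled ELF values

      f = PchipInterpolator(np.log(E), np.log(elf))      # monotone fit in log-log
      def elf_interp(e):
          return np.exp(f(np.log(e)))                    # back-transform to linear
      print(elf_interp(25.0))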

  12. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, the landmark extraction is always prone to error, which will influence the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, exact and robust registration method and have obtained satisfactory registration results.

  13. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still has large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging), which take into account only the initial values of u and v, were explored. In addition, more complex methods (generalized additive models, support vector machines and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machines and ordinary kriging. Overall, explanatory variables improve the interpolation results.

  14. Interpolation schemes for peptide rearrangements.

    PubMed

    Bauer, Marianne S; Strodel, Birgit; Fejer, Szilard N; Koslover, Elena F; Wales, David J

    2010-02-07

    A variety of methods (seven in total) comprising different combinations of internal and Cartesian coordinates are tested for interpolation and alignment in connection attempts for polypeptide rearrangements. We consider Cartesian coordinates, the internal coordinates used in CHARMM, and natural internal coordinates, each of which has been interfaced to the OPTIM code and compared with the corresponding results for united-atom force fields. We show that aligning the methylene hydrogens to preserve the sign of a local dihedral angle, rather than minimizing a distance metric, provides significant improvements with respect to connection times and failures. We also demonstrate the superiority of natural coordinate methods in conjunction with internal alignment. Checking the potential energy of the interpolated structures can act as a criterion for the choice of the interpolation coordinate system, which reduces failures and connection times significantly.

  15. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
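    The first-order case of DFI is a linear blend of two adjacent B-mode frames at a fractional position between them; a minimal sketch with random arrays standing in for real frames:

      import numpy as np

      rng = np.random.default_rng(6)
      f0, f1 = rng.random((64, 64)), rng.random((64, 64))  # adjacent B-mode frames
      alpha = 0.25                   # relative position between the two frames
      f_mid = (1 - alpha) * f0 + alpha * f1                # interpolated frame
      print(f_mid.shape, float(f_mid.mean()))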

  16. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery.
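    Kernel interpolation of a depth profile from irregular samples can be sketched as a Gaussian-weighted (Nadaraya-Watson) average; the depth and activity samples below are synthetic, not MER data.

      import numpy as np

      rng = np.random.default_rng(4)
      depth = np.sort(rng.uniform(0, 10, 80))        # irregular sample depths
      act = np.exp(-(depth - 6) ** 2) + 0.05 * rng.normal(size=80)  # trend

      def kernel_profile(d_query, d, v, width=0.5):
          w = np.exp(-0.5 * ((d_query[:, None] - d) / width) ** 2)
          return (w * v).sum(axis=1) / w.sum(axis=1) # kernel-weighted average

      grid = np.linspace(0, 10, 101)
      profile = kernel_profile(grid, depth, act)
      print(grid[profile.argmax()])                  # peak recovered near depth 6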

  17. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phases (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69° / στ = 0.045-0.048 for 600, σθ = 2.79° / στ = 0.031-0.038 for 900, σθ = 2.34° / στ = 0.023-0.026 for 1350, and σθ = 1.89° / στ = 0.021-0.023 for 1800 input images. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.

  18. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
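
    A dense toy sketch of ordinary kriging with a tapered covariance (the covariance model, taper range, and data are invented for illustration); in the paper the tapering makes the kriging system sparse, which is then solved with iterative methods rather than the dense solve shown here:

        import numpy as np

        def taper(h, r):
            # spherical taper: smoothly forces the covariance to zero beyond range r
            t = np.clip(h / r, 0.0, 1.0)
            return (1.0 - 1.5 * t + 0.5 * t ** 3) * (h < r)

        def ok_tapered(X, z, x0, sill=1.0, corr_len=1.0, taper_range=2.0):
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            C = sill * np.exp(-D / corr_len) * taper(D, taper_range)   # tapered covariance
            d0 = np.linalg.norm(X - x0, axis=1)
            c0 = sill * np.exp(-d0 / corr_len) * taper(d0, taper_range)
            n = len(z)                        # ordinary kriging system with a Lagrange
            A = np.ones((n + 1, n + 1))       # multiplier enforcing unbiased weights
            A[:n, :n] = C
            A[n, n] = 0.0
            w = np.linalg.solve(A, np.append(c0, 1.0))
            return w[:n] @ z

        X = np.random.rand(100, 2) * 10.0
        z = np.sin(X[:, 0]) + np.cos(X[:, 1])
        print(ok_tapered(X, z, np.array([5.0, 5.0])))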

  19. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.

  20. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of the complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure, due in part to the low quality of the image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to account for the nonlinearity of both the vascular structure geometry and the acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated that the shape-based nonlinear interpolation algorithm is more robust in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and a more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
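
    A minimal per-pixel sketch of generating intermediary slices with a natural cubic spline along the pullback axis (array shapes and depths are hypothetical); note the paper applies the spline to vessel shape geometry and backscatter rather than to raw pixels as done here:

        import numpy as np
        from scipy.interpolate import CubicSpline

        depths = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # acquisition depths, mm (toy)
        frames = np.random.rand(len(depths), 64, 64)    # placeholder IVUS frames
        spline = CubicSpline(depths, frames, axis=0, bc_type='natural')
        intermediary = spline(0.75)                     # interpolated slice at 0.75 mm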

  1. Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation

    PubMed Central

    Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.

    2014-01-01

    We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely. PMID:25113590
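
    For reference, the analysis update at the heart of OI-style assimilation in its standard Gaussian form (the paper replaces the Gaussian noise model with Poisson noise; the notation below follows the usual assimilation convention and is not taken from the paper):

        import numpy as np

        def oi_update(xb, B, y, H, R):
            # analysis = background + gain * innovation
            S = H @ B @ H.T + R                      # innovation covariance
            K = B @ H.T @ np.linalg.inv(S)           # optimal (Kalman) gain
            return xb + K @ (y - H @ xb)

        xb = np.zeros(4)                             # background state
        B = np.eye(4)                                # background error covariance
        H = np.eye(2, 4)                             # observe the first two components
        y = np.array([1.0, -0.5])                    # observations
        R = 0.1 * np.eye(2)                          # observation error covariance
        print(oi_update(xb, B, y, H, R))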

  2. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weights of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction as well as the spatial distribution of uncertainty, allowing the interpolation error to be estimated at any particular point. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files. This is due to the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates, even for sparser data files.

  3. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features, such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that there are significant advantages for steeply dipping events using the 5-D WABI method when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails benefit substantially from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.

  4. Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Christiansen, Ove

    2018-06-01

    We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian process regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with a GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated coupled cluster singles and doubles with perturbative triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single-point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima vary energetically only in the μEh regime.
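
    A sketch of this delta-learning idea with scikit-learn, using placeholder data and kernel settings: the GPR is trained on the difference between target-level and low-level energies, and predictions add the learned correction back onto the cheap method:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        X = np.random.rand(20, 6)                    # geometry descriptors (placeholder)
        e_low = np.sum(X ** 2, axis=1)               # cheap-method energies (toy)
        e_high = e_low + 0.01 * np.sin(X[:, 0])      # target-method energies (toy)

        gpr = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=1.0),
                                       normalize_y=True)
        gpr.fit(X, e_high - e_low)                   # learn only the correction

        x_new = np.random.rand(1, 6)
        e_pred = np.sum(x_new ** 2) + gpr.predict(x_new)[0]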

  5. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    PubMed

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourced trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality.
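
    A sketch of the two geometric descriptors (triangle edge length and Voronoi cell area) using scipy.spatial; the data and the downstream detection model are illustrative assumptions, not the paper's implementation:

        import numpy as np
        from scipy.spatial import Delaunay, Voronoi

        pts = np.random.rand(200, 2)                 # interpolated tracking points (toy)
        tri, vor = Delaunay(pts), Voronoi(pts)

        # unique Delaunay edges and their lengths (long edges suggest boundaries)
        edges = {tuple(sorted((s[i], s[(i + 1) % 3])))
                 for s in tri.simplices for i in range(3)}
        lengths = {e: np.linalg.norm(pts[e[0]] - pts[e[1]]) for e in edges}

        def voronoi_cell_area(i):
            region = vor.regions[vor.point_region[i]]
            if -1 in region or len(region) < 3:
                return np.inf                        # open cell on the data boundary
            p = vor.vertices[region]
            x, y = p[:, 0], p[:, 1]                  # shoelace formula
            return 0.5 * abs(x @ np.roll(y, 1) - y @ np.roll(x, 1))

        areas = np.array([voronoi_cell_area(i) for i in range(len(pts))])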

  7. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.

  8. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.

  9. A panoramic imaging system based on fish-eye lens

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Hao, Chenyang

    2017-10-01

    Panoramic imaging has attracted close attention as one of the key technologies for AR and VR. Mainstream panoramic imaging techniques include fish-eye lenses, image stitching, and catadioptric imaging systems. Meanwhile, fish-eye lenses are widely used in wide-area video surveillance. The advantages of fish-eye lenses are that they are easy to operate and cost less, but how to correct the image distortion of fish-eye lenses has always been a very important topic. In this paper, the image calibration algorithm for fish-eye lenses is studied by comparing several interpolation methods, including bilinear interpolation and bicubic interpolation, which are used to correct the images.
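
    Distortion correction ultimately reduces to resampling the source image at fractional coordinates; a minimal bilinear sampler (with border clamping) might look as follows, bicubic differing only in its 4x4 neighborhood and cubic weights:

        import numpy as np

        def bilinear_sample(img, x, y):
            # fractional source coordinates -> weighted average of 4 neighbors
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1 = min(x0 + 1, img.shape[1] - 1)
            y1 = min(y0 + 1, img.shape[0] - 1)
            fx, fy = x - x0, y - y0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            return (1 - fy) * top + fy * bottom

        img = np.arange(16.0).reshape(4, 4)
        print(bilinear_sample(img, 1.5, 2.25))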

  10. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  11. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare to the best known CPU implementations as well as to a well-known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored, including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune them based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators using SSE vector operations that are faster than any previously published solution.
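
    The core of tetrahedral interpolation can be sketched in a few lines: sort the fractional coordinates and walk from the cell's origin corner to the far corner one axis at a time, so only 4 of the 8 cell corners are fetched (a scalar numpy illustration; the GPU version described above additionally reorders the table and coalesces memory accesses):

        import numpy as np

        def tetra_interp(lut, p):
            # lut: (N, N, N, C) table; p: query point in [0, N-1]^3
            i = np.minimum(p.astype(int), np.array(lut.shape[:3]) - 2)
            f = p - i                                # fractional coordinates
            order = np.argsort(-f)                   # axes by decreasing fraction
            fs = np.append(f[order], 0.0)
            idx = i.copy()
            out = (1.0 - fs[0]) * lut[tuple(idx)]    # origin corner of the tetrahedron
            for k in range(3):
                idx[order[k]] += 1                   # step along the next-largest axis
                out = out + (fs[k] - fs[k + 1]) * lut[tuple(idx)]
            return out

        lut = np.random.rand(17, 17, 17, 3)          # e.g., RGB -> RGB conversion table
        print(tetra_interp(lut, np.array([3.2, 7.9, 1.4])))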

  12. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    NASA Astrophysics Data System (ADS)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods for improving observed and modeled ice motion fields, and shows the effects of assimilated motion on Arctic processes that are important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated, and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced by noise in the SSM/I motions, so blending is not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25-30% over modeled motions and 40-45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.

  13. Continuous assimilation of simulated Geosat altimetric sea level into an eddy-resolving numerical ocean model. I - Sea level differences. II - Referenced sea level differences

    NASA Technical Reports Server (NTRS)

    White, Warren B.; Tai, Chang-Kou; Holland, William R.

    1990-01-01

    The optimal interpolation method of Lorenc (1981) was used to conduct continuous assimilation of altimetric sea level differences from the simulated Geosat exact repeat mission (ERM) into a three-layer quasi-geostrophic eddy-resolving numerical ocean box model that simulates the statistics of mesoscale eddy activity in the western North Pacific. Assimilation was conducted continuously as the Geosat tracks appeared in simulated real time/space, with each track repeating every 17 days, but occurring at different times and locations within the 17-day period, as would have occurred in a realistic nowcast situation. This interpolation method was also used to conduct the assimilation of referenced altimetric sea level differences into the same model, with the referencing of the altimetric sea level differences performed using the simulated sea level. The results of this dynamical interpolation procedure are compared with those of a statistical (i.e., optimum) interpolation procedure.

  14. An online replanning method using warm start optimization and aperture morphing for flattening-filter-free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Ates,

    Purpose: In a situation where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called “warm start” optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5-10 min). Conclusions: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation except for the delineation of the target contour required by the SAM process.

  15. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  16. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by an interpolation method first and then update the whole path constantly during the optimization. However, such an interpolation method can be risky in the high-dimensional space of large molecules. On the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states during the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475

  17. An approach to unbiased subsample interpolation for motion tracking.

    PubMed

    McCormick, Matthew M; Varghese, Tomy

    2013-04-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with other parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and from 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
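
    A compact sketch of the two ingredients, assuming a similarity function sampled at integer lags: Lanczos (windowed-sinc) reconstruction with a four-sample radius and a Nelder-Mead search for the subsample peak; the function names and toy data are illustrative, not the paper's code:

        import numpy as np
        from scipy.optimize import minimize

        def lanczos(x, a=4):
            # sinc kernel windowed to a radius of a samples
            x = np.asarray(x, dtype=float)
            return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

        def reconstruct(samples, t, a=4):
            n = np.arange(len(samples))
            return np.sum(samples * lanczos(t - n, a))

        # similarity values at integer lags, true peak at 7.3 (toy data)
        cc = np.exp(-0.5 * ((np.arange(16) - 7.3) / 2.0) ** 2)
        k0 = int(np.argmax(cc))
        res = minimize(lambda t: -reconstruct(cc, t[0]), x0=[float(k0)],
                       method='Nelder-Mead')
        print(res.x[0])   # subsample peak location, close to 7.3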

  18. An n-material thresholding method for improving integerness of solutions in topology optimization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2016-04-10

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.

  19. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.

  20. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, Digital Elevation Model (DEM) data for Fuyang were combined with the samples to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and those of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129

  1. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions and, once this is done, the inverse method will compute the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.

  2. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  3. Research on connection structure of aluminum-body bus using multi-objective topology optimization

    NASA Astrophysics Data System (ADS)

    Peng, Q.; Ni, X.; Han, F.; Rhaman, K.; Ulianov, C.; Fang, X.

    2018-01-01

    Connections between the aluminum alloy components of an aluminum-body bus often fail, so a new aluminum alloy connection structure is designed based on a multi-objective topology optimization method. The shape of the outer contour of the connection structure is determined with topography optimization, a topology optimization model of the connection is established based on the SIMP density interpolation method, multi-objective topology optimization is carried out, and the design of the connecting piece is improved according to the optimization results. The results show that the mass of the aluminum alloy connector after topology optimization is reduced by 18%, the first six natural frequencies are improved, and the strength and stiffness performance are obviously improved.
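
    The SIMP density interpolation mentioned above is compact enough to state directly; a sketch with hypothetical aluminum-like values, where a penalization exponent p > 1 makes intermediate densities structurally inefficient and drives the optimizer toward 0/1 designs:

        def simp_modulus(rho, e0=70e3, e_min=1e-3, p=3.0):
            # Young's modulus of an element at volume fraction rho in [0, 1];
            # e0: solid stiffness (MPa, aluminum-like), e_min: small void floor
            return e_min + rho ** p * (e0 - e_min)

        # simp_modulus(0.5) is roughly an eighth of e0 for p = 3, so "half material"
        # buys far less than half the stiffness, discouraging gray designs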

  4. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s-1. Events arising from the 7Li(p, n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
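
    For context, the simplest digital time pick-off that interpolates between samples is a leading-edge discriminator with linear interpolation at the threshold crossing; the sketch below illustrates that generic idea and is not the SIT algorithm itself:

        import numpy as np

        def leading_edge_time(samples, threshold, fs):
            # first sample at/above threshold, then linear interpolation
            idx = int(np.argmax(samples >= threshold))
            if idx == 0:
                return 0.0
            y0, y1 = samples[idx - 1], samples[idx]
            frac = (threshold - y0) / (y1 - y0)
            return (idx - 1 + frac) / fs             # seconds

        pulse = np.array([0.0, 0.1, 0.4, 0.9, 1.0, 0.7])
        print(leading_edge_time(pulse, threshold=0.5, fs=8e9))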

  5. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.

  6. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are introduced first, but their interpolation effects need to be further improved. In the analysis and discussion of ordinary interpolation, many asymmetrical kernel interpolation methods are proposed. Compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation. However, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.

  7. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  9. Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment

    PubMed Central

    Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.

    2012-01-01

    Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map for O3 with respect to its phytotoxic potential. For evaluation we used real-time ambient O3 concentrations measured by UV absorbance at 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven approaches to spatial interpolation used for the development of maps of mean vegetation season O3 concentrations and the AOT40F exposure index for forests. The uncertainty of the maps was assessed by cross-validation analysis. The root mean square error (RMSE) of the map was used as a criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 data on altitude with subsequent interpolation of the residuals by ordinary kriging. Using the optimal method, the relative uncertainty of the map of the mean O3 concentration for the vegetation season is less than 10% in both explored years, which is a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both examined years. PMID:22566757

  10. Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Comiskey, J. G.

    1979-01-01

    This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.

  11. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
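
    The polynomial interpolation used in the one-dimensional search is the classic parabolic step: fit a quadratic through three bracketing points and jump to its minimizer (a generic sketch of that step, not ADS's FORTRAN implementation):

        def parabolic_step(f, a, b, c):
            # minimizer of the parabola through (a, f(a)), (b, f(b)), (c, f(c))
            fa, fb, fc = f(a), f(b), f(c)
            num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
            den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
            return b - 0.5 * num / den

        print(parabolic_step(lambda x: (x - 1.3) ** 2, 0.0, 1.0, 2.0))  # -> 1.3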

  12. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
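
    Transfinite interpolation itself is a short formula: the bilinear Coons patch blends the four boundary curves and subtracts the doubly counted corner contributions (a plain, unweighted sketch with made-up curve names; GENIE's weighted variant adds blending-function control):

        import numpy as np

        def transfinite_grid(bottom, top, left, right):
            # each boundary is an (n, 2) or (m, 2) polyline; shared corners must match
            m, n = len(left), len(bottom)
            s = np.linspace(0.0, 1.0, n)[None, :, None]
            t = np.linspace(0.0, 1.0, m)[:, None, None]
            interior = ((1 - t) * bottom[None] + t * top[None]
                        + (1 - s) * left[:, None] + s * right[:, None])
            corners = ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
                       + (1 - s) * t * top[0] + s * t * top[-1])
            return interior - corners                # (m, n, 2) grid coordinates

        u = np.linspace(0.0, 1.0, 21)
        bottom = np.stack([u, 0.1 * np.sin(np.pi * u)], axis=1)   # curved lower wall
        top = np.stack([u, np.ones_like(u)], axis=1)
        v = np.linspace(0.0, 1.0, 11)
        left = np.stack([np.zeros_like(v), v], axis=1)
        right = np.stack([np.ones_like(v), v], axis=1)
        grid = transfinite_grid(bottom, top, left, right)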

  13. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  14. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  15. A comparison of different interpolation methods for wind data in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

    For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance, and the quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions of Central Asia, where the network of climatological stations is often sparse. Given this insufficient database, the use of statistical methods to improve the resolution of existing climate data is of crucial importance: only this can provide a substantial basis for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments in the region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can equally be applied to all pressure levels, or whether different interpolation methods have to be applied for each pressure level. The ERA-Interim reanalysis data for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. Two classes of interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, which consider additional influencing factors, e.g. geopotential and topography. It can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machines, neural networks and ordinary kriging. Inverse distance weighting showed the worst results.
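
    Of the methods compared above, inverse distance weighting is the simplest to state: each grid point receives a weighted average of the observations, with weights decaying as an inverse power of distance. A minimal Python sketch follows; the station coordinates, wind values, and power parameter are illustrative assumptions, not the study's configuration.

      import numpy as np

      def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
          """Inverse distance weighting: each query point gets a weighted
          average of observations, with weights 1 / distance**power."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
          w = 1.0 / (d ** power + eps)      # eps guards exact coincidences
          return (w * values).sum(axis=1) / w.sum(axis=1)

      # Toy example: four stations, one query point in the middle.
      stations = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      u_wind = np.array([2.0, 3.0, 4.0, 5.0])
      print(idw(stations, u_wind, np.array([[0.5, 0.5]])))   # -> [3.5]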

  16. Optimal Interpolation scheme to generate reference crop evapotranspiration

    NASA Astrophysics Data System (ADS)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variances over the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically based climate model. We used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions and provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
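
    The analysis step underlying OI schemes such as this one is the standard update x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^(-1), where x_b is the background field, B and R are the background and observation error covariances, and H is the observation operator. A minimal Python sketch on a toy 1D grid with a Gaussian background covariance (illustrative assumptions, not the paper's model-derived statistics):

      import numpy as np

      def oi_update(xb, B, H, y, R):
          """Optimal interpolation analysis: xa = xb + K (y - H xb) with
          gain K = B H^T (H B H^T + R)^-1; also returns analysis covariance."""
          S = H @ B @ H.T + R                      # innovation covariance
          K = B @ H.T @ np.linalg.inv(S)           # OI (Kalman) gain
          xa = xb + K @ (y - H @ xb)
          A = (np.eye(len(xb)) - K @ H) @ B        # analysis error covariance
          return xa, A

      # Toy 1D grid: Gaussian background covariance, two point observations.
      n = 50
      x = np.linspace(0.0, 1.0, n)
      B = np.exp(-((x[:, None] - x[None, :]) / 0.1) ** 2)    # background cov.
      H = np.zeros((2, n)); H[0, 10] = 1.0; H[1, 40] = 1.0   # obs operator
      R = 0.05 * np.eye(2)                                   # obs error cov.
      xb = np.zeros(n)                                       # background
      y = np.array([1.0, -0.5])                              # observations
      xa, A = oi_update(xb, B, H, y, R)   # analysis pulled toward y near obs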

  17. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous gravity records, which requires an accurate tidal model and, possibly, the atmospheric pressure at the observed site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours long without the necessity of any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.

  18. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  19. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from the background through their different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of the regional signal intensities of the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely related to the threshold set in this module. In place of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolant is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that this method improves the fusion result, not only for individual images but also across a large number of images.
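
    The quadratic interpolation step described above amounts to fitting a parabola through three (threshold, objective) pairs and taking its vertex as the estimated minimizer. A minimal Python sketch; the objective function here is a placeholder for the paper's fusion-quality criterion.

      def quadratic_interp_min(f, lo, hi):
          """One step of quadratic interpolation: fit a parabola through
          the endpoints and midpoint of [lo, hi] and return its vertex,
          the estimated minimizer of f."""
          x1, x2, x3 = lo, 0.5 * (lo + hi), hi
          f1, f2, f3 = f(x1), f(x2), f(x3)
          num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
          den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
          if den == 0.0:                    # flat or degenerate fit
              return x2
          return x2 - 0.5 * num / den

      # Placeholder objective standing in for the fusion-quality-vs-
      # threshold curve; shrink the bracket and repeat to refine.
      f = lambda t: (t - 0.7) ** 2 + 0.1
      print(quadratic_interp_min(f, 0.0, 1.0))   # 0.7 (exact for a parabola)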

  20. Using optimal interpolation to assimilate surface measurements and satellite AOD for ozone and PM2.5: A case study for July 2011.

    PubMed

    Tang, Youhua; Chai, Tianfeng; Pan, Li; Lee, Pius; Tong, Daniel; Kim, Hyun-Cheol; Chen, Weiwei

    2015-10-01

    We employed an optimal interpolation (OI) method to assimilate AIRNow ozone/PM2.5 and MODIS (Moderate Resolution Imaging Spectroradiometer) aerosol optical depth (AOD) data into the Community Multi-scale Air Quality (CMAQ) model to improve the ozone and total aerosol concentrations of the CMAQ simulation over the contiguous United States (CONUS). AIRNow data assimilation was applied to the boundary layer, and MODIS AOD data were used to adjust the total column aerosol. Four OI cases were designed to examine the effects of the uncertainty settings and assimilation time; two of these cases used uncertainties that varied in time and location, or "dynamic uncertainties." More frequent assimilation and higher model uncertainties pushed the modeled results closer to the observations. Our comparison over a 24-hr period showed that ozone and PM2.5 mean biases could be reduced from 2.54 ppbV to 1.06 ppbV and from -7.14 µg/m³ to -0.11 µg/m³, respectively, over CONUS, while their correlations were also improved. Comparison with DISCOVER-AQ 2011 aircraft measurements showed that surface ozone assimilation applied to the CMAQ simulation improves regional low-altitude (below 2 km) ozone simulation. This paper describes an application of the optimal interpolation method to improve the model's ozone and PM2.5 estimates using surface measurements and satellite AOD. It highlights the use of the operational AIRNow data set, which is available in near real time, and the MODIS AOD. With a similar method, other satellite products, such as the latest VIIRS products, can also be used to improve PM2.5 prediction.

  1. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
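
    For reference, the conventional KB gridding step discussed here spreads each nonuniform k-space sample onto nearby uniform grid points with a Kaiser-Bessel window; a full pipeline would then apply an inverse FFT and deapodization. A minimal 1D Python sketch, where the kernel width and shape parameter are illustrative assumptions rather than literature-tuned values:

      import numpy as np
      from scipy.special import i0          # modified Bessel function I0

      def kb_kernel(u, width=4.0, beta=8.0):
          """Kaiser-Bessel interpolation window, supported on |u| <= width/2."""
          out = np.zeros_like(u, dtype=float)
          m = np.abs(u) <= width / 2.0
          t = np.sqrt(1.0 - (2.0 * u[m] / width) ** 2)
          out[m] = i0(beta * t) / i0(beta)
          return out

      def grid_1d(k_nonuniform, data, n_grid, width=4.0, beta=8.0):
          """Spread nonuniform k-space samples (k in [-0.5, 0.5)) onto a
          uniform grid of n_grid points by gridding convolution."""
          grid = np.zeros(n_grid, dtype=complex)
          centers = (k_nonuniform + 0.5) * n_grid     # fractional indices
          half = int(np.ceil(width / 2.0))
          for c, d in zip(centers, data):
              idx = np.arange(int(np.floor(c)) - half,
                              int(np.floor(c)) + half + 1)
              w = kb_kernel(idx - c, width, beta)
              grid[idx % n_grid] += w * d             # wrap at the edges
          return grid   # IFFT + deapodization follow in a full pipeline

      samples_k = np.random.uniform(-0.5, 0.5, 200)     # nonuniform k
      samples_v = np.exp(2j * np.pi * 5.0 * samples_k)  # toy k-space data
      g = grid_1d(samples_k, samples_v, n_grid=128)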

  2. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
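
    The role of the masking operator can be illustrated with a much simpler stand-in for the authors' double-sparsity algorithm: recovering sparse coefficients of a fixed dictionary from a trace with dead samples by iterative soft thresholding (ISTA), where the mask restricts the data misfit to observed samples. The DCT dictionary, mask fraction, and regularization weight below are illustrative assumptions.

      import numpy as np

      def ista_masked(y, mask, D, lam=0.05, iters=200):
          """Sparse recovery min ||M (y - D x)||^2 + lam * ||x||_1 by ISTA,
          where the masking operator M ignores missing samples."""
          L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of gradient
          x = np.zeros(D.shape[1])
          for _ in range(iters):
              r = mask * (y - D @ x)          # residual only where observed
              x = x + (D.T @ r) / L
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
          return D @ x                        # interpolated/denoised trace

      # Toy example: a DCT dictionary, a sparse signal, 30% dead samples.
      n = 128
      D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)
      D /= np.linalg.norm(D, axis=0)
      x_true = np.zeros(n); x_true[[3, 17]] = [1.0, -0.7]
      y = D @ x_true
      mask = (np.random.rand(n) > 0.3).astype(float)
      rec = ista_masked(mask * y, mask, D)    # fills the gaps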

  3. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

    Being a key player in space weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, the ionosphere has to be monitored. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is to use dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region. However, the TEC data derived from GPS measurements contain measurement noise and model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states; in this way, the interpolation performance of the techniques can be compared over many parameters that can be controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Networks (MLP-NN), and Radial Basis Function Neural Networks (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are initially tried on synthetic TEC surfaces for parameter and coefficient optimization and determination of error bounds. The interpolation performance of these methods is compared on synthetic TEC surfaces over the parameters of sampling pattern, number of samples, variability of the surface and trend type in the TEC surfaces. By examining the performance of the interpolation methods, it is observed that Kriging, RFP and NN all have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (like the choice of the semivariogram function for Kriging algorithms and the hidden layer and neuron numbers for MLP-NN) mostly depends on the behavior of the ionosphere at the given time instant for the desired region. The sampling pattern and number of samples are the other important parameters that may contribute to higher errors in reconstruction. For example, for all of the above listed algorithms, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and the performance significantly degrades as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using the IONOLAB-TEC data (www.ionolab.org). Kriging combined with a Kalman filter and dynamic modeling of NN are also implemented as first trials of TEC and space weather prediction.

  4. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation for monitoring weather hazards such as flash floods is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash-flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.
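
    The thin plate spline branch of the comparison above is available off the shelf; a minimal Python sketch follows. The station coordinates and PWV values are illustrative assumptions, and the kriging branch would be fitted analogously with a geostatistics package.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Illustrative GPS station coordinates (lon, lat) and PWV values (mm).
      stations = np.array([[100.2, 3.1], [101.0, 2.8], [101.7, 3.4],
                           [100.6, 4.0], [101.3, 4.6]])
      pwv = np.array([48.0, 52.5, 50.1, 46.3, 44.9])

      # Thin plate spline interpolant (the 'tps' technique in the study).
      tps = RBFInterpolator(stations, pwv, kernel="thin_plate_spline")

      # Evaluate on a regular grid over the study region.
      lon, lat = np.meshgrid(np.linspace(99.5, 102.5, 60),
                             np.linspace(2.0, 6.5, 90))
      grid = np.column_stack([lon.ravel(), lat.ravel()])
      pwv_map = tps(grid).reshape(lat.shape)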

  5. Solutions to inverse plume in a crosswind problem using a predictor - corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

    Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  6. High-Fidelity Real-Time Trajectory Optimization for Reusable Launch Vehicles

    DTIC Science & Technology

    2006-12-01

    [Only list-of-figures fragments survive in this record: "Max DR Yawing Moment History", a snapshot from the MATLAB profiler, trajectory propagation using "ode45" (Euler angles), and interpolated elevon and flap controls using various MATLAB interpolation schemes.]

  7. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation: precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes, in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
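
    The two-step technique is straightforward to prototype: a logistic model classifies wet versus dry days from neighboring predictors, and a separate regression, fitted on wet days only, supplies the amounts. The synthetic predictors and thresholds below are illustrative assumptions, not the paper's predictor set.

      import numpy as np
      from sklearn.linear_model import LogisticRegression, LinearRegression

      rng = np.random.default_rng(0)

      # Illustrative predictors: precipitation at 5 neighboring stations.
      X = rng.gamma(2.0, 2.0, size=(500, 5)) * (rng.random((500, 5)) < 0.4)
      true_w = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
      amount = X @ true_w
      wet = amount > 1.0                      # synthetic occurrence target
      y = np.where(wet, amount, 0.0)

      # Step 1: occurrence model (wet or dry) via logistic regression.
      occ = LogisticRegression(max_iter=1000).fit(X, wet)

      # Step 2: amount model fitted on wet days only.
      amt = LinearRegression().fit(X[wet], y[wet])

      # Prediction: nonzero amounts only where occurrence is predicted.
      X_new = rng.gamma(2.0, 2.0, size=(10, 5)) * (rng.random((10, 5)) < 0.4)
      pred = occ.predict(X_new) * np.clip(amt.predict(X_new), 0.0, None)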

  8. Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system

    NASA Astrophysics Data System (ADS)

    Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU

    2018-03-01

    The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 m × 0.600 m × 0.627 m to the optimized 1.854 m × 0.420 m × 0.340 m, a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool for achieving optimization in an actual engineering project during the practical design process.

  9. A study on the characteristics of retrospective optimal interpolation using an Observing System Simulation Experiment

    NASA Astrophysics Data System (ADS)

    Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho

    2013-04-01

    This study introduces retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the analysis window. The assimilation window of the ROI algorithm is gradually increased, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike QSVA, however, ROI assimilates data at post-analysis times using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The computational costs of ROI can be reduced by the eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used for the normalization of each control variable. In this study, the ROI method is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and to investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation clearly increases with time, indicating that assimilation by ROI improves the forecast. The characteristics and strengths/weaknesses of the ROI method are also investigated by conducting experiments with the 3D-Var (3-dimensional variational) and 4D-Var (4-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var; however, the difference between the two decreases gradually with time, and ROI shows a clearly better result (i.e., smaller forecast error) than 4D-Var after a 9-hour forecast.

  10. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the availability and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.

  11. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  12. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder-mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as the calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains to simulate is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while simulating a huge number of grains would be strongly time consuming. The optimization goal is the minimization of the objective function, which is the distance between the simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion uses the two quantities provided by the Kriging method, the prediction of the objective function and the estimate of its error, and is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would lead to.
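
    The Expected Improvement criterion at the heart of EGO has the closed form EI(x) = (f_min - mu(x)) * Phi(z) + sigma(x) * phi(z), with z = (f_min - mu(x)) / sigma(x), where mu and sigma are the surrogate's prediction and standard error. A minimal Python sketch; a scikit-learn Gaussian process stands in for the paper's Kriging model, and the toy objective replaces the simulated-versus-measured shear distance.

      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern

      def expected_improvement(X_cand, gp, f_min):
          """EI(x) = (f_min - mu) Phi(z) + sigma phi(z), z = (f_min - mu)/sigma."""
          mu, sigma = gp.predict(X_cand, return_std=True)
          sigma = np.maximum(sigma, 1e-12)     # avoid division by zero
          z = (f_min - mu) / sigma
          return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      # Toy objective standing in for |simulated - measured| shear response.
      f = lambda x: np.sin(3.0 * x[:, 0]) + 0.1 * x[:, 0] ** 2

      X = np.array([[0.0], [1.0], [2.0], [3.0]])    # sampled parameter values
      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, f(X))

      X_cand = np.linspace(0.0, 3.0, 200)[:, None]
      ei = expected_improvement(X_cand, gp, f(X).min())
      x_next = X_cand[np.argmax(ei)]                # next DEM run to simulate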

  13. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  14. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  15. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out using the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development and rapid development. Eleven clustering groups interact, converging mainly on spatial interpolation theory, practical applications and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research-system framework, is strongly interdisciplinary, and is widely applied in various fields.

  16. Dental implant customization using numerical optimization design and 3-dimensional printing fabrication of zirconia ceramic.

    PubMed

    Cheng, Yung-Chang; Lin, Deng-Huei; Jiang, Cho-Pei; Lin, Yuan-Min

    2017-05-01

    This study proposes a new methodology for dental implant customization consisting of numerical geometric optimization and 3-dimensional printing fabrication of zirconia ceramic. In the numerical modeling, the exogenous factors for the implant shape include the thread pitch, thread depth, maximal diameter of the implant neck, and body size; the endogenous factors are bone density, cortical bone thickness, and non-osseointegration. An integration procedure, including the uniform design method, Kriging interpolation, and a genetic algorithm, is applied to optimize the geometry of dental implants. The threshold of minimal micromotion for the optimization evaluation was 100 μm. The optimized model is imported into the 3-dimensional slurry printer to fabricate the zirconia green body (powder weakly bonded by polymer) of the implant, and the sintered implant is obtained using a 2-stage sintering process. Twelve models are constructed according to the uniform design method, and their micromotion behavior is simulated using finite element modeling. The uniform design models yield a set of exogenous factors that provides the minimal micromotion (30.61 μm), taken as a suitable model. Kriging interpolation and the genetic algorithm then modify the exogenous factors of the suitable model, resulting in an optimized model with a micromotion of 27.11 μm. Experimental results show that the 3-dimensional slurry printer successfully fabricated the green body of the optimized model, but the accuracy of the sintered part still needs to be improved. In addition, the scanning electron microscopy morphology shows a stabilized t-phase microstructure, and the average compressive strength of the sintered part is 632.1 MPa. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Development of adaptive observation strategy using retrospective optimal interpolation

    NASA Astrophysics Data System (ADS)

    Noh, N.; Kim, S.; Song, H.; Lim, G.

    2011-12-01

    Retrospective optimal interpolation (ROI) is a method used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational costs of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates for severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated; the modified eigenvectors thus show the error distribution of the control variables updated by assimilating observations, so the effects of specific observations can be estimated. In order to verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.

  18. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed on the southern side of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.

  19. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving the image quality in 3D reconstruction. In image processing, time-frequency domain transforms are efficient tools. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge-detection 3D matching interpolation method based on the wavelet transform is proposed. We combine the wavelet transform, traditional matching interpolation methods, and Sobel edge detection in our algorithm. The characteristics of the wavelet transform and the Sobel operator are exploited to treat the sub-images of the wavelet decomposition separately: the Sobel edge-detection 3D matching interpolation is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted, and the target interpolated image is then obtained through wavelet reconstruction. We perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.
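
    A greatly simplified sketch of wavelet-domain slice interpolation follows; it is not the paper's full Sobel-matching scheme, but it shows the sub-band separation: decompose two adjacent slices with a 2D DWT, blend the low-frequency approximation sub-bands, keep the detail sub-bands of the nearer slice to preserve high-frequency content, and reconstruct. PyWavelets and SciPy are assumed to be available.

      import numpy as np
      import pywt
      from scipy.ndimage import sobel    # edge maps could guide the matching

      def interp_slice(slice_a, slice_b, t=0.5, wavelet="db2"):
          """Interpolate between two CT slices in the wavelet domain:
          blend approximation coefficients, take details from the nearer
          slice (a crude stand-in for the Sobel-guided matching)."""
          ca_a, det_a = pywt.dwt2(slice_a, wavelet)
          ca_b, det_b = pywt.dwt2(slice_b, wavelet)
          ca = (1.0 - t) * ca_a + t * ca_b          # low-frequency blend
          det = det_a if t < 0.5 else det_b         # keep sharp details
          return pywt.idwt2((ca, det), wavelet)

      a = np.random.rand(64, 64)     # stand-ins for adjacent CT slices
      b = np.random.rand(64, 64)
      mid = interp_slice(a, b)       # interpolated intermediate slice
      edges = sobel(mid, axis=0)     # Sobel edge map, as used for matching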

  20. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation is one particular approach that has been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results of conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolation results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  1. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
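
    Geodesic interpolation between two planes can be sketched via principal angles: an SVD of F0^T F1 matches principal directions of the two frames, and each matched pair is rotated through its principal angle. The Python sketch below interpolates between the planes themselves and illustrates the geodesic idea, not the paper's exact frame parameterization.

      import numpy as np

      def grassmann_geodesic(F0, F1, t):
          """Orthonormal frame at fraction t along the geodesic between the
          planes spanned by orthonormal n x 2 frames F0 and F1, built from
          principal vectors and principal angles."""
          U, s, Vt = np.linalg.svd(F0.T @ F1)
          A = F0 @ U                          # principal vectors in plane 0
          B = F1 @ Vt.T                       # matched vectors in plane 1
          theta = np.arccos(np.clip(s, -1.0, 1.0))    # principal angles
          Ft = np.empty_like(A)
          for i in range(A.shape[1]):
              if theta[i] < 1e-12:            # directions already coincide
                  Ft[:, i] = A[:, i]
                  continue
              # Unit direction orthogonal to A[:, i], pointing toward B[:, i].
              Q = (B[:, i] - np.cos(theta[i]) * A[:, i]) / np.sin(theta[i])
              Ft[:, i] = (np.cos(t * theta[i]) * A[:, i]
                          + np.sin(t * theta[i]) * Q)
          return Ft                           # spans the intermediate plane

      # Toy example in R^4: rotate the (e1, e2) plane toward (e3, e4).
      F0 = np.eye(4)[:, :2]
      F1 = np.eye(4)[:, 2:]
      half = grassmann_geodesic(F0, F1, 0.5)  # halfway plane, orthonormal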

  2. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
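
    The core operation of such an interpolator is a weighted average of all samples inside the space and time windows around each analysis point. The sketch below uses Gaussian weights at the 510 km / 72 h ranges quoted above; the weighting shape and cutoff are illustrative assumptions, and the paper's actual weighting may differ.

      import numpy as np

      def space_time_average(obs_xyt, obs_val, query_xyt,
                             r_space=510.0, r_time=72.0):
          """Weighted average of samples around each query point, with
          Gaussian weights in space (km) and time (h)."""
          dx = np.linalg.norm(query_xyt[:, None, :2] - obs_xyt[None, :, :2],
                              axis=2)                          # km
          dt = np.abs(query_xyt[:, None, 2] - obs_xyt[None, :, 2])   # hours
          w = np.exp(-(dx / r_space) ** 2 - (dt / r_time) ** 2)
          w[(dx > 3 * r_space) | (dt > 3 * r_time)] = 0.0      # cut far samples
          return (w * obs_val).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

      # Toy swath: random (x, y, t) samples of a slowly varying wind field.
      rng = np.random.default_rng(1)
      obs = np.column_stack([rng.uniform(0, 2000, 500),    # x, km
                             rng.uniform(0, 2000, 500),    # y, km
                             rng.uniform(0, 240, 500)])    # t, hours
      val = np.sin(obs[:, 0] / 500.0) + 0.1 * rng.standard_normal(500)
      query = np.array([[1000.0, 1000.0, 120.0]])
      print(space_time_average(obs, val, query))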

  3. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.

  4. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.

  5. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to grid-based DEMs (G-DEM) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.

  6. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several interpolation methods, whose results depend on the environmental conditions and the input data. The usual interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, are optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can then be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with typical interpolation methods for the creation of elevations. The results show that AI methods have high potential for the interpolation of elevations: using artificial neural network algorithms for interpolation, and optimising the IDW method with GA, elevations could be estimated with high precision.

  7. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial, with a reverse linear interpolation between the EU limits providing an initial value for the desired telemetry count. The method presented here is not new; what is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This makes the method simple to understand and implement, and there are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
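
    The described inversion is easy to reproduce: evaluate the calibration polynomial and its derivative with Horner's rule, seed with a reverse linear interpolation between the EU limits, and iterate Newton-Raphson until the correction falls below one count. A minimal Python sketch; the coefficients below are illustrative assumptions, not actual Dryden calibration data.

      def horner(coeffs, x):
          """Evaluate a polynomial and its derivative at x (coeffs low-to-high)."""
          p, dp = 0.0, 0.0
          for c in reversed(coeffs):
              dp = dp * x + p
              p = p * x + c
          return p, dp

      def eu_to_counts(eu, coeffs, c_lo=0, c_hi=4095, tol=0.5, iters=20):
          """Invert a counts-to-EU calibration polynomial by Newton-Raphson,
          seeded by reverse linear interpolation between the EU limits."""
          eu_lo, _ = horner(coeffs, c_lo)
          eu_hi, _ = horner(coeffs, c_hi)
          x = c_lo + (eu - eu_lo) * (c_hi - c_lo) / (eu_hi - eu_lo)   # seed
          for _ in range(iters):
              p, dp = horner(coeffs, x)
              step = (p - eu) / dp
              x -= step
              if abs(step) < tol:        # within one count: converged
                  break
          return round(x)

      # Illustrative third-order calibration (not actual flight coefficients).
      coeffs = [-5.0, 0.01, 1.0e-7, -1.0e-12]
      print(eu_to_counts(20.0, coeffs))  # telemetry count encoding 20.0 EU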

  8. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by the temporal coherence of video. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frames have much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  9. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.

  10. A New Methodology for the Extension of the Impact of Data Assimilation on Ocean Wave Prediction

    DTIC Science & Technology

    2008-07-01

    [Only text fragments survive in this record: the analysis fields were corrected by an assimilation method developed at the Norwegian Meteorological Institute (Breivik and Reistad 1994), whose iterative solution becomes equal to that obtained by optimal interpolation (see Bratseth 1986 and Breivik and Reistad 1994); the record also mentions Kolmogorov-Zurbenko filters.]

  11. Development of WRF-ROI system by incorporating eigen-decomposition

    NASA Astrophysics Data System (ADS)

    Kim, S.; Noh, N.; Song, H.; Lim, G.

    2011-12-01

    This study presents the development of the WRF-ROI system, an implementation of Retrospective Optimal Interpolation (ROI) in the Weather Research and Forecasting (WRF) model. ROI is a data assimilation algorithm introduced by Song et al. (2009) and Song and Lim (2009). The formulation of ROI is similar to that of Optimal Interpolation (OI), but ROI iteratively assimilates an observation set at a post-analysis time into a prior analysis, possibly providing high-quality reanalysis data. The ROI method assimilates the data at the post-analysis time using a perturbation method (Errico and Raeder, 1999) without an adjoint model. In a previous study, the ROI method was applied to the Lorenz 40-variable model (Lorenz, 1996) to validate the algorithm and to investigate its capability. It is therefore necessary to apply the ROI method to a more realistic and complicated model framework such as WRF. In this research, the reduced-rank formulation of ROI is used instead of a reduced-resolution method; the computational costs can be reduced thanks to the eigen-decomposition of the background error covariance in the reduced-rank method. When a single profile of observations is assimilated in the WRF-ROI system incorporating eigen-decomposition, the analysis error tends to be reduced compared with the background error. The difference between forecast errors with and without assimilation clearly increases with time, indicating that assimilation improves the forecast.

  12. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    Time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolation coefficients depend on the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation with time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that, to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid (SG) method, and the 8th-order optimal finite difference (OFD) method, respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. Its efficiency and its capability to handle complex velocity models make it an attractive tool for imaging methods such as acoustic reverse time migration (RTM).
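
    For context, the baseline that such schemes improve upon is the conventional finite-difference solver. The sketch below is a standard second-order scheme for the 1-D scalar wave equation, not the TSSFD (which additionally propagates gradients and uses time-space optimized constant coefficients); the function and source format are illustrative.

        import numpy as np

        def wave_1d(v, dx, dt, nt, src):
            """Second-order FD for the 1-D scalar wave equation.
            v: velocity model (n,); src: (time step, grid index, amplitude)."""
            assert v.max() * dt / dx <= 1.0, "CFL stability condition violated"
            courant2 = (v * dt / dx) ** 2
            u_prev = np.zeros(len(v))
            u = np.zeros(len(v))
            for it in range(nt):
                lap = np.zeros(len(v))
                lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]  # spatial difference
                u_next = 2.0 * u - u_prev + courant2 * lap  # leapfrog update
                if it == src[0]:
                    u_next[src[1]] += src[2]                # inject source once
                u_prev, u = u, u_next
            return u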

  13. Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements

    PubMed Central

    Kim, Geun-Yeon

    2014-01-01

    The automobile's knuckle is connected to parts of the steering and suspension systems and, through its attachment to the wheel, adjusts the direction of rotation. This study replaces the existing GCD450 material with Al6082M and proposes a lightweight design of the knuckle, obtained with optimal design techniques, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were imposed as design requirements during the optimization process. A metamodel-based optimization method using the kriging interpolation method was applied. The result shows that all constraints for stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450. PMID:24995359
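
    The metamodel loop described here can be sketched with standard tools: fit a kriging (Gaussian-process) surrogate to a modest number of expensive evaluations, then optimize the cheap surrogate. The snippet below assumes scikit-learn and SciPy and substitutes a toy quadratic for the finite-element response; it illustrates the technique, not the authors' workflow.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        X = rng.uniform(-2.0, 2.0, size=(30, 2))   # sampled design variables
        y = (X ** 2).sum(axis=1)                   # stand-in for the FE response

        # kriging (Gaussian-process) metamodel of the expensive response
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X, y)

        # optimize the cheap surrogate instead of the simulator
        res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                       x0=np.array([1.5, -1.5]), bounds=[(-2, 2), (-2, 2)])
        print("surrogate optimum near", res.x)     # expected near (0, 0)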

  14. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using O(n) data points, where n is the number of parameters, and is updated over a sequence of trust regions. It avoids the slow convergence of linear models of O(n) data points while having features of quadratic models, which need O(n^2) interpolation data points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point for the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
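
    A minimal version of such a model can be fitted by linear least squares. The sketch below fits a generic multivariate (1,1) Padé-type rational model f(x) ~ (a0 + a.x) / (1 + b.x), which has 2n + 1 coefficients; the paper's trust-region management and exact model construction are not reproduced here.

        import numpy as np

        def fit_pade_11(X, f):
            """Least-squares fit of f(x) ~ (a0 + a.x) / (1 + b.x).
            X: (m, n) sample points, f: (m,) values, with m >= 2n + 1.
            Linearized as a0 + a.x - f(x) * (b.x) = f(x)."""
            m, n = X.shape
            A = np.hstack([np.ones((m, 1)), X, -f[:, None] * X])
            coef, *_ = np.linalg.lstsq(A, f, rcond=None)
            a0, a, b = coef[0], coef[1:n + 1], coef[n + 1:]
            return lambda x: (a0 + x @ a) / (1.0 + x @ b)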

  15. Research on damping properties optimization of variable-stiffness plate

    NASA Astrophysics Data System (ADS)

    Wen-kai, QI; Xian-tao, YIN; Cheng, SHEN

    2016-09-01

    This paper investigates the damping optimization design of variable-stiffness composite laminated plates, in which fibre paths can be continuously curved and fibre angles differ between regions. First, a damping prediction model is developed based on the modal dissipative energy principle and verified against modal testing results. Then, instead of fibre angles, the element stiffness and damping matrices are taken as design variables on the basis of the novel Discrete Material Optimization (DMO) formulation, greatly reducing the computation time. Finally, the modal damping capacity of arbitrary order is optimized using the Method of Moving Asymptotes (MMA). Meanwhile, a mode tracking technique is employed to investigate the variation of the modal shapes. The convergence of the interpolation function, the first-order specific damping capacity (SDC) optimization results and the variation of the modal shapes for different penalty factors are discussed. The results show that the damping properties of the variable-stiffness plate can be increased by 50%-70% after optimization.

  16. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

    Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction, and parallelization strategies using GPU computing techniques have been introduced for them. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology developed by NVIDIA. Instead of building a complex model, we optimize the existing algorithm and adapt it for CUDA implementation to gain fast computation speed. Besides using texture fetching, which speeds up interpolation, we fix the number of samples in the projection computation to keep blocks and threads synchronized, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.

  17. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
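
    Of the four techniques, MBR-Interpolation is the simplest to sketch: the bounding rectangles of the two surrounding observations are blended linearly in time. The snippet below is a bare illustration with hypothetical names; the clustering, shape-signature and time-warping machinery of the published method applies to the polygon-based variants.

        import numpy as np

        def interpolate_mbr(t0, mbr0, t1, mbr1, t):
            """Linear MBR-Interpolation between two minimum bounding
            rectangles (xmin, ymin, xmax, ymax); returns the MBR at time t."""
            w = (t - t0) / (t1 - t0)       # temporal weight in [0, 1]
            return tuple((1 - w) * np.asarray(mbr0, float)
                         + w * np.asarray(mbr1, float))

        # rectangle halfway between two observed event geometries
        print(interpolate_mbr(0.0, (0, 0, 2, 2), 10.0, (4, 4, 8, 10), 5.0))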

  18. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology, and diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and preserves tensor anisotropy, but it does not revise the tensor's size. The present study puts forward an improved spectral quaternion interpolation method built on the traditional one. First, we decompose the diffusion tensors, representing the tensor direction by a quaternion. Then we revise the size and direction of the tensor separately, according to the situation. Finally, we obtain the tensor at the interpolation point by calculating a weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the tensor determinant, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation tool for diffusion tensor image processing.
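
    The Log-Euclidean baseline that the improved method is compared against is compact enough to sketch: interpolate the matrix logarithms of the tensors and exponentiate back, which keeps the result symmetric positive-definite. The snippet below shows that baseline (not the spectral quaternion method) on two synthetic tensors.

        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_interp(T1, T2, w):
            """Log-Euclidean interpolation between two diffusion tensors
            (symmetric positive-definite matrices), weight w in [0, 1]."""
            L = (1 - w) * logm(T1) + w * logm(T2)   # blend in log space
            return expm(L)                          # map back to tensors

        T1, T2 = np.diag([3.0, 1.0, 1.0]), np.diag([1.0, 1.0, 3.0])
        print(log_euclidean_interp(T1, T2, 0.5))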

  19. Local-scale spatial modelling for interpolating climatic temperature variables to predict agricultural plant suitability

    NASA Astrophysics Data System (ADS)

    Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minasny, Budiman

    2016-05-01

    Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites to data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R2) of 0.92, 0.97 and 0.95 and root-mean-square errors (RMSE) of 1.30, 0.80 and 1.31 °C for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R2 and reduced RMSE by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature, and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be slightly better due to the high degree of spatial autocorrelation between logger sites.
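
    The traditional baseline in this comparison is direct linear regression between the short-term logger series and the long-term station series, scored by R2 and RMSE. The snippet below sketches that baseline with placeholder data; the RK, RT and RF calibrations add spatial covariates on top of such a relationship.

        import numpy as np

        # placeholder series: logger (short-term) vs nearby long-term station
        logger = np.array([3.1, 5.4, 9.8, 14.2, 18.0])
        station = np.array([2.5, 4.9, 9.1, 13.6, 17.2])

        slope, intercept = np.polyfit(station, logger, 1)   # calibration fit
        pred = slope * station + intercept
        rmse = np.sqrt(np.mean((pred - logger) ** 2))
        r2 = 1 - np.sum((logger - pred) ** 2) / np.sum((logger - logger.mean()) ** 2)
        print(f"RMSE = {rmse:.2f} degC, R2 = {r2:.3f}")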

  20. Combining geostatistics with Moran's I analysis for mapping soil heavy metals in Beijing, China.

    PubMed

    Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo

    2012-03-01

    Production of high-quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation information obtained from Moran's I analysis was used to supplement traditional geostatistics. From the Moran's I analysis, four characteristic distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of the semivariance demonstrated that using as the active lag distance the two distances at which Moran's I and the standardized Moran's I, Z(I), reach a maximum can improve the fitting accuracy of the semivariance. Spatial interpolation was then produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and of the measured and predicted pollution status showed that the method combining geostatistics with Moran's I analysis outperforms traditional geostatistics. Thus, Moran's I analysis is a useful complement to geostatistics for improving the spatial interpolation accuracy of heavy metals.
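
    Global Moran's I, the statistic used here to pick the characteristic distances, is I = (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2, where z are the centered values and W is the sum of the spatial weights. A minimal NumPy version with illustrative names:

        import numpy as np

        def morans_i(x, W):
            """Global Moran's I for values x (n,) and spatial weight
            matrix W (n, n) with zero diagonal."""
            z = x - x.mean()
            return len(x) * (z @ W @ z) / (W.sum() * (z @ z))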

  2. Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen

    2017-04-01

    Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to be considered if sustainable agricultural and rural development is to be promoted. One example is the lack of information on pesticide drift deposition on soil, which can support pesticide risk assessment for soil, surface water, bystanders and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical drift deposition data for a pesticide surrogate, the tracer Uranine, were obtained in one of the highest potato-producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques, i.e. Thiessen polygons, inverse distance squared weighting, co-kriging, pair-copulas and drift curves depending on distance and wind speed, were tested and optimized. Results of the best-performing spatial interpolation methods, suitable curves to assess mean relative drift, and implications for risk assessment studies will be presented.
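
    Among the tested techniques, inverse distance squared weighting is the simplest to write down: each prediction is a distance-decayed weighted average of the observations. A generic sketch (power 2 by default, not the study's tuned configuration):

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            """Inverse distance weighting: w_i = 1 / d_i**power.
            xy_obs: (n, 2) sample locations, z_obs: (n,) values,
            xy_new: (m, 2) prediction locations."""
            d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-12) ** power   # avoid divide-by-zero
            return (w @ z_obs) / w.sum(axis=1)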

  3. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The presented interpolation method is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
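
    The interpolation step itself is easy to sketch once a velocity field is available: warp both neighbouring slices toward the intermediate position and average the intensities of corresponding points. The snippet below, assuming SciPy and a displacement field already estimated on the target grid, is a simplified stand-in for the paper's optical flow-based scheme.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def flow_interpolate(img0, img1, u, v, t=0.5):
            """Intermediate image at fraction t between img0 and img1.
            (u, v): per-pixel displacement (columns, rows) from img0 to
            img1; both neighbours are warped and their values averaged."""
            rows, cols = np.mgrid[0:img0.shape[0], 0:img0.shape[1]].astype(float)
            p0 = map_coordinates(img0, [rows - t * v, cols - t * u],
                                 order=1, mode='nearest')
            p1 = map_coordinates(img1, [rows + (1 - t) * v, cols + (1 - t) * u],
                                 order=1, mode='nearest')
            return (1 - t) * p0 + t * p1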

  4. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by subtraction of nearly equal values. Resonant pole and zero cancellation schemes are also developed that increase the accuracy and efficiency of the interpolation method. Techniques for selecting interpolation points are discussed as well.

  5. A general tool for the evaluation of spiral CT interpolation algorithms: revisiting the effect of pitch in multislice CT.

    PubMed

    Bricault, Ivan; Ferretti, Gilbert

    2005-01-01

    While multislice spiral computed tomography (CT) scanners are provided by all major manufacturers, their specific interpolation algorithms have rarely been evaluated. Because the results published so far relate to distinct particular cases and differ significantly, there are contradictory recommendations about the choice of pitch in clinical practice. In this paper, we present a new tool for the evaluation of multislice spiral CT z-interpolation algorithms and apply it to the four-slice case. Our software is based on the computation of a "Weighted Radiation Profile" (WRP) and compares the WRP to an expected ideal profile in terms of widening and heterogeneity. It provides a unique scheme for analyzing a large variety of spiral CT acquisition procedures. Freely chosen parameters include: number of detector rows, detector collimation, nominal slice width, helical pitch, and interpolation algorithm with any filter shape and width. Moreover, it is possible to study any longitudinal and off-isocenter position. Theoretical and experimental results show that the WRP, more than the Slice Sensitivity Profile (SSP), provides a comprehensive characterization of interpolation algorithms. WRP analysis demonstrates that commonly "preferred helical pitches" are actually nonoptimal with respect to the formerly distinguished z-sampling gap reduction criterion. It is also shown that "narrow filter" interpolation algorithms do not permit a general preferred-pitch discussion, since they present poor properties with large longitudinal and off-center variations. In the more stable case of "wide filter" interpolation algorithms, SSP width and WRP widening are shown to be almost constant. Therefore, optimal properties should no longer be sought in terms of these criteria. On the contrary, WRP heterogeneity is related to variable artifact phenomena and can pertinently characterize optimal pitches. In particular, the exemplary interpolation properties of the pitch = 1 "wide filter" mode are demonstrated.

  6. A case study of aerosol data assimilation with the Community Multi-scale Air Quality Model over the contiguous United States using 3D-Var and optimal interpolation methods

    NASA Astrophysics Data System (ADS)

    Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol

    2017-12-01

    This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both the GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilation of observations with both GSI and OI generally helps reduce the prediction biases and improves the correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with diameter < 2.5 µm) leads to stronger increments in surface PM2.5 than the assimilation of MODIS AOD at the 550 nm wavelength. In contrast, we find a stronger OI impact of the MODIS AOD on surface aerosols at 18:00 UTC compared to the surface PM2.5 OI. GSI produces smoother results and yields overall better correlation coefficients and root mean squared errors (RMSE). It should be noted that the 3D-Var and OI methods used here differ in several major respects besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.
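
    The 3D-Var analysis that GSI performs amounts to minimizing J(x) = 0.5 (x - xb)' B^-1 (x - xb) + 0.5 (y - Hx)' R^-1 (y - Hx). The toy sketch below, assuming small dense matrices and a linear observation operator, performs that minimization with SciPy; operational GSI instead works in a preconditioned control-variable space.

        import numpy as np
        from scipy.optimize import minimize

        def var3d(xb, Binv, y, H, Rinv):
            """Toy 3D-Var: minimize the background + observation misfit."""
            def J(x):
                db, do = x - xb, y - H @ x
                return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do
            def gradJ(x):
                return Binv @ (x - xb) - H.T @ (Rinv @ (y - H @ x))
            return minimize(J, xb, jac=gradJ, method="L-BFGS-B").x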

  7. A Data Assimilation System For Operational Weather Forecast In Galicia Region (nw Spain)

    NASA Astrophysics Data System (ADS)

    Balseiro, C. F.; Souto, M. J.; Pérez-Muñuzuri, V.; Brewster, K.; Xue, M.

    Regional weather forecast models, such as the Advanced Regional Prediction System (ARPS), applied over complex environments with varying local influences require an accurate meteorological analysis that includes all available local meteorological measurements. In this work, the ARPS Data Analysis System (ADAS) (Xue et al. 2001) is applied as a three-dimensional weather analysis tool to include surface station and rawinsonde data, with the NCEP AVN forecasts as the analysis background. Currently in ADAS, a set of five meteorological variables is considered during the analysis: horizontal grid-relative wind components, pressure, potential temperature and specific humidity. The analysis is used for high-resolution numerical weather prediction for the Galicia region. The analysis method used in ADAS is based on the successive correction scheme of Bratseth (1986), which asymptotically approaches the result of a statistical (optimal) interpolation, but at lower computational cost. As in the optimal interpolation scheme, the Bratseth interpolation method can take into account the relative error between background and observational data; it is therefore relatively insensitive to large variations in data density and can integrate data of mixed accuracy. The method can be applied economically in an operational setting, providing significant improvement over the background model forecast as well as over any analysis without high-resolution local observations. One-way nesting is applied for the weather forecast in the Galicia region, and the use of this assimilation system in both domains yields better results, not only in the initial conditions but also over all forecast periods. Bratseth, A.M. (1986): "Statistical interpolation by means of successive corrections." Tellus, 38A, 439-447. Souto, M. J., Balseiro, C. F., Pérez-Muñuzuri, V., Xue, M., Brewster, K. (2001): "Impact of cloud analysis on numerical weather prediction in the Galician region of Spain." Submitted to Journal of Applied Meteorology. Xue, M., Wang, D., Gao, J., Brewster, K., Droegemeier, K. K. (2001): "The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation." Meteor. Atmos. Physics, accepted.
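
    The Bratseth scheme referenced above can be sketched as repeated correlation-weighted corrections whose weights are normalized by the row sums of the observation-to-observation correlation matrix; with observation error neglected, the iteration tends toward the optimal-interpolation solution. A simplified illustration with hypothetical names and Gaussian correlations assumed:

        import numpy as np

        def gaussian_corr(a, b, L=100.0):
            """Correlations between point sets a (na, 2) and b (nb, 2)."""
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
            return np.exp(-0.5 * (d / L) ** 2)

        def successive_corrections(xb_grid, xb_obs, obs, grid_xy, obs_xy,
                                   n_iter=25):
            """Bratseth-style successive corrections (obs error omitted)."""
            C_go = gaussian_corr(grid_xy, obs_xy)  # grid-to-obs correlations
            C_oo = gaussian_corr(obs_xy, obs_xy)   # obs-to-obs correlations
            m = C_oo.sum(axis=1)                   # Bratseth normalization
            xa_grid, xa_obs = xb_grid.astype(float), xb_obs.astype(float)
            for _ in range(n_iter):
                resid = (obs - xa_obs) / m         # normalized innovations
                xa_grid = xa_grid + C_go @ resid
                xa_obs = xa_obs + C_oo @ resid
            return xa_grid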

  8. Effect of the precipitation interpolation method on the performance of a snowmelt runoff model

    NASA Astrophysics Data System (ADS)

    Jacquin, Alexandra

    2014-05-01

    Uncertainties in the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater for mountain catchments because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied to the Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110 [km2] and its elevation ranges from 950 [m.a.s.l.] to 5930 [m.a.s.l.]. The local meteorological network is sparse, with all precipitation gauges located below 3000 [m.a.s.l.]. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. The interpolation methods applied include kriging with external drift (KED), an optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic function fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimates than KED in this region. Using the input precipitation obtained by each method, the HBV parameters are calibrated with respect to Nash-Sutcliffe efficiency. The performance of HBV in the study catchment is not satisfactory: although volumetric errors are modest, efficiency values are lower than 70%. Discharge estimates resulting from the application of TP, MFF and IDW obtain similar model efficiencies and volumetric errors. These error statistics improve moderately if KED or OIM are used instead. Even though the quality of the precipitation estimates differs between interpolation methods, the results of this study show that these differences do not necessarily produce noticeable changes in HBV's performance statistics. This situation arises because the calibration of the model parameters allows some degree of compensation for deficient areal precipitation estimates, mainly through the adjustment of model-simulated evaporation and glacier melt, as revealed by the analysis of water balances. In general, even if there is good agreement between model-estimated and observed discharge, this information is not sufficient to assert that the internal hydrological processes of the catchment are properly simulated by a watershed model. Other calibration criteria should be incorporated if a more reliable representation of these processes is desired. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279. The HBV Light software used in this study was kindly provided by J. Seibert, Department of Geography, University of Zürich.

  9. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for the reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build the different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. The recovery results are evaluated by the mean and maximum color difference values under other sets of standard light sources. The mean and maximum root mean square (RMS) errors between the reconstructed and actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent on the interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach: because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and reconstructed reflectance spectra, as well as in CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.

  10. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package, adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
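
    The decomposition idea is easy to demonstrate: a 2-D resize becomes two passes of 1-D interpolation, one along rows and then one along columns. The sketch below uses plain linear interpolation as a stand-in for the modified control grid interpolator of DMCGI.

        import numpy as np

        def resize_separable(img, new_h, new_w):
            """Resize a 2-D image via two independent 1-D passes."""
            h, w = img.shape
            xs = np.linspace(0, w - 1, new_w)
            tmp = np.array([np.interp(xs, np.arange(w), row) for row in img])
            ys = np.linspace(0, h - 1, new_h)
            return np.array([np.interp(ys, np.arange(h), col)
                             for col in tmp.T]).T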

  12. High-Throughput Assay Optimization and Statistical Interpolation of Rubella-Specific Neutralizing Antibody Titers

    PubMed Central

    Lambert, Nathaniel D.; Pankratz, V. Shane; Larrabee, Beth R.; Ogee-Nwankwo, Adaeze; Chen, Min-hsin; Icenogle, Joseph P.

    2014-01-01

    Rubella remains a social and economic burden due to the high incidence of congenital rubella syndrome (CRS) in some countries. For this reason, an accurate and efficient high-throughput measure of antibody response to vaccination is an important tool. In order to measure rubella-specific neutralizing antibodies in a large cohort of vaccinated individuals, a high-throughput immunocolorimetric system was developed. Statistical interpolation models were applied to the resulting titers to refine quantitative estimates of neutralizing antibody titers relative to the assayed neutralizing antibody dilutions. This assay, including the statistical methods developed, can be used to assess the neutralizing humoral immune response to rubella virus and may be adaptable for assessing the response to other viral vaccines and infectious agents. PMID:24391140

  13. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R-wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing-RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.

  14. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method using an optimization scheme has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. The method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method in the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.
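
    The quadratic interpolation step can be sketched in one variable: fit a parabola through three trial points and move to its vertex, iterating per parameter (here, chlorophyll concentration or scattering coefficient). A minimal illustration:

        def quadratic_step(f, x0, x1, x2):
            """Vertex of the parabola through (x0,f0), (x1,f1), (x2,f2);
            one update of a 1-D quadratic-interpolation search."""
            f0, f1, f2 = f(x0), f(x1), f(x2)
            num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
            den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
            return x1 - 0.5 * num / den

        # minimizing a misfit such as (model(c) - observed)**2 in one variable
        print(quadratic_step(lambda c: (c - 3.2) ** 2, 1.0, 2.0, 4.0))  # -> 3.2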

  15. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  16. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

    This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, and the reorientation trajectory optimization problem is formulated in the flat output space with all differential constraints eliminated. The flat output optimization problem is then transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal mapping and barycentric rational interpolation techniques to overcome the effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating trajectory tracking as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and rapidly compute the feedback control at each control cycle via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.
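
    The interpolation machinery involved is standard enough to demonstrate: barycentric interpolation at Chebyshev points is numerically stable where naive polynomial forms are not, which is the kind of conditioning issue the conformal map and barycentric rational techniques address. A short, generic SciPy illustration (not the paper's pseudospectral code):

        import numpy as np
        from scipy.interpolate import BarycentricInterpolator

        n = 16
        nodes = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Lobatto points
        f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)      # Runge's function
        interp = BarycentricInterpolator(nodes, f(nodes))
        x = np.linspace(-1.0, 1.0, 201)
        print("max interpolation error:", np.max(np.abs(interp(x) - f(x))))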

  17. Development and evaluation of a new 3-D digitization and computer graphic system to study the anatomic tissue and restoration surfaces.

    PubMed

    Dastane, A; Vaidyanathan, T K; Vaidyanathan, J; Mehra, R; Hesby, R

    1996-01-01

    It is necessary to visualize and reconstruct anatomic tissue surfaces accurately for a variety of oral rehabilitation applications, such as surface wear characterization, automated fabrication of dental restorations, and assessing the accuracy of reproduction of impression and die materials. In this investigation, a 3-D digitization and computer-graphic system was developed for surface characterization. The hardware consists of a profiler assembly for digitization in an MTS biomechanical test system with an artificial mouth, an IBM PS/2 model 70 computer for data processing and a Hewlett-Packard laser printer for hardcopy output. The software includes a commercially available Surfer 3-D graphics package, a public-domain data-fitting alignment program and an in-house Pascal program for intercommunication plus some other limited tasks. Surfaces were digitized before and after rotation by angular displacement, the digital data were interpolated by Surfer to provide a data grid, and the surfaces were computer-graphically reconstructed. Misaligned surfaces were aligned by the data-fitting alignment software under different choices of parameters. The effect of different interpolation parameters (e.g. grid size, method of interpolation) and extent of rotation on the alignment accuracy was determined. The results indicate that improved alignment accuracy results from optimization of the interpolation parameters and minimization of the initial misorientation between the digitized surfaces. The method provides important advantages for surface reconstruction and visualization, such as overlay of sequentially generated surfaces and accurate alignment of pairs of surfaces with small misalignment.

  18. Prediction of sonic boom from experimental near-field overpressure data. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Reiners, S. J.

    1975-01-01

    A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.

  19. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation

    PubMed Central

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-01-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522

  20. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method for an active binocular three-dimensional measurement system. Traditional dense stereo-matching algorithms are time-consuming due to the long search range and the high complexity of the similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band-limited patterns. In order to prune the search range, we execute an initial matching before the exhaustive matching, and we evaluate the similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud is obtained by triangulation and subpixel interpolation. The experimental results verify the computational efficiency and matching accuracy of the method.
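
    The logical-comparison idea can be sketched directly: per-pixel binary code vectors built from the N projected patterns are compared with XOR (a Hamming distance) rather than a floating-point similarity, and the integer disparity is refined by parabola fitting for subpixel precision. The names and array layout below are hypothetical, not the paper's implementation.

        import numpy as np

        def match_disparity(code_l, code_r, y, x, search):
            """Disparity at pixel (y, x); code_l/code_r are (H, W, N)
            boolean pattern codes and x - search must stay in bounds."""
            costs = np.array([np.count_nonzero(code_l[y, x] ^ code_r[y, x - d])
                              for d in range(search)])
            d = int(np.argmin(costs))
            if 0 < d < search - 1:                 # subpixel refinement
                c0, c1, c2 = costs[d - 1], costs[d], costs[d + 1]
                if c0 - 2 * c1 + c2 != 0:
                    d = d + 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
            return d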

  1. Well-posedness and decay for the dissipative system modeling electro-hydrodynamics in negative Besov spaces

    NASA Astrophysics Data System (ADS)

    Zhao, Jihong; Liu, Qiao

    2017-07-01

    In Guo and Wang (2012) [10], Y. Guo and Y. Wang developed a general new energy method for proving the optimal time decay rates of the solutions to dissipative equations. In this paper, we generalize this method in the framework of homogeneous Besov spaces. Moreover, we apply this method to a model arising from electro-hydrodynamics, which is a strongly coupled system of the Navier-Stokes equations and the Poisson-Nernst-Planck equations through charge transport and external forcing terms. We show that some weighted negative Besov norms of solutions are preserved along time evolution, and obtain the optimal time decay rates of the higher-order spatial derivatives of solutions by the Fourier splitting approach and the interpolation techniques.

  2. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology, rooted in constrained convex optimization, for designing color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is an iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which designs either node locations or output values exclusively. We also develop new analytical results for the node location optimization problem, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives, improving color transform accuracy even with a much smaller number of nodes.
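
    Interpolation on a simplex reduces to barycentric coordinates: solve a small linear system for the weights of the n + 1 vertices and blend the stored outputs. The sketch below illustrates this for a single RGB tetrahedron with rough, illustrative CIELAB node values; the paper's actual contribution, joint optimization of node placement and node outputs, is not shown.

        import numpy as np

        def simplex_interpolate(vertices, values, x):
            """Barycentric interpolation inside an n-simplex.
            vertices: (n+1, n) corner coordinates, values: (n+1, k) outputs."""
            V = np.asarray(vertices, float)
            A = np.vstack([V.T, np.ones(len(V))])   # constraint: sum(lam) = 1
            b = np.append(np.asarray(x, float), 1.0)
            lam = np.linalg.solve(A, b)             # barycentric coordinates
            return lam @ np.asarray(values, float)

        verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]   # RGB corners
        labs = [(0, 0, 0), (53.2, 80.1, 67.2), (97.1, -21.6, 94.5), (100, 0, 0)]
        print(simplex_interpolate(verts, labs, (0.75, 0.5, 0.25)))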

  3. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    Image interpolation problems are encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology may be either classical or based on neural networks, depending on the demands of the specific interpolation problem.

  4. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua on a regular grid, which constitutes a continuous model of the velocity field for the territory of Poland. Three software packages were used for the interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was identified. All the methods were assessed for being local or global, for the possibility of computing errors of the interpolated values, for the explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows the computation of errors of the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. Statistics comprising the minimum, maximum and mean values of the interpolated North and East velocity residuum components were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated velocity components and their residua are presented in the form of tables and bar diagrams.

  5. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) The problem of interpolating data that is not sampled on a uniform grid, that is noisy, and that contains gaps is a difficult one. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy but shows some artifacts in shaded relief images. (b) Simplicial interpolator - uses the plane through the three points surrounding the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and performs a weighted least squares surface fit.

  6. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques named as Genetic Algorithm (GA). The idea of GA has been formed to optimize the control parameters in the description of newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices along with traditional Peak Signal-to-Noise Ratio (PSNR) are employed as image quality metrics to analyze and compare the outcomes of approach offered in this work, with three of the present digital image interpolation schemes. The upshots show that the proposed scheme is better choice to deal with the problems associated to image interpolation. PMID:28640906

  8. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the reliance on estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be taken to express technical uncertainties; however, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block with the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach. The technique has two spaces: feasible drill hole configurations, with minimization of interpolation variance, and drill hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach can be used to plan a new drilling campaign.

  9. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which interpolation method is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
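
    The survey's scale-then-rescale protocol is easy to reproduce with standard tools. A sketch using scipy.ndimage.zoom (order 0 = nearest, 1 = linear, 3 = cubic) with PSNR as one comparison metric; the synthetic image and factors are illustrative, not the authors' pipeline.

        import numpy as np
        from scipy import ndimage

        def psnr(a, b, peak=255.0):
            mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
            return 10 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(1)
        image = ndimage.gaussian_filter(rng.uniform(0, 255, (256, 256)), 3)

        for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic")]:
            small = ndimage.zoom(image, 0.25, order=order)   # downscale
            back = ndimage.zoom(small, 4.0, order=order)     # rescale to original
            back = back[:256, :256]                          # guard size drift
            print(f"{name:8s} PSNR = {psnr(image, back):.2f} dB")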

  10. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reductions were investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. A second approach based on regular perturbation analysis was subsequently investigated. It also failed because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  11. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when comparable numerical accuracy is required. Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effects of the quadrature rule on the mass and stiffness matrices are studied. • The cubature points all have positive integration weights. • The method avoids inverting a wide-bandwidth mass matrix. • The accuracy of the TSEM is improved by about one order of magnitude.

  12. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.

  13. Introduction to Numerical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods, in which repetitive algorithms are used to obtain approximate solutions to mathematical problems: sorting, searching, root finding, optimization, interpolation, extrapolation, least-squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
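
    As a concrete instance of the repetitive algorithms listed above, a bisection root finder; this is standard textbook material, not taken from the slides themselves.

        def bisect(f, a, b, tol=1e-10):
            """Bisection: halve the bracket [a, b] until it is shorter than tol.
            Requires f(a) and f(b) to have opposite signs."""
            fa = f(a)
            while b - a > tol:
                m = 0.5 * (a + b)
                if fa * f(m) <= 0:
                    b = m
                else:
                    a, fa = m, f(m)
            return 0.5 * (a + b)

        # Approximate sqrt(2) as the root of x^2 - 2 on [1, 2].
        print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))  # ~1.41421356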

  14. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both the nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
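
    A sketch of how the choice of interpolation kernel enters an orthographic MIP: each ray is sampled at sub-voxel depths with scipy.ndimage.map_coordinates (order 0/1/3 = nearest/linear/cubic) and the maximum along the ray is kept. The volume and ray geometry are synthetic stand-ins, not the paper's vessel model.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)
        vol = ndimage.gaussian_filter(rng.uniform(0, 1, (64, 64, 64)), 2)

        def mip_along_z(vol, order, step=0.4):
            """Orthographic MIP: sample each (x, y) ray at sub-voxel depths
            with the chosen kernel, then take the max along the ray."""
            x, y = np.meshgrid(np.arange(vol.shape[0]),
                               np.arange(vol.shape[1]), indexing="ij")
            best = np.full(x.shape, -np.inf)
            for z in np.arange(0, vol.shape[2] - 1, step):
                coords = np.stack([x, y, np.full(x.shape, z)])
                sample = ndimage.map_coordinates(vol, coords, order=order)
                best = np.maximum(best, sample)
            return best

        for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic")]:
            print(name, mip_along_z(vol, order).mean())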

  16. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods greatly affects machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion, accounting for multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
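
    To see the fluctuation the paper targets, consider the common first-order Taylor parameter update, sketched below on a plain cubic B-spline standing in for a NURBS path (weights omitted). The realized feedrate deviates from the commanded value V wherever the curve's parametrization speed varies; the paper's quartic-equation step is designed to suppress exactly this deviation. Knots, control points, V and T are all illustrative.

        import numpy as np
        from scipy.interpolate import BSpline

        # Cubic B-spline curve standing in for a NURBS path (weights omitted).
        t = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], float)
        c = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 1]], float)
        curve = BSpline(t, c, 3)
        dcurve = curve.derivative()

        V, T = 2.0, 0.01        # commanded feedrate and interpolation period
        u, us = 0.0, [0.0]
        while u < 3.0:
            u = u + V * T / np.linalg.norm(dcurve(u))   # first-order Taylor step
            us.append(min(u, 3.0))

        # Realized feedrate per period; deviations from V are the fluctuation.
        pts = curve(np.array(us))
        feed = np.linalg.norm(np.diff(pts, axis=0), axis=1) / T
        print("max fluctuation: %.3f%%" % (100 * np.max(np.abs(feed - V)) / V))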

  17. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

    Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite-image-to-irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
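
    The analysis step of such a routine is the standard optimal interpolation (BLUE) update, x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b). A minimal 1-D sketch with a Gaussian background covariance and synthetic numbers in place of the paper's satellite and sensor data:

        import numpy as np

        # 1-D toy: background = satellite-derived irradiance on a grid,
        # y = a few ground point measurements. Covariances are illustrative.
        n = 100
        x = np.linspace(0, 99, n)
        background = 800 + 50 * np.sin(x / 10)            # satellite estimate
        obs_idx = np.array([10, 45, 80])
        y = background[obs_idx] + np.array([40.0, -60.0, 25.0])

        H = np.zeros((len(obs_idx), n))
        H[np.arange(len(obs_idx)), obs_idx] = 1.0          # observation operator
        L, sigma_b, sigma_o = 15.0, 30.0, 5.0
        B = sigma_b**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
        R = sigma_o**2 * np.eye(len(obs_idx))

        # Optimal interpolation analysis update (BLUE):
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        analysis = background + K @ (y - H @ background)
        print(analysis[obs_idx] - background[obs_idx])     # pulled toward y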

  18. Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)

    DTIC Science & Technology

    2010-06-01

    the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1, +1] ... which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0 ... the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) a collision-free criterion is developed and

  19. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    A large station interval leads to low-resolution images and sometimes prevents imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion suffers when large time differences occur between adjacent sites, which is the case we are most concerned with, and we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase in each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. The results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.

  20. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    NASA Astrophysics Data System (ADS)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation. Hence, handling the occurrence of outliers requires special attention, and it is important to determine suitable ways of treating outliers to ensure that the quality of the analyzed data is high. This paper discusses an alternative method of treating outliers via linear interpolation. Treating an outlier as a missing value in the dataset allows the application of the interpolation method to fill in the outliers, enabling a comparison of the data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 to December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better results than the original time series when forecasting with both Box-Jenkins and neural network approaches.
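
    The mechanics of the treatment are simple: flag outliers, treat them as missing, and fill the gaps linearly. A sketch in which outlier detection uses a robust z-score; the detection rule is our assumption (the paper only specifies treating the outlier as a missing value).

        import numpy as np

        def treat_outliers(series, z_thresh=3.0):
            """Flag outliers by robust z-score (assumed detection rule), mark
            them missing, and fill the gaps by linear interpolation."""
            x = np.asarray(series, float).copy()
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            if mad == 0:
                mad = 1.0
            bad = np.abs(x - med) / (1.4826 * mad) > z_thresh
            idx = np.arange(len(x))
            x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
            return x

        data = np.array([10., 11., 10., 250., 12., 11., 9., -180., 10.])
        print(treat_outliers(data))   # spikes replaced by interpolated values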

  1. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to the design of outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and of almost 30 dB when using two barriers, is achieved compared to conventional sound barriers.

  2. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    NASA Astrophysics Data System (ADS)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors was deployed along the Pacific coast in northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining the numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method in 2D.
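
    The contrast between the two schemes lies in how the gain is obtained: optimal interpolation fixes it in time, while the Kalman filter evolves it with the forecast covariance. A minimal linear Kalman filter skeleton on a toy 1-D grid, with a shift-by-one operator standing in for the shallow-water propagator and purely synthetic observations and covariances:

        import numpy as np

        n = 50
        M = np.roll(np.eye(n), 1, axis=1)         # toy advection: shift by one
        Q = 0.01 * np.eye(n)                       # model-error covariance
        H = np.eye(n)[::5]                         # observe every 5th grid point
        R = 0.1 * np.eye(H.shape[0])               # observation-error covariance

        x, P = np.zeros(n), np.eye(n)
        rng = np.random.default_rng(3)
        for t in range(20):
            x, P = M @ x, M @ P @ M.T + Q                     # forecast step
            y = rng.normal(0.0, 0.3, H.shape[0])              # synthetic obs
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # evolving gain
            x = x + K @ (y - H @ x)                           # analysis step
            P = (np.eye(n) - K @ H) @ P
        print(np.trace(P))   # analysis uncertainty after 20 cycles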

  3. Optimization of pressure probe placement and data analysis of engine-inlet distortion

    NASA Astrophysics Data System (ADS)

    Walter, S. F.

    The purpose of this research is to examine methods by which the quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize the sampling locations of the flow, and how sensitive the results are to the number of sample locations. The main parameters indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on the total pressure distortion, which describes the amount of non-uniformity in the flow as it enters the engine. All engines must tolerate some level of distortion, but too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data are discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern, yet there is no guidance on how best to manipulate discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40-grid-point subsets and interpolating to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods in an attempt to establish the best method for interpolating small sets of data into an accurate, continuous contour map. The interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space. Spline interpolation methods should be used, as they result in the most accurate, precise, and visually correct predictions when compared with the results achieved from the full data sets. Researchers were also interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and the experimental results of an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes, and a probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations brought the results for the parameters of interest within 10% of the exact solution for almost all cases. For the two-dimensional inlet, the results were not as clear: 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40.
The number of points falling within a 1% tolerance band of the exact solution was counted as good points. The results were normalized for each data set, and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work; however, they can also be used by directly comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes. There is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
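
    The Monte Carlo step above amounts to repeatedly drawing probe subsets, interpolating back to the full set, and recording the error of the resulting distortion index. A sketch with synthetic probe locations and a simple (max - min)/mean stand-in for the distortion index, which is not the study's actual index:

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(4)
        pts = rng.uniform(-1, 1, size=(320, 2))
        pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]      # keep in-duct pts
        p = 1.0 - 0.2 * np.hypot(pts[:, 0], pts[:, 1]) ** 2   # synthetic recovery

        def distortion_index(vals):
            return (vals.max() - vals.min()) / vals.mean()    # simple DI stand-in

        exact = distortion_index(p)
        errs = []
        for _ in range(200):                                  # Monte Carlo draws
            sub = rng.choice(len(p), size=40, replace=False)
            interp = griddata(pts[sub], p[sub], pts, method="linear")
            interp = np.where(np.isnan(interp), p[sub].mean(), interp)  # hull gaps
            errs.append(abs(distortion_index(interp) - exact) / exact)
        print(np.percentile(errs, [50, 90]))   # median and 90th-pct rel. error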

  4. SU-E-J-126: An Online Replanning Method for FFF Beams Without Couch Shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, E; Ates, O; Li, X

    2015-06-15

    Purpose: In situations where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-Linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening filter free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here we propose a new 2-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline step and an online step. The offline step is to create a series of pre-shifted plans (PSPs) obtained by a so-called "warm start" optimization (starting the optimization from the original plan, rather than from scratch) at a series of isocenter shifts of fixed distance (e.g., 2 cm, at (x, y, z) = (2,0,0); (2,2,0); (0,2,0); ...; (-2,0,0)). The PSPs all have the same number of segments with very similar shapes, since the warm-start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and essentially instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. We tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The whole process takes the same time as the previously reported SAM process (5-10 min). Conclusion: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation, requiring no additional time beyond the SAM process. This research was supported by Elekta Inc. (Crawley, UK).
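
    A minimal sketch of the online interpolation step, with made-up leaf positions and monitor units on a one-axis shift grid; the actual system interpolates over a 3-D grid of isocenter shifts and many leaves per segment.

        import numpy as np

        shift_grid = np.array([-2.0, 0.0, 2.0])     # cm, one axis for brevity
        # mlc[k] holds leaf positions of the PSP optimized at shift_grid[k];
        # all PSPs share the same segment count, so arrays align leaf-by-leaf.
        mlc = np.array([[1.0, 2.0, 3.0],
                        [1.4, 2.5, 3.3],
                        [1.9, 3.1, 3.8]])
        mu = np.array([100.0, 104.0, 109.0])        # monitor units per PSP

        def online_plan(shift):
            """Linearly interpolate leaves and MU between the two closest PSPs."""
            k = np.searchsorted(shift_grid, shift) - 1
            k = np.clip(k, 0, len(shift_grid) - 2)
            w = (shift - shift_grid[k]) / (shift_grid[k + 1] - shift_grid[k])
            leaves = (1 - w) * mlc[k] + w * mlc[k + 1]
            return leaves, (1 - w) * mu[k] + w * mu[k + 1]

        print(online_plan(0.7))   # plan for a 0.7 cm shift, no reoptimization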

  5. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
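
    The same fitting idea, minimizing the sum of squared residuals with BFGS, in a few lines of modern code; this is a generic illustration of the technique, not a port of MULTIVAR, and the model and data are invented.

        import numpy as np
        from scipy.optimize import minimize

        # Fit y = a * exp(b * x) by minimizing the sum of squared residuals.
        rng = np.random.default_rng(5)
        x = np.linspace(0, 2, 40)
        y = 2.0 * np.exp(1.3 * x) + rng.normal(0, 0.1, x.size)

        def sse(params):
            a, b = params
            return np.sum((y - a * np.exp(b * x)) ** 2)

        result = minimize(sse, x0=[1.0, 1.0], method="BFGS")
        print(result.x)   # should be close to [2.0, 1.3]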

  6. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods yield artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
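
    For reference, mutual information computed from the joint histogram of two images; it is exactly this histogram that the PV, bilinear, and related interpolation schemes fill when samples fall off the pixel grid. The images here are synthetic.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """MI from the joint histogram of two equally sized images."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0                       # avoid log(0) terms
            return np.sum(pxy[nz] *
                          np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

        rng = np.random.default_rng(6)
        a = rng.uniform(0, 1, (128, 128))
        print(mutual_information(a, a))                           # high: identical
        print(mutual_information(a, rng.uniform(0, 1, a.shape)))  # near zero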

  7. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results when compared with two existing techniques: nearest neighbour and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
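
    A simplified sketch of the smoothing-based SNR idea: split a noisy scan line into a Savitzky-Golay smooth estimate and a residual, and take the variance ratio as the SNR. The paper's actual estimator interpolates the noise-free autocorrelation peak; this stand-in keeps only the smoothing step, and the signal is synthetic.

        import numpy as np
        from scipy.signal import savgol_filter

        def estimate_snr(line, window=31, poly=3):
            """SNR from the smooth/residual split of a scan line (simplified)."""
            smooth = savgol_filter(line, window, poly)
            noise = line - smooth
            return 10 * np.log10(np.var(smooth) / np.var(noise))

        rng = np.random.default_rng(7)
        x = np.linspace(0, 4 * np.pi, 1000)
        line = np.sin(x) + rng.normal(0, 0.1, x.size)
        print(estimate_snr(line), "dB")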

  8. Interpolating precipitation and its relation to runoff and non-point source pollution.

    PubMed

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics; however, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen polygons method, the traditional inverse distance method, and a modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method is quite close to the actual precipitation. When rainfall is heavy at locations with high elevation, the rainfall changes with the elevation; in this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield the rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and the predicted pollutant loading of suspended solids (SS) is strong. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
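
    One way to fold elevation into inverse distance weighting is to use an effective distance that mixes horizontal separation with a scaled elevation difference; the exact weighting in the paper may differ, so the elev_scale parameter below is an assumption for illustration.

        import numpy as np

        def modified_idw(xy_obs, z_obs, rain_obs, xy_tgt, z_tgt,
                         power=2.0, elev_scale=100.0):
            """IDW whose effective distance combines horizontal separation with
            elevation difference (elev_scale horizontal meters count as one
            vertical meter); illustrative, not the paper's exact formula."""
            d_h = np.linalg.norm(xy_obs - xy_tgt, axis=1)
            d = np.sqrt(d_h ** 2 + (elev_scale * np.abs(z_obs - z_tgt)) ** 2)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            return np.sum(w * rain_obs) / np.sum(w)

        xy = np.array([[0., 0.], [5000., 0.], [0., 5000.]])   # station coords, m
        z = np.array([200., 1200., 400.])                     # station elevations
        rain = np.array([12., 40., 16.])                      # observed rain, mm
        print(modified_idw(xy, z, rain, np.array([2000., 2000.]), 1000.))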

  9. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using cubic spline interpolation. To accurately obtain the interpolation nodes, a smoothing fit and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The results confirm that the method can properly subtract the background.
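
    The core of the method in a sketch: fit a cubic spline through nodes in peak-free regions and subtract it as the background. Here the spectrum is synthetic and the nodes are chosen by hand, whereas the paper derives them from smoothing plus discriminant tests.

        import numpy as np
        from scipy.interpolate import CubicSpline

        rng = np.random.default_rng(8)
        e = np.linspace(0, 20, 2000)                        # energy axis, keV
        background = 50 * np.exp(-e / 8)
        peaks = (300 * np.exp(-0.5 * ((e - 6.4) / 0.1) ** 2)
                 + 200 * np.exp(-0.5 * ((e - 7.5) / 0.1) ** 2))
        spectrum = background + peaks + rng.normal(0, 2, e.size)

        # Nodes in peak-free regions; spline through them estimates the background.
        nodes = np.array([0.5, 2, 4, 5.5, 8.5, 10, 13, 16, 19])
        node_vals = [spectrum[np.argmin(np.abs(e - n))] for n in nodes]
        est_bg = CubicSpline(nodes, node_vals)(e)
        net = spectrum - est_bg
        print(net.max(), np.median(net))    # peaks survive, baseline near zero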

  10. A multi-material topology optimization approach for wrinkle-free design of cable-suspended membrane structures

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-06-01

    In order to eliminate stress-related wrinkles in cable-suspended membrane structures and to provide simple and reliable deployment, this study presents a multi-material topology optimization model and an effective solution procedure for generating optimal connected layouts for membranes and cables. On the basis of the principal stress criterion of membrane wrinkling behavior and the density-based interpolation of multi-phase materials, the optimization objective is to maximize the total structural stiffness while satisfying principal stress constraints and specified material volume requirements. By adopting the cosine-type relaxation scheme to avoid the stress singularity phenomenon, the optimization model is successfully solved through a standard gradient-based algorithm. Four-corner tensioned membrane structures with different loading cases were investigated to demonstrate the effectiveness of the proposed method in automatically finding the optimal design composed of curved boundary cables and wrinkle-free membranes.

  11. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the "LEOS Interpolation Package". LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a "LIP interpolation object" is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as "partial setup" options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and the new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
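
    The simplest of the methods listed, piecewise bilinear interpolation on a rectangular (x, y) mesh, shown in Python/SciPy for illustration only; LIP itself is an ANSI C library with its own object-based API.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        x = np.linspace(0.0, 1.0, 11)
        y = np.linspace(0.0, 2.0, 21)
        X, Y = np.meshgrid(x, y, indexing="ij")
        f = np.sin(np.pi * X) * np.exp(-Y)                 # tabulated data

        # Piecewise-bilinear interpolant on the rectangular mesh.
        interp = RegularGridInterpolator((x, y), f, method="linear")
        print(interp([[0.37, 1.24]]))                      # one query point
        print(np.sin(np.pi * 0.37) * np.exp(-1.24))        # exact, for comparison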

  12. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the "LEOS Interpolation Package". LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a "LIP interpolation object" is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as "partial setup" options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and the new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  13. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for the interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems, and they implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
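
    A small sketch of the taper-then-iterate idea. SciPy ships MINRES rather than SYMMLQ; both are Krylov solvers for symmetric, possibly indefinite systems, so MINRES is used here as a stand-in. The compactly supported taper factor below is a simple illustration, not necessarily one of the validated tapers from the literature, and the data are synthetic.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import minres

        rng = np.random.default_rng(9)
        pts = rng.uniform(0, 100, size=(500, 2))
        vals = np.sin(pts[:, 0] / 15) + np.cos(pts[:, 1] / 20)

        def tapered_cov(p, range_, taper):
            """Exponential covariance times a compactly supported factor that
            is exactly zero beyond `taper`, yielding a sparse matrix."""
            d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
            c = np.exp(-d / range_) * np.clip(1 - d / taper, 0, None) ** 2
            return sparse.csr_matrix(c)

        C = tapered_cov(pts, range_=30.0, taper=25.0)
        n = len(vals)
        ones_col = sparse.coo_matrix(np.ones((n, 1)))
        # Ordinary kriging system: symmetric but indefinite because of the
        # unbiasedness row/column, hence a MINRES/SYMMLQ-type solver.
        A = sparse.bmat([[C, ones_col], [ones_col.T, None]]).tocsr()
        query = np.array([50.0, 50.0])
        d0 = np.linalg.norm(pts - query, axis=1)
        c0 = np.exp(-d0 / 30.0) * np.clip(1 - d0 / 25.0, 0, None) ** 2
        b = np.concatenate([c0, [1.0]])
        w = minres(A, b)[0]
        print(w[:n] @ vals)   # kriged estimate at the query point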

  14. 3-D Characterization of Seismic Properties at the Smart Weapons Test Range, YPG

    DTIC Science & Technology

    2001-10-01

    confidence limits around each interpolated value. Ground truth was accomplished through cross-hole seismic measurements and borehole logs. Surface wave... seismic method, as well as estimating the optimal orientation and spacing of the seismic array. A variety of sources and receivers was evaluated...location within the array is partially related to at least two seismic lines. Either through good fortune or foresight by the designers of the SWTR site

  15. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces...Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012 *Hardin E, Mitas L, Mitasova H., Simulation of Wind-Blown Sand for Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012 *Russ, E., Mitasova, H., Time series and space-time cube analyses on

  16. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest neighbor searching techniques for implementations of the fast matrix-vector products.

  17. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

    The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., it is non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001), and this may eventually result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational cost, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.

  18. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as its solution. This approach leads to the solution of a nonlinear system of equations, and it is shown that Newton's method is an exceptionally attractive and efficient method for solving it. Examples of shape-preserving interpolants, as well as convergence results obtained using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  19. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. Firstly, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames with the most similar patches. Since the patch matching is based on pre-intermediate frames that embed the motion-path constraint, the resulting frame interpolation looks natural and unforced. We test different types of old film sequences and compare with other methods; the results show that our method achieves the desired performance without hole or ghost effects.

  20. Systematic interpolation method predicts protein chromatographic elution with salt gradients, pH gradients and combined salt/pH gradients.

    PubMed

    Creasy, Arch; Barker, Gregory; Carta, Giorgio

    2017-03-01

    A methodology is presented to predict protein elution behavior from an ion exchange column using either individual or combined pH and salt gradients, based on high-throughput batch isotherm data. The buffer compositions are first optimized to generate linear pH gradients from pH 5.5 to 7 with defined concentrations of sodium chloride. Next, high-throughput batch isotherm data are collected for a monoclonal antibody on the cation exchange resin POROS XS over a range of protein concentrations, salt concentrations, and solution pH. Finally, a previously developed empirical interpolation (EI) method is extended to describe protein binding as a function of the protein and salt concentrations and solution pH without using an explicit isotherm model. The interpolated isotherm data are then used with a lumped kinetic model to predict the protein elution behavior. Experimental results obtained for laboratory-scale columns show excellent agreement with the predicted elution curves for both individual and combined pH and salt gradients at protein loads up to 45 mg/mL of column. Numerical studies show that the model predictions are robust as long as the isotherm data cover the range of mobile phase compositions where the protein actually elutes from the column. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary kriging interpolation, simple kriging interpolation and universal kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth caused over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease of farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
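
    The cross-validation step used to rank the seven methods follows the usual leave-one-out pattern: withhold each well in turn, predict it from the rest, and summarize the errors. A sketch with IDW as a stand-in interpolator and synthetic well data; the study itself ran this comparison inside ArcGIS.

        import numpy as np

        def idw(xy_obs, v_obs, xy_tgt, power=2.0):
            """Plain inverse-distance-weighted prediction at one target point."""
            d = np.linalg.norm(xy_obs - xy_tgt, axis=1)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            return np.sum(w * v_obs) / np.sum(w)

        rng = np.random.default_rng(10)
        xy = rng.uniform(0, 10, size=(30, 2))                  # 30 wells
        level = 50 - 1.5 * xy[:, 0] + rng.normal(0, 0.5, 30)   # synthetic heads

        errors = []
        for i in range(len(level)):                            # leave-one-out
            mask = np.arange(len(level)) != i
            errors.append(idw(xy[mask], level[mask], xy[i]) - level[i])
        errors = np.array(errors)
        print("MAE:", np.mean(np.abs(errors)),
              " RMSE:", np.sqrt(np.mean(errors ** 2)))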

  2. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas, such as occlusions and deformed structures, where no reliable motion vectors are available. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  3. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
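
    The shape-driven directional interpolation relies on the structure tensor, whose per-pixel orientation indicates the direction of local structures along which intermediate views can be interpolated. A minimal orientation estimate is sketched below; the smoothing scale is an assumed parameter, and this is only the orientation step, not the authors' full algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor_orientation(img, sigma=2.0):
        """Per-pixel orientation (radians) of the dominant local structure."""
        gx = sobel(img, axis=1)                  # horizontal gradient
        gy = sobel(img, axis=0)                  # vertical gradient
        Jxx = gaussian_filter(gx * gx, sigma)    # smoothed tensor components
        Jxy = gaussian_filter(gx * gy, sigma)
        Jyy = gaussian_filter(gy * gy, sigma)
        return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
    ```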

  4. Efficient model reduction of parametrized systems by matrix discrete empirical interpolation

    NASA Astrophysics Data System (ADS)

    Negri, Federico; Manzoni, Andrea; Amsallem, David

    2015-12-01

In this work, we apply a Matrix version of the Discrete Empirical Interpolation Method (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
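
    The discrete variant mentioned here picks interpolation indices greedily from a reduced basis. Below is a minimal numpy rendering of the standard DEIM index-selection loop, assuming the columns of U form a POD basis; it is an illustrative sketch, not the paper's MDEIM implementation.

    ```python
    import numpy as np

    def deim_indices(U):
        """Greedy DEIM interpolation-point selection for a basis U (n x m)."""
        n, m = U.shape
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for l in range(1, m):
            # coefficients that reproduce the new mode at the chosen points
            c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
            r = U[:, l] - U[:, :l] @ c        # residual of the new mode
            p.append(int(np.argmax(np.abs(r))))
        return np.array(p)

    # usage: indices for a random orthonormal basis (placeholder data)
    Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(100, 8)))
    print(deim_indices(Q))
    ```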

  5. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of these have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method achieved the largest signal-to-background ratio (SBR), exceeding polynomial fitting, Lorentz fitting and the model-free method, and all of these background correction methods achieve larger SBR values than that obtained before background correction (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those obtained before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
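
    A minimal version of a spline-interpolation baseline estimator anchors a smooth curve at the spectrum's local minima and subtracts it. The snippet below sketches this idea under assumed parameters (the minima spacing is a tuning choice); it is illustrative, not the authors' algorithm.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from scipy.interpolate import CubicSpline

    def spline_background(wavelength, intensity, min_spacing=25):
        """Estimate the smooth continuous background through local minima and
        subtract it (assumes an ascending wavelength axis with several minima)."""
        minima, _ = find_peaks(-intensity, distance=min_spacing)
        knots = np.unique(np.r_[0, minima, len(intensity) - 1])  # anchor ends
        background = CubicSpline(wavelength[knots], intensity[knots])(wavelength)
        return intensity - background, background
    ```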

  6. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, the Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost) in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
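
    The gridding step described above, IDW with the local support of the five closest neighbours, can be sketched with a k-d tree. This is an illustrative stand-in, not the authors' code; the distance power and the coincidence guard are assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def idw_knn(xy_obs, z_obs, xy_query, k=5, power=2):
        """IDW interpolation with the local support of the k closest neighbours."""
        dist, idx = cKDTree(xy_obs).query(xy_query, k=k)
        dist = np.maximum(dist, 1e-12)          # guard exact coincidences
        w = 1.0 / dist**power
        return np.sum(w * z_obs[idx], axis=1) / np.sum(w, axis=1)
    ```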

  7. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

Selected snippet contents: Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization Toward...; Interpolation by Box Spline Surfaces (Charles K. Chui, Harvey Diamond, Louise A. Raphael); Knot Selection for Least Squares... (West Virginia University, Morgantown, West Virginia; Louise Raphael, National Science Foundation, Washington, DC).

  8. Multi-Objective Optimization of Mixed Variable, Stochastic Systems Using Single-Objective Formulations

    DTIC Science & Technology

    2008-03-01

...investigated, as well as the methodology used. Chapter IV presents the data collection and analysis procedures, and the resulting analysis and... interpolate the data, although a non-interpolating model is possible. For this research, Design and Analysis of Computer Experiments (DACE) is used... followed by the analysis. 4.1. Testing Approach: The initial SMOMADS algorithm used for this research was acquired directly from Walston [70]. The...

  9. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes of magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, using 27 years of monthly average precipitation data obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides more accuracy than the non-transformed hierarchical Bayesian method.
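
    The role of the Box-Cox transform here is to make the skewed rainfall amounts approximately Gaussian before interpolation and to back-transform afterwards. A minimal sketch with scipy follows; the rainfall values are placeholders.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox

    # Hypothetical, strictly positive monthly rainfall totals (mm)
    rain = np.array([1.2, 3.4, 0.5, 12.0, 45.3, 7.7, 2.1])

    transformed, lam = boxcox(rain)      # lambda chosen by maximum likelihood
    # ... fit and interpolate in the (approximately Gaussian) transformed space ...
    recovered = inv_boxcox(transformed, lam)   # back-transform to rainfall units
    assert np.allclose(recovered, rain)
    ```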

  10. Data-driven in computational plasticity

    NASA Astrophysics Data System (ADS)

    Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.

    2018-05-01

Computational mechanics is taking on an enormous importance in industry nowadays. On the one hand, numerical simulations can be seen as a tool that allows industry to perform fewer experiments, reducing costs. On the other hand, the physical processes that are intended to be simulated are becoming more complex, requiring new constitutive relationships to capture such behaviors. Therefore, when a new material is to be classified, an open question remains: which constitutive equation should be calibrated. In the present work, model order reduction techniques are exploited to identify the plastic behavior of a material, opening an alternative route with respect to traditional calibration methods. Indeed, the main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Therefore, once the experimental results, as well as the parameterization of the plastic yield function, are provided, finding the optimal plastic yield function can be seen either as a traditional optimization problem or as an interpolation problem. It is important to highlight that the dimensionality of the problem is equal to the number of dimensions related to the parameterization of the yield function. Thus, the use of sparse interpolation techniques seems almost compulsory.

  11. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562

  12. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
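
    The analytic point location made possible by tetrahedral decomposition amounts to solving one 3x3 linear system for barycentric coordinates, which then double as the linear interpolation weights. A sketch for a single, non-degenerate tetrahedron (illustrative, not the paper's tracer):

    ```python
    import numpy as np

    def tetra_interpolate(verts, values, p):
        """Analytic point location plus linear interpolation in a tetrahedron.

        verts: (4, 3) vertex coordinates; values: (4,) or (4, d) nodal data;
        p: (3,) query point. Returns None if p lies outside the cell."""
        T = (verts[1:] - verts[0]).T                # 3x3 edge matrix
        lam = np.linalg.solve(T, p - verts[0])      # barycentric coords 1..3
        bary = np.concatenate(([1.0 - lam.sum()], lam))
        if np.any(bary < -1e-12):
            return None                             # outside this tetrahedron
        return bary @ values
    ```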

  13. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
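
    Among the regularizers compared, TSVD is the simplest to sketch: the interpolator is built from a pseudoinverse in which small singular values are discarded. A minimal numpy illustration; the relative truncation tolerance is an assumed tuning parameter.

    ```python
    import numpy as np

    def tsvd_pinv(A, rel_tol=1e-3):
        """Truncated-SVD pseudoinverse: drop singular values < rel_tol * s_max."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rel_tol * s[0]
        return (Vt[keep].T / s[keep]) @ U[:, keep].T
    ```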

  14. Investigation of digital timing resolution and further improvement by using constant fraction signal time marker slope for fast scintillator detectors

    NASA Astrophysics Data System (ADS)

    Singh, Kundan; Siwal, Davinder

    2018-04-01

A digital timing algorithm is explored for fast scintillator detectors, viz. LaBr3, BaF2, and BC501A. Signals were collected with CAEN 250 mega samples per second (MSPS) and 500 MSPS digitizers. The zero-crossing time markers (TM) were obtained with a standard digital constant fraction timing (DCF) method. Accurate timing information is obtained using cubic spline interpolation of the DCF transient-region sample points. To get the best time-of-flight (TOF) resolution, an optimization of the DCF parameters (delay and constant fraction) is performed for each pair of detectors: (BaF2-LaBr3), (BaF2-BC501A), and (LaBr3-BC501A). In addition, the slope of the interpolated DCF signal is extracted at the TM position. This information gives new insight into the broadening in TOF obtained for a given detector pair. For a pair of signals, small relative slope and small interpolation deviations at the TM lead to minimum time broadening, whereas the tailing in the TOF spectra is dictated by the interplay between the interpolation error and slope variations. The best TOF resolution, achieved at the optimum DCF parameters, can be further improved by using the slope parameter: guided by the relative slope, event selection can be imposed, which reduces the TOF broadening. While the method sets a trade-off between timing response and coincidence efficiency, it provides an improvement in TOF. With the proposed method, the improved TOF resolutions (FWHM) for the aforementioned detector pairs are 25% (0.69 ns), 40% (0.74 ns), and 53% (0.6 ns), respectively, with 250 MSPS, and 12% (0.37 ns), 33% (0.72 ns), and 35% (0.69 ns), respectively, with 500 MSPS digitizers. For the same detector pairs, the event survival probabilities are 57%, 58%, and 51%, respectively, with 250 MSPS, and become 63%, 57%, and 68% with 500 MSPS digitizers.
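
    The DCF-plus-spline zero-crossing step can be sketched compactly. Below is an illustrative implementation, not the authors' code: the fraction and delay are assumed tuning parameters, and sign conventions for the bipolar signal vary between setups.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def dcf_time_marker(t, v, frac=0.3, delay=4):
        """Zero-crossing time marker of a digital constant-fraction signal.

        The bipolar trace is an attenuated copy of the pulse minus a delayed
        copy; the crossing is located on a cubic-spline interpolant of the
        transient-region samples."""
        n = len(v) - delay
        bipolar = frac * v[delay:] - v[:n]
        cs = CubicSpline(t[:n], bipolar)
        roots = cs.roots(extrapolate=False)
        # pick the crossing nearest the pulse maximum (assumes one exists)
        return roots[np.argmin(np.abs(roots - t[np.argmax(v)]))]
    ```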

  15. New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda

    2014-05-01

The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of the measurements required adaptations of the software tool (data format, parameter determination, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in coastal areas. These currents can be constructed from the bathymetry or extracted from a HF radar located in the Balearic Sea.
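
    The OI analysis and its associated error field can be illustrated in a few lines. The sketch below is a minimal 1-D OI with an assumed Gaussian background covariance and direct point observations; it is a toy, not DIVA or the AVISO processing.

    ```python
    import numpy as np

    def oi_analysis(x_grid, x_obs, y_obs, sigma_b=1.0, L=1.0, sigma_r=0.1):
        """Minimal 1-D optimal interpolation with Gaussian background covariance.

        Analysis increment: K (y - H x_b) with K = B H^T (H B H^T + R)^-1;
        a zero background x_b and direct point observations are assumed."""
        def cov(a, b):
            return sigma_b**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L)**2)

        BHt = cov(x_grid, x_obs)                      # covariance grid <-> obs
        S = cov(x_obs, x_obs) + sigma_r**2 * np.eye(len(x_obs))
        K = np.linalg.solve(S, BHt.T).T               # gain matrix
        x_a = K @ y_obs                               # analysed field
        var_a = sigma_b**2 - np.sum(K * BHt, axis=1)  # expected error variance
        return x_a, var_a

    x_a, var_a = oi_analysis(np.linspace(0.0, 10.0, 101),
                             np.array([2.0, 5.0, 8.5]),
                             np.array([0.3, -0.1, 0.4]))
    ```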

  16. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  17. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than by the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images of INEQR is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits of the NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation-based experimental results involving different classical (i.e., conventional, non-quantum) images and scaling ratios are obtained with MATLAB 2014b on a classical computer, demonstrating that the proposed interpolation method achieves higher performance in terms of resolution compared to nearest neighbor and bilinear interpolation.

  18. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so how to improve the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called the gradient segmentation interpolation (GSI) approach. To take full advantage of the pixels of the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by a simulation experiment. Finally, an experiment is set up to verify the GSI and SSW algorithms. The results indicate that the GSI and SSW algorithms improve locating accuracy over that calculated by a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
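
    For contrast with the proposed GSI and SSW methods, the traditional gray-centroid baseline with interpolation-based upsampling fits in a few lines. This sketch is illustrative only: the upsampling factor is an assumption, and the small sub-pixel grid offset introduced by scipy's zoom is ignored.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def gray_centroid(spot, upsample=4):
        """Gray-centroid beacon location after cubic-interpolation upsampling
        (approximate; qualitative illustration of subpixel centroiding)."""
        fine = zoom(spot.astype(float), upsample, order=3)
        total = fine.sum()
        yy, xx = np.indices(fine.shape)
        return ((yy * fine).sum() / total / upsample,
                (xx * fine).sum() / total / upsample)
    ```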

  19. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
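
    Error-model-based compensation with a B-spline, the best performer reported here, reduces to fitting a spline through a measured error map and subtracting its prediction at each commanded position. The calibration numbers below are hypothetical, and scipy's make_interp_spline stands in for the controller's spline machinery.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    # Hypothetical calibration: positioning error (um) measured at sparse
    # stage positions (mm); a cubic B-spline predicts the error in between.
    pos_mm = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
    err_um = np.array([0.0, 1.2, 2.5, 1.8, -0.4, -1.1, 0.3])

    spline = make_interp_spline(pos_mm, err_um, k=3)

    def compensated_target(target_mm):
        # command the stage to target minus the predicted error (um -> mm)
        return target_mm - 1e-3 * spline(target_mm)
    ```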

  20. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. Errors in CFD can be estimated via Richardson extrapolation, which is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
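
    After the three grid solutions have been interpolated to common points, Richardson extrapolation itself is two formulas. A sketch assuming a constant refinement ratio r (f1 finest, f3 coarsest) and monotone, asymptotic convergence:

    ```python
    import numpy as np

    def richardson(f1, f2, f3, r=2.0):
        """Observed order p and extrapolated value from three grid solutions.

        f1 is the finest-grid value, f3 the coarsest; r is the constant
        grid-refinement ratio."""
        p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order
        f_est = f1 + (f1 - f2) / (r**p - 1.0)           # extrapolated value
        return p, f_est

    # usage: a flow quantity from three grids (placeholder numbers)
    print(richardson(0.5012, 0.5050, 0.5201))
    ```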

  1. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm optimization method in the optimization of the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
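
    A toy version of the enhancement described, PSO with an isotropic Gaussian mutation operator, is sketched below. All hyperparameters are assumptions, and the textbook component-wise velocity update used here is not the rotation-invariant variant of the paper.

    ```python
    import numpy as np

    def pso_gaussian(f, lb, ub, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                     p_mut=0.05, sigma=0.1, seed=0):
        """Basic PSO with isotropic Gaussian mutation of particle positions."""
        rng = np.random.default_rng(seed)
        d = len(lb)
        x = rng.uniform(lb, ub, (n, d))
        v = np.zeros((n, d))
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pval)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lb, ub)
            mut = rng.random(n) < p_mut          # mutate a random subset
            x[mut] += rng.normal(0.0, sigma * (ub - lb), (mut.sum(), d))
            x = np.clip(x, lb, ub)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[np.argmin(pval)]
        return g, pval.min()

    # usage: minimize a 4-D sphere function on [-5, 5]^4
    best_x, best_f = pso_gaussian(lambda z: float(np.sum(z**2)),
                                  np.full(4, -5.0), np.full(4, 5.0))
    ```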

  2. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m² section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.

  3. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
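
    The bilinear head interpolation proposed here is standard; a sketch for a rectilinear grid of cell-center coordinates follows, assuming the query point lies inside the grid.

    ```python
    import numpy as np

    def bilinear_head(x, y, xg, yg, head):
        """Bilinear interpolation of heads at large-scale cell centers (xg, yg)
        to a point (x, y) on the embedded-model perimeter."""
        i = np.clip(np.searchsorted(xg, x, side='right') - 1, 0, len(xg) - 2)
        j = np.clip(np.searchsorted(yg, y, side='right') - 1, 0, len(yg) - 2)
        tx = (x - xg[i]) / (xg[i + 1] - xg[i])
        ty = (y - yg[j]) / (yg[j + 1] - yg[j])
        return ((1 - tx) * (1 - ty) * head[j, i] + tx * (1 - ty) * head[j, i + 1]
                + (1 - tx) * ty * head[j + 1, i] + tx * ty * head[j + 1, i + 1])
    ```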

  4. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

Temperature data with high spatial resolution are essential for an appropriate and qualitative analysis of local characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values over the whole of Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, with candidate covariates including 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.

  5. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighted and spline, in regularized and tension variants) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely owing to the relatively densely sampled point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall, both spatially and quantitatively, the correlation between observed and interpolated estimates was high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low-rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity of the observed rainfall and its spatial extent, and an appropriate search radius defining the neighboring points. The results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into subsequent hydrometeorological analyses. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.

  6. Cryo-EM image alignment based on nonuniform fast Fourier transform.

    PubMed

    Yang, Zhengfan; Penczek, Pawel A

    2008-08-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.

  7. Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform

    PubMed Central

    Yang, Zhengfan; Penczek, Pawel A.

    2008-01-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351

  8. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
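
    The variational implicit functions idea, fitting a smooth implicit function that vanishes on the contour points, can be sketched with scipy's RBF interpolator. Below is an illustrative toy (points on a unit sphere, thin-plate-spline kernel); the inward offset-point construction and all parameters are assumptions, not the paper's pipeline.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Toy contour data: points on a unit sphere plus inward offset points.
    rng = np.random.default_rng(1)
    normals = rng.normal(size=(200, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    contour_pts = normals                     # f = 0 on the surface
    inside_pts = 0.95 * normals               # f = 1 just inside

    f = RBFInterpolator(np.vstack([contour_pts, inside_pts]),
                        np.concatenate([np.zeros(200), np.ones(200)]),
                        kernel='thin_plate_spline')
    # Sample f on a grid and extract its zero level set (e.g., by marching
    # cubes) to obtain the reconstructed closed surface.
    print(f(np.array([[0.0, 0.0, 0.99], [0.0, 0.0, 1.5]])))
    ```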

  9. Autocorrelation Analysis Combined with a Wavelet Transform Method to Detect and Remove Cosmic Rays in a Single Raman Spectrum.

    PubMed

    Maury, Augusto; Revilla, Reynier I

    2015-08-01

Cosmic rays (CRs) occasionally affect charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth into the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose here an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CR features as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify CR spikes. Identified data points are then replaced by interpolated values using the weighted-average interpolation technique. This approach only modifies the data in a close vicinity of the CRs. Additionally, robust wavelet transform parameters (a desirable property for automation) are proposed after optimizing them through application of the method to a large number of spectra. However, this algorithm, as well as all single-spectrum analysis procedures, is limited to cases in which the CRs have much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.

  10. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

As storage data density in hard-disk drives (HDDs) increases for constant or miniaturizing sizes, precision positioning of HDD heads becomes a more relevant issue to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirement of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this matter, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and resonance phenomena avoidance. The requirements are achieved by applying formulations to maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results that confirm the feasibility of this approach.

  11. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.

  12. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  13. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-depending background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the delta Kronecker property, which simplifies the imposition of the natural and essential boundary conditions. One of the scopes of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.

  14. Electroencephalography (EEG) forward modeling via H(div) finite element sources with focal interpolation.

    PubMed

    Pursiainen, S; Vorwerk, J; Wolters, C H

    2016-12-21

The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. In an EEG evaluation, the placement of source currents within the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence-conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position-based optimization (PBO) and the mean position/orientation (MPO) method. The results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approaches, which utilize monopolar loads instead of dipolar currents.

  15. Conical intersection seams in polyenes derived from their chemical composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nenov, Artur; Vivie-Riedle, Regina de

    2012-08-21

The knowledge of conical intersection seams is important to predict and explain the outcome of ultrafast reactions in photochemistry and photobiology. They define the energetically low-lying reachable regions that allow for ultrafast non-radiative transitions. In complex molecules it is not straightforward to locate them. We present a systematic approach to predict conical intersection seams in multifunctionalized polyenes and their sensitivity to substituent effects. Included are seams that facilitate the photoreaction of interest as well as seams that open competing loss channels. The method is based on the extended two-electron two-orbital method [A. Nenov and R. de Vivie-Riedle, J. Chem. Phys. 135, 034304 (2011)]. It allows the low-lying regions for non-radiative transitions to be extracted, which are then divided into small linear segments. Rules of thumb are introduced to find the support points for these segments, which are then used in a linear interpolation scheme for a first estimation of the intersection seams. Quantum chemical optimization of the linearly interpolated structures yields the final energetic position. We demonstrate our method for the example of the electrocyclic isomerization of trifluoromethyl-pyrrolylfulgide.

  16. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze their architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as the multiquadric radial basis function and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  17. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our developing CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. According to the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolating cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, and the way the interpolator is integrated into our developing CNC system is described. The servo-controlling structure of the CNC system is also introduced. An illustrative example indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
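
    The core of such interpolators is advancing the curve parameter so the tool-tip moves at the commanded feedrate each cycle. The Python sketch below shows the standard first-order Taylor update, with a cubic Bezier standing in for a NURBS path and a numerical derivative; the paper's two-curve (tool-tip plus tool-axis) coupling and any higher-order Taylor terms are omitted, so this is an assumption-laden illustration rather than the authors' scheme.

```python
import numpy as np

def curve(u, P):
    # cubic Bezier point, a stand-in for the NURBS tool-tip path C(u)
    b = np.array([(1 - u) ** 3, 3 * u * (1 - u) ** 2, 3 * u ** 2 * (1 - u), u ** 3])
    return b @ P

def dcurve(u, P, h=1e-6):
    # numerical derivative dC/du (an analytic NURBS derivative would be used in practice)
    return (curve(u + h, P) - curve(u - h, P)) / (2.0 * h)

def interpolate_path(P, feedrate=50.0, Ts=1e-3):
    """First-order Taylor feedrate interpolation: u_{k+1} = u_k + F*Ts/|dC/du|,
    so the tool advances approximately F*Ts (mm) along the path per cycle."""
    u, pts = 0.0, []
    while u < 1.0:
        pts.append(curve(u, P))
        u += feedrate * Ts / np.linalg.norm(dcurve(u, P))
    return np.array(pts)

P = np.array([[0.0, 0.0], [30.0, 40.0], [70.0, 40.0], [100.0, 0.0]])  # control points (mm)
path = interpolate_path(P)  # tool-tip positions, one per interpolating cycle
```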

  18. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
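
    For intuition, here is a minimal Python sketch of trilinear interpolation at a fractional voxel coordinate: a weighted average of the 8 surrounding voxels. It assumes an interior query point and omits the bounds handling and full rectilinear-resampling loop a real 3DUS pipeline would need.

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Trilinear interpolation of a volume at fractional voxel coordinate (x, y, z)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)  # 2x2x2 neighborhood
    c = c[0] * (1 - dx) + c[1] * dx   # collapse the x dimension
    c = c[0] * (1 - dy) + c[1] * dy   # collapse the y dimension
    return c[0] * (1 - dz) + c[1] * dz  # collapse the z dimension

vol = np.random.rand(64, 64, 64)      # stand-in for a 3DUS volume
print(trilinear(vol, 10.25, 20.5, 30.75))
```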

  20. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze the model system, which depends on the moduli of the spatial weights. The negative-weight problem of DEM interpolation is investigated by building a maximum entropy model; by adding non-negativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm implemented in MATLAB. The method is compared with Yang Chizhong's interpolation method and with quadratic programming. The comparison shows that the magnitude and scaling of the maximum-entropy weights fit the spatial relations, and the accuracy is superior to the latter two methods.

  1. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires substantial calculation time, it is impractical to obtain the optimal solution by coupling a gradient-based optimization algorithm directly to the CFD analysis. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.

  2. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, respectively. Therefore, decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  3. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
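
    As background, Richardson's extrapolation estimates the grid-independent solution from solutions on systematically refined grids. A minimal Python sketch for three grids with a constant refinement ratio follows; the numeric values are illustrative, and for the unstructured grids discussed above the solutions must first be interpolated to common locations, which is exactly the scheme the paper seeks.

```python
import numpy as np

def richardson(f1, f2, f3, r):
    """Richardson extrapolation from three systematically refined grids.
    f1 is the finest-grid solution and r the constant refinement ratio.
    Returns the observed order of convergence p and the extrapolated value."""
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)
    f_exact = f1 + (f1 - f2) / (r ** p - 1.0)
    return p, f_exact

# e.g. a drag coefficient from fine/medium/coarse grids, refinement ratio 2
p, f_ext = richardson(f1=0.3521, f2=0.3539, f3=0.3608, r=2.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_ext:.4f}")
```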

  4. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    PubMed

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

    One important method to obtain the continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the data produced by the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but it also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
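
    The trend-plus-residual structure of such methods can be illustrated compactly. The Python sketch below uses synthetic data, a first-order trend surface, and a random forest standing in for the ensemble learner; the paper's actual trend calculation, covariate encoding, and ensemble configuration are not given here, so these choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: sample coordinates, soil potassium k, and environmental
# covariates (soil type, geology, land use, slope) encoded as numeric columns.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, (200, 2))
env = rng.uniform(0, 1, (200, 4))
k = 20 + 0.05 * coords[:, 0] + 5 * env[:, 0] + rng.normal(0, 1, 200)

# Step 1: fit a simple first-order trend surface to the measurements.
A = np.column_stack([np.ones(len(coords)), coords])
beta, *_ = np.linalg.lstsq(A, k, rcond=None)
residual = k - A @ beta

# Step 2: learn the residual from environmental information with an ensemble.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(env, residual)

# Step 3: prediction at new sites = trend + learned residual.
def predict(new_coords, new_env):
    trend = np.column_stack([np.ones(len(new_coords)), new_coords]) @ beta
    return trend + model.predict(new_env)
```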

  5. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing involves a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast, physically correct refocusing algorithm that addresses this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. To address its conventionally high complexity, we devised a fast line-scan method specifically for refocusing, whose 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results show superior refocusing quality and fast computation speed. In particular, the run time is comparable with conventional single-image blurring, which causes serious boundary artifacts.

  6. Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment

    NASA Astrophysics Data System (ADS)

    Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.

    2016-07-01

    The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.

  7. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV, the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function at as early a time as the single nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region where the single calm baryon displays no excited state contamination.

  8. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to establish the spatial association between two or more images and to obtain the transformation describing their spatial relationship. For medical image registration, the process is not absolute; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include geometric transformation, image combination, similarity measurement, iterative optimization, and interpolation. Conclusion: The contribution of this review is to sort related image registration research methods and provide a brief reference for researchers on image registration. PMID:28845149

  9. The Structure of Optimum Interpolation Functions.

    DTIC Science & Technology

    1983-02-01

    Daniel F. Merriam, ed., Plenum Press, 1970. 2. Hiroshi Akima, "Comments on 'Optimal Contour Mapping Using Universal Kriging' by Ricardo O. Olea," (with...Kriging," Mathematical Geology 14 (1982), 249-257. 27. Ricardo O. Olea, "Optimal Contour Mapping Using Universal Kriging," J. of Geophysical Res. 79

  10. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    NASA Astrophysics Data System (ADS)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences on weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between selected locations and calculate the mean absolute error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. We also compare these errors to those of manually designed weather station networks in West Africa, planned by the respective host-country's meteorological agency.

  11. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  12. Research on the optimization of air quality monitoring station layout based on spatial grid statistical analysis method.

    PubMed

    Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An

    2018-05-01

    In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter concentration (PM10) in the region, and the spatial distribution of PM10 concentration across the whole region was deduced through the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution variation across districts in Beijing was analyzed with a gridding model (1.5 km × 1.5 km cell resolution), and the 3-year spatial analysis of PM10 concentration data, including variation and spatial overlay, showed where the PM10 concentration frequently exceeded the standard. It is very important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.

  13. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. An Immersed Boundary method with divergence-free velocity interpolation and force spreading

    NASA Astrophysics Data System (ADS)

    Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.

    2017-10-01

    The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.

  15. Dynamic graphs, community detection, and Riemannian geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

    A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.

  16. Fast image interpolation via random forests.

    PubMed

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.

  17. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, IMRT quality assurance (QA) beams were generated by the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with a detector-to-detector distance larger than 1 mm were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps of the same resolution, the cubic spline and bicubic interpolations are almost equally the best interpolation methods, while the nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, the γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, the γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that the measured fluence maps be interpolated using the cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
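
    To make the comparison concrete, here is a minimal Python sketch that upsamples a coarse fluence map with a bicubic spline, the family of methods the study found most accurate; the scipy routine, grid spacings, and synthetic map values are assumptions for illustration, not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def upsample_fluence(coarse, spacing_mm, target_mm=1.0):
    """Resample a coarse fluence map onto a ~1 mm grid with a bicubic
    spline (kx = ky = 3); nearest-neighbour resampling performed worst
    in the study above."""
    ny, nx = coarse.shape
    y, x = np.arange(ny) * spacing_mm, np.arange(nx) * spacing_mm
    spline = RectBivariateSpline(y, x, coarse, kx=3, ky=3)
    yi = np.arange(0.0, y[-1] + target_mm, target_mm)
    xi = np.arange(0.0, x[-1] + target_mm, target_mm)
    return spline(yi, xi)  # dense map, e.g. for gamma evaluation

coarse = np.random.rand(8, 8)          # hypothetical 5 mm-spacing map
fine = upsample_fluence(coarse, 5.0)   # ~1 mm grid
```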

  18. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  19. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  20. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
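
    As a concrete instance of the windowed-sinc family discussed above, here is a minimal Python sketch of Lanczos interpolation of a uniformly sampled signal at a fractional sample instant. The Lanczos window and half-width a = 3 are illustrative assumptions (one common choice among many), and the pre-filtering that B-spline methods require is not involved here.

```python
import numpy as np

def lanczos_interp(samples, t, a=3):
    """Windowed-sinc interpolation at (possibly fractional) time t, in
    sample units; the sinc kernel is truncated by a Lanczos window of
    half-width a samples."""
    n0 = int(np.floor(t))
    n = np.arange(n0 - a + 1, n0 + a + 1)       # the 2a nearest samples
    n = n[(n >= 0) & (n < len(samples))]        # clip at the signal edges
    x = t - n
    kernel = np.sinc(x) * np.sinc(x / a)        # sinc times Lanczos window
    return np.dot(samples[n], kernel)

fs = 100.0                                      # sample rate (Hz)
sig = np.sin(2 * np.pi * 7.0 * np.arange(200) / fs)
# resample at a Doppler-shifted, non-integer instant and compare with truth:
print(lanczos_interp(sig, 41.37), np.sin(2 * np.pi * 7.0 * 41.37 / fs))
```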

  1. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    PubMed

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data are a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one important method for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability that the real data fall within the imputed values. To solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared with the average interpolation and nearest-neighbor interpolation methods, the proposed method replaces right-censored data with interval-censored data, greatly improving the probability that the real data fall within the imputation interval. It then uses empirical distribution theory to estimate the survival function of the right-censored and interval-censored data. The results on numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness across different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of patients, which offers some help for medical survival data analysis.

  2. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.

  3. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
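
    A minimal Python sketch of the thin-plate spline step follows, using scipy's RBFInterpolator. The 8-dimensional image descriptor, the synthetic training data, and the omission of the incremental k-medians pruning are all assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical training data: each row of `features` is an image thumbnail
# reduced to a small descriptor; `chroma` holds the measured illumination
# chromaticities (r, g) for those training images.
rng = np.random.default_rng(1)
features = rng.uniform(0, 1, (150, 8))
chroma = rng.uniform(0.2, 0.5, (150, 2))

# Thin-plate spline interpolation over the non-uniformly sampled training space.
tps = RBFInterpolator(features, chroma, kernel='thin_plate_spline')

new_descriptor = rng.uniform(0, 1, (1, 8))   # descriptor of a new image
print(tps(new_descriptor))                   # estimated illuminant chromaticity (r, g)
```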

  4. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  5. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  6. Interpolation of diffusion weighted imaging datasets.

    PubMed

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R

    2014-12-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue compartments. Copyright © 2014. Published by Elsevier Inc.

  7. Studying the Global Bifurcation Involving Wada Boundary Metamorphosis by a Method of Generalized Cell Mapping with Sampling-Adaptive Interpolation

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng

    In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping (GCM) method. Integrations over one mapping step are replaced by third-order sampling-adaptive interpolations. An explicit formula for the interpolation error is derived, and a sampling-adaptive control switches on integrations to maintain the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated, with observations of boundary metamorphoses including full-to-partial and partial-to-partial transitions as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires one-thirtieth to one-fiftieth of the computational time of the previous GCM.

  8. Minimization of Poisson’s ratio in anti-tetra-chiral two-phase structure

    NASA Astrophysics Data System (ADS)

    Idczak, E.; Strek, T.

    2017-10-01

    One of the most important goals of modern materials science is designing structures which exhibit desired properties. These properties can be obtained by optimization methods which often use numerical calculations, e.g. the finite element method (FEM). This paper shows the results of topological optimization used to obtain the most negative possible Poisson's ratio of a two-phase composite. The shape is an anti-tetra-chiral two-dimensional unit cell of a lattice structure which has a negative Poisson's ratio when built of one solid material. The two phases used in the optimization are solid materials with positive Poisson's ratio and Young's modulus. The distribution of hard reinforcement material inside the soft matrix material in the anti-tetra-chiral domain influences the mechanical properties of the structure. The calculations show that the resultant structure has a negative Poisson's ratio up to eight times smaller than the homogeneous anti-tetra-chiral structure made of a single classic material. In the analysis, FEM is coupled with the Method of Moving Asymptotes (MMA) algorithm. The material property distribution is described by the shape interpolation scheme known as the Solid Isotropic Material with Penalization (SIMP) method.
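
    The SIMP interpolation mentioned above admits a one-line sketch. In the two-phase setting, a plausible form (an assumption for illustration; the paper's exact scheme may also interpolate Poisson's ratio and other moduli) blends the two solid phases with a penalization power p:

```python
def simp_two_phase(rho, E_soft, E_hard, p=3):
    """SIMP-style two-phase interpolation: rho = 0 gives the soft matrix,
    rho = 1 the hard reinforcement; the penalization power p pushes
    intermediate densities toward 0 or 1 during optimization."""
    return E_soft + (rho ** p) * (E_hard - E_soft)

# e.g. the element Young's modulus for a design variable rho in [0, 1]
E = simp_two_phase(0.5, E_soft=1.0, E_hard=10.0)   # 2.125 for p = 3
```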

  9. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. The neural network technique works well not only as an interpolating device but also as an extrapolating device to achieve blade designs from a given database. Two validating test cases are discussed.

  10. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    PubMed

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have higher accuracy; the MSEs of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significantly skewed distributions, and the spatial autocorrelation is strong. Using Kriging interpolation, the MSEs of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.

  11. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function code (NEGF) TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 106 atoms on workstation computers. The new features of both codes are demonstrated and bench-marked for relevant test systems.

  12. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  13. Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates

    NASA Astrophysics Data System (ADS)

    Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma

    2016-10-01

    This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large scale unconstrained optimization. On one hand, the common idea of our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating in some sense the inverse of the Hessian matrix. Since we deal with large scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contributions rely on using information on the objective function collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal we adopt an ad hoc modified-secant approach in order to possibly guarantee some additional theoretical properties.

  14. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing stretches in the monitoring data record affect data analysis and the safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or higher, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than six correlated points are used. The stress baseline value of the construction step should be calculated before interpolating missing data in the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
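
    A minimal Python sketch of the single-point (simple linear regression) case follows; the stress values are invented, and the daytime/nighttime split and construction-stage baseline correction described above are omitted.

```python
import numpy as np

def impute_missing(target, reference):
    """Fill gaps in one stress channel from a strongly correlated channel
    (correlation coefficient >= 0.9) via simple linear regression."""
    missing = np.isnan(target)
    a, b = np.polyfit(reference[~missing], target[~missing], deg=1)
    filled = target.copy()
    filled[missing] = a * reference[missing] + b
    return filled

# hypothetical daytime stress series (MPa); NaN marks lost samples
ref = np.array([10.2, 11.0, 12.1, 13.4, 12.8, 11.5])
tgt = np.array([20.5, 21.9, np.nan, 26.2, np.nan, 22.8])
print(impute_missing(tgt, ref))
```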

  15. Meshless Modeling of Deformable Shapes and their Motion

    PubMed Central

    Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.

    2010-01-01

    We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614

  16. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate the interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.

  17. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted public concern about the health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group level and the county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are scaled with the help of a factor under the assumption that the spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores the computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named the k-d tree, are adopted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
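
    A minimal sketch of the extension approach with a k-d tree follows; it is illustrative rather than the paper's implementation. Time is treated as an additional coordinate scaled by a factor c that balances spatial and temporal distances, so a single tree query supplies the nearest space-time neighbours for the IDW weights.

      import numpy as np
      from scipy.spatial import cKDTree

      def st_idw(xyt_obs, v_obs, xyt_query, c=1.0, k=12, power=2.0):
          # xyt_obs: (n, 3) array of (x, y, t) observations; v_obs: (n,) values;
          # xyt_query: (m, 3) query points. Time is scaled by c ("extension").
          scale = np.array([1.0, 1.0, c])
          tree = cKDTree(np.asarray(xyt_obs) * scale)
          dist, idx = tree.query(np.asarray(xyt_query) * scale, k=k)
          dist = np.maximum(dist, 1e-12)    # guard against exact coincidences
          w = 1.0 / dist**power             # inverse distance weights
          return np.sum(w * np.asarray(v_obs)[idx], axis=1) / np.sum(w, axis=1)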

  19. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

    Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features from clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), the interpolation method (linear, windowed-sinc and B-spline) and the sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828±0.021 and 0.25±0.09 mm, respectively). Therefore, B-spline registration with the NC cost function, B-spline interpolation and a 100% sampling percentage can be the foundation for developing an optimized atlas-based segmentation algorithm for intracochlear structures in clinical CT images.

  20. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom, when reconstructed from 128 rebinned projections using a conventional CS method, had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  2. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing a maximally productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, the number of fracturing stages, and fracture lengths) is defined by specifying a set of integer-ordered blocks in which to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables, such as bottom hole pressures or production rates, are real valued. Shale gas development problems can therefore be mathematically formulated as mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance of a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied to the mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.

  3. Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter

    2016-01-01

    This paper presents a holistic model order reduction (MOR) methodology and framework that integrates the key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together the ROMs at the grid points to build a global LPV ASE ROM valid at arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture the vehicle's dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by roughly 10X (from 180 to 19), and hence it holds great promise for robust ASE controller synthesis and novel vehicle design.

  4. Dynamics of open quantum systems by interpolation of von Neumann and classical master equations, and its application to quantum annealing

    NASA Astrophysics Data System (ADS)

    Kadowaki, Tadashi

    2018-02-01

    We propose a method to interpolate dynamics of von Neumann and classical master equations with an arbitrary mixing parameter to investigate the thermal effects in quantum dynamics. The two dynamics are mixed by intervening to continuously modify their solutions, thus coupling them indirectly instead of directly introducing a coupling term. This maintains the quantum system in a pure state even after the introduction of thermal effects and obtains not only a density matrix but also a state vector representation. Further, we demonstrate that the dynamics of a two-level system can be rewritten as a set of standard differential equations, resulting in quantum dynamics that includes thermal relaxation. These equations are equivalent to the optical Bloch equations at the weak coupling and asymptotic limits, implying that the dynamics cause thermal effects naturally. Numerical simulations of ferromagnetic and frustrated systems support this idea. Finally, we use this method to study thermal effects in quantum annealing, revealing nontrivial performance improvements for a spin glass model over a certain range of annealing time. This result may enable us to optimize the annealing time of real annealing machines.

  5. Estimating the Rate of Contaminated Soils of a Displaced Persons Camp Using an Interpolation Method

    NASA Astrophysics Data System (ADS)

    Tawfiq, Luma Naji Mohammed; Najm Abood, Israa

    2018-05-01

    The aim of this paper is to estimate the rate of soil contamination by using a suitable interpolation method as an accurate alternative tool for evaluating the concentrations of heavy metals in soil, which are then compared with standard reference values to determine the rate of contamination of the soil. Interpolation methods are extensively applied in models of different phenomena where experimental data must be used in computer studies that require expressions of those data. In this paper, the extended divided difference method in two dimensions is used to solve the suggested problem. The modified method is then applied to estimate the rate of soil contamination at a displaced persons camp in Diyala Governorate, Iraq.

  6. Rtop - an R package for interpolation along the stream network

    NASA Astrophysics Data System (ADS)

    Skøien, J. O.

    2009-04-01

    Geostatistical methods have been used only to a limited extent for estimation along stream networks, with a few exceptions (Gottschalk, 1993; Gottschalk, et al., 2006; Sauquet, et al., 2000; Skøien, et al., 2006). Interpolation of runoff characteristics is more complicated than for the traditional random variables estimated by geostatistical methods, as the measurements have a more complicated support and many catchments are nested. Skøien et al. (2006) presented the Top-kriging model, which takes these effects into account for interpolation of stream flow characteristics (exemplified by the 100-year flood). The method has here been implemented as a package in the statistical environment R (R Development Core Team, 2004). By taking advantage of the existing methods in R for working with spatial objects, and the extensive possibilities for visualizing the results, the package makes it considerably easier to apply the method to new data sets than earlier implementations. Gottschalk, L. 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., I. Krasovskaia, E. Leblois, and E. Sauquet. 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Development Core Team. 2004. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Sauquet, E., L. Gottschalk, and E. Leblois. 2000. Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J. O., R. Merz, and G. Blöschl. 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  7. View-interpolation of sparsely sampled sinogram using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Lee, Jongha; Cho, Suengryong

    2017-02-01

    Sparse-view sampling and the associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstruction, albeit with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to estimate the missing projection data, and compared its performance with that of other interpolation techniques.
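
    A toy PyTorch sketch of the idea follows (purely illustrative; the abstract does not publish an architecture): the network maps a sinogram whose missing view rows are zero-filled to a densely sampled sinogram, and would be trained with an MSE loss against fully sampled data.

      import torch
      import torch.nn as nn

      class ViewInterpCNN(nn.Module):
          # Maps a sinogram with zero-filled missing view rows to a dense
          # sinogram; input/output shape (batch, 1, views, detector_bins).
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1),
              )

          def forward(self, sparse_sino):
              return self.net(sparse_sino)

      model = ViewInterpCNN()
      loss_fn = nn.MSELoss()  # trained against fully sampled sinograms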

  8. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains the observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification of the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
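
    The unit-circle mechanics can be sketched in a few lines of NumPy (an illustration under simplifying assumptions: standard-normal fields and a placeholder forward_model returning simulated heads at the observation points; the conditioning machinery of Random Mixing is omitted). Weights (cos θ, sin θ) keep the mixture's covariance structure because cos²θ + sin²θ = 1; the expensive solver runs at n angles only, and the head solutions are interpolated periodically in θ before the search.

      import numpy as np

      def mix(field_a, field_b, theta):
          # Unit-circle weights preserve the covariance structure.
          return np.cos(theta) * field_a + np.sin(theta) * field_b

      def best_theta(field_a, field_b, forward_model, h_obs, n=16, n_fine=3600):
          # Run the forward model at n equally spaced angles only.
          thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
          sims = np.array([forward_model(mix(field_a, field_b, t)) for t in thetas])
          fine = np.linspace(0.0, 2 * np.pi, n_fine, endpoint=False)
          # Periodic linear interpolation of each simulated head around the circle.
          sims_fine = np.stack(
              [np.interp(fine, thetas, sims[:, j], period=2 * np.pi)
               for j in range(sims.shape[1])], axis=1)
          obj = np.sum((sims_fine - h_obs) ** 2, axis=1)   # data misfit
          return fine[np.argmin(obj)]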

  9. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) method for pricing real estate derivatives contingent on a real estate index. The method combines radial and polynomial basis functions, which guarantees that the interpolation scheme possesses the Kronecker property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
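
    SciPy's RBFInterpolator combines radial kernels with a low-order polynomial tail and interpolates exactly at the nodes, so it can serve as a generic stand-in for the radial-plus-polynomial construction described above. This is a sketch with synthetic node/value data, not the authors' pricing code.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      nodes = rng.uniform(0.0, 1.0, size=(200, 2))    # e.g. (index level, time)
      values = np.sin(2 * np.pi * nodes[:, 0]) * np.exp(-nodes[:, 1])

      # Radial basis plus degree-1 polynomial tail; the interpolant passes
      # exactly through the nodes (the Kronecker property noted above).
      interp = RBFInterpolator(nodes, values, kernel="thin_plate_spline", degree=1)
      queries = rng.uniform(0.0, 1.0, size=(5, 2))
      print(interp(queries))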

  10. Antenna pattern interpolation by generalized Whittaker reconstruction

    NASA Astrophysics Data System (ADS)

    Tjonneland, K.; Lindley, A.; Balling, P.

    Whittaker reconstruction is an effective tool for interpolation of band limited data. Whittaker originally introduced the interpolation formula termed the cardinal function as the function that represents a set of equispaced samples but has no periodic components of period less than twice the sample spacing. It appears that its use for reflector antennas was pioneered in France. The method is now a useful tool in the analysis and design of multiple beam reflector antenna systems. A good description of the method has been given by Bucci et al. This paper discusses some problems encountered with the method and their solution.
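
    For reference, the cardinal-series formula f(x) = Σ_n f(nΔ) sinc((x − nΔ)/Δ) can be coded directly; the following NumPy sketch is illustrative (equispaced samples with spacing dx are an assumption of the example):

      import numpy as np

      def whittaker_reconstruct(samples, dx, x):
          # Cardinal-series (sinc) reconstruction of band-limited data from
          # equispaced samples; np.sinc is the normalized sinc sin(pi t)/(pi t).
          n = np.arange(len(samples))
          return np.sum(
              samples[:, None] * np.sinc((x[None, :] - n[:, None] * dx) / dx),
              axis=0)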

  11. Visualizing and Understanding the Components of Lagrange and Newton Interpolation

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
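
    As a companion to the graphical view, a short NumPy sketch computes the divided-difference coefficients of the Newton form and evaluates the interpolating polynomial (a generic textbook construction, not code from the article):

      import numpy as np

      def newton_coeffs(x, y):
          # In-place divided-difference table: c[j] multiplies the component
          # (t - x[0]) ... (t - x[j-1]) of the Newton form.
          c = np.array(y, dtype=float)
          for j in range(1, len(x)):
              c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
          return c

      def newton_eval(x_nodes, c, t):
          # Horner-like nested evaluation of the Newton form.
          p = c[-1]
          for k in range(len(c) - 2, -1, -1):
              p = p * (t - x_nodes[k]) + c[k]
          return p

      x = np.array([0.0, 1.0, 2.0])
      c = newton_coeffs(x, [1.0, 3.0, 7.0])   # data from t**2 + t + 1
      print(newton_eval(x, c, 3.0))           # 13.0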

  12. An Extended Kriging Method to Interpolate Near-Surface Soil Moisture Data Measured by Wireless Sensor Networks

    PubMed Central

    Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng

    2017-01-01

    In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method to interpolate with the aid of remote sensing images. The underlying idea is to extend the traditional Kriging by introducing spectral variables, and operating on spatial and spectral combined space. The algorithm has been applied to WSN-measured soil moisture data in HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods are applied: Ordinary Kriging (OK), which used WSN data only, Co-kriging and KED, both of which integrated remote sensing data as covariate. Visual inspections indicate that the result from Extended Kriging shows more spatial details than that of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351

  13. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.

  14. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with a lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, a higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
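
    Both interpolators used in the paper are available in SciPy; the following minimal sketch upsamples a low-frequency ECG segment with either method (illustrative only; the sampling rates are example values, not those of the study's databases).

      import numpy as np
      from scipy.interpolate import PchipInterpolator, CubicSpline

      def upsample_ecg(ecg, fs_in, fs_out, method="pchip"):
          # Resample a low-sampling-frequency ECG segment onto a denser time
          # grid using shape-preserving PCHIP or a cubic spline.
          t_in = np.arange(len(ecg)) / fs_in
          t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
          interp = (PchipInterpolator(t_in, ecg) if method == "pchip"
                    else CubicSpline(t_in, ecg))
          return t_out, interp(t_out)

      # e.g. t, ecg_hi = upsample_ecg(ecg_lo, fs_in=128.0, fs_out=512.0)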

  15. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal objects such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to generate metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifact, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts, even in commercial CT systems.
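
    For context, the baseline linear-interpolation MAR that the proposed algorithm is compared against can be sketched in a few lines of NumPy (illustrative; it assumes a sinogram with one projection angle per row and a boolean metal-trace mask produced by the segmentation and forward-projection steps).

      import numpy as np

      def interpolate_metal_trace(sinogram, metal_trace):
          # Replace detector bins shadowed by metal (boolean mask, same shape
          # as the sinogram) with 1-D linear interpolation along each row.
          corrected = sinogram.astype(float).copy()
          cols = np.arange(sinogram.shape[1])
          for i in range(sinogram.shape[0]):     # one projection angle per row
              bad = metal_trace[i]
              if bad.any() and not bad.all():
                  corrected[i, bad] = np.interp(
                      cols[bad], cols[~bad], sinogram[i, ~bad])
          return corrected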

  16. The Use of Daily Geodetic UT1 and LOD Data in the Optimal Estimation of UT1 and LOD With the JPL Kalman Earth Orientation Filter

    NASA Technical Reports Server (NTRS)

    Freedman, A. P.; Steppe, J. A.

    1995-01-01

    The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.

  17. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    NASA Astrophysics Data System (ADS)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation, while NN gave the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods need to be explored further.

  19. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method combines the finite difference weighted essentially non-oscillatory (FD-WENO) method in space with the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method, which introduces an auxiliary scalar field ψ, is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to the derivatives of ψ require neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the nonlinear WENO weights is used in smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.

  20. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network

    PubMed Central

    Zhou, Shenglu; Su, Quanlong; Yi, Haomin

    2017-01-01

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City, and the geo-accumulation index was selected as the pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results from the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSEs of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significantly skewed distributions, and spatial autocorrelation is strong. Using Kriging interpolation, the MSEs of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363

  1. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

    Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level, 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas using the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) interpolation than with Gaussian kriging, spherical kriging or cokriging when analyzing data from wells across the whole of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and was higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001), regardless of interpolation method. In regression analysis, the best models were obtained when well depth and/or elevation were entered into the model as covariates, regardless of area/aquifer or interpolation method, and models with IDW were better than those with kriging in every area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels depends on both the interpolation method and the wells' geographic distribution and characteristics in Texas. Taking well depth and elevation into the regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
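
    The leave-one-out comparison criterion is straightforward to reproduce for IDW; this NumPy sketch (illustrative, not the study's code) estimates each well from all the others and returns the measured-versus-estimated correlation coefficient.

      import numpy as np

      def idw_loocv_r(coords, values, power=2.0):
          # coords: (n, 2) well locations; values: (n,) arsenic concentrations.
          n = len(values)
          est = np.empty(n)
          for i in range(n):
              d = np.linalg.norm(coords - coords[i], axis=1)
              d[i] = np.inf                       # leave the i-th well out
              w = 1.0 / np.maximum(d, 1e-12) ** power
              est[i] = np.sum(w * values) / np.sum(w)
          return np.corrcoef(values, est)[0, 1]   # the paper's comparison metric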

  2. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

    Disease prevalence can be spatially analysed to provide support for service implementation and health care planning, and these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique for visualising disease admissions, helping to prevent misinterpretation of data and less optimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of combining information on the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS's spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders. CartIS decreases the number of extreme positive values not present in the underlying data that can be found in interpolation surfaces. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation communication techniques that highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  4. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, being restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, thus resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, following a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is performed for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is applied to recover the wavefront phase. Simulation results reflect the superiority of our method for homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  5. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, and GPS devices: an Ashtech Promark 500 base and a Thales Z-Max rover. A total of 12,851 points were gathered. In order to obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic interpolation method was ROF multi-quadratic, and the best geostatistical method was ordinary cokriging. The mean quadratic error of both methods was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  6. Interpolated memory tests reduce mind wandering and improve learning of online lectures.

    PubMed

    Szpunar, Karl K; Khan, Novall Y; Schacter, Daniel L

    2013-04-16

    The recent emergence and popularity of online educational resources brings with it challenges for educators to optimize the dissemination of online content. Here we provide evidence that points toward a solution for the difficulty that students frequently report in sustaining attention to online lectures over extended periods. In two experiments, we demonstrate that the simple act of interpolating online lectures with memory tests can help students sustain attention to lecture content in a manner that discourages task-irrelevant mind wandering activities, encourages task-relevant note-taking activities, and improves learning. Importantly, frequent testing was associated with reduced anxiety toward a final cumulative test and also with reductions in subjective estimates of cognitive demand. Our findings suggest a potentially key role for interpolated testing in the development and dissemination of online educational content.

  8. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
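
    A hedged sketch of the zero-padding interpolation follows (an illustration of the technique, not the authors' code): padding the windowed spatial-domain data before the inverse FFT samples the local Rayleigh spectra on a grid pad_factor times finer, so the cross-correlation peak, and hence the strain-induced spectral shift, can be located with sub-bin precision.

      import numpy as np

      def spectral_shift(ref_seg, meas_seg, pad_factor=16):
          # ref_seg, meas_seg: windowed spatial-domain segments of equal length.
          n = len(ref_seg)
          m = pad_factor * n
          # One-sided zero padding before the inverse FFT interpolates spectra.
          ref_spec = np.abs(np.fft.ifft(ref_seg, n=m))
          meas_spec = np.abs(np.fft.ifft(meas_seg, n=m))
          # Cross-correlate the finely sampled spectra and locate the peak.
          xcorr = np.correlate(meas_spec - meas_spec.mean(),
                               ref_spec - ref_spec.mean(), mode="full")
          lag = np.argmax(xcorr) - (m - 1)
          return lag / pad_factor       # spectral shift in original-bin units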

  9. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten the health of forests, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries toward forest management. The existing forest ecosystem survey method is based on monitoring at sampling points, which makes it difficult to use the results for forest management: Korea surveys only a small part of the forest area, even though forests occupy 63.7% of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method for interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity (Shannon's index) data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, the Kriging method and the IDW (Inverse Distance Weighted) method, were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were also used. As a result, the Kriging method was the most accurate.

  10. A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for high Reynolds number laminar flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1988-01-01

    A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions and the pressure is interpolated using linear shape functions. For the two-dimensional case, the pressure is defined on a triangular element which is contained inside the complete biquadratic element for the velocity variables; for the three-dimensional case, the pressure is defined on a tetrahedral element which is again contained inside the complete tri-quadratic element. Thus the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow for Reynolds numbers of 400 through 10,000; a laminar backward-facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with those of finite difference methods as well as with available experimental data. A finite element computer program for incompressible, laminar flows is presented.

  11. Assimilation of remote sensing observations into a sediment transport model of China's largest freshwater lake: spatial and temporal effects.

    PubMed

    Zhang, Peng; Chen, Xiaoling; Lu, Jianzhong; Zhang, Wei

    2015-12-01

    Numerical models are important tools that are used in studies of sediment dynamics in inland and coastal waters, and these models can now benefit from the use of integrated remote sensing observations. This study explores a scheme for assimilating remotely sensed suspended sediment (from charge-coupled device (CCD) images obtained from the Huanjing (HJ) satellite) into a two-dimensional sediment transport model of Poyang Lake, the largest freshwater lake in China. Optimal interpolation is used as the assimilation method, and model predictions are obtained by combining four remote sensing images. The parameters for optimal interpolation are determined through a series of assimilation experiments evaluating the sediment predictions against field measurements. The model with assimilation of remotely sensed sediment reduces the root-mean-square error of the predicted sediment concentrations by 39.4% relative to the model without assimilation, demonstrating the effectiveness of the assimilation scheme. The spatial effect of assimilation is explored by comparing model predictions with remotely sensed sediment, revealing that the model with assimilation generates reasonable spatial distribution patterns of suspended sediment. The temporal effect of assimilation on the model's predictive capabilities varies spatially, with an average temporal effect of approximately 10.8 days. The current velocities, which dominate the rate and direction of sediment transport, most likely account for the spatial differences in the temporal effect of assimilation on model predictions.
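
    Optimal interpolation, as used here and in several of the records below, corrects a model background state toward the observations with weights set by the assumed error covariances. A generic sketch of the analysis step (matrix names follow the usual data assimilation convention, not this paper's notation):

      import numpy as np

      def oi_update(xb, B, H, R, y):
          # xb: background state; B: background error covariance
          # H: observation operator; R: observation error covariance; y: observations
          # Analysis: xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1
          K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
          return xb + K @ (y - H @ xb)

    Tuning the parameters of B (e.g. its decorrelation scales), as done in this study's assimilation experiments, directly controls how far each observation's influence spreads.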

  12. Multi-level adaptive finite element methods. 1: Variation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.

  13. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem; in particular, very hard multivariate integration and an approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
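
    For reference, the FPD objective can be written as the Kullback-Leibler divergence of the actual closed-loop description f from the ideal one f^I (generic notation, not necessarily the authors'):

      \mathcal{D}\big(f \,\|\, f^{I}\big) \;=\; \int f(x)\,\ln\frac{f(x)}{f^{I}(x)}\,\mathrm{d}x ,

    minimized over the admissible randomized control strategies; dynamic programming on this objective is what the adaptive critic approximates.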

  14. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We, therefore, construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows easy integration of information on graph label similarities into the optimization problem and, therefore, labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
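
    The path-following idea itself is generic: minimize F_lam = (1 - lam) F_convex + lam F_concave while sweeping lam from 0 to 1, warm-starting each subproblem from the previous solution. A hedged sketch using projected gradient steps (the gradient callables, the projection, and all parameter values are placeholders, not the paper's algorithm):

      import numpy as np

      def path_following(grad_cvx, grad_ccv, project, x, steps=21, inner=100, lr=0.01):
          # Sweep the interpolation parameter from the convex relaxation
          # (lam = 0) to the concave relaxation (lam = 1), tracking a local
          # minimizer of the interpolated objective.
          for lam in np.linspace(0.0, 1.0, steps):
              for _ in range(inner):
                  g = (1.0 - lam) * grad_cvx(x) + lam * grad_ccv(x)
                  x = project(x - lr * g)   # e.g. projection onto doubly
                                            # stochastic matrices
          return x

    At lam = 1 the objective shares its global minimum with the original combinatorial problem, which is why the endpoint of the path is taken as the approximate matching.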

  15. Effects of empty bins on image upscaling in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2017-07-01

    This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme has been developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods have been used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods have been quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods mentioned.

  16. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way, the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  17. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging systems employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
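
    The statistical heart of the method, GP regression with a noise-aware kernel, is compact to write down. A dense, naive O(N^3) sketch for small patches (kernel choice and hyperparameters are illustrative; the paper's contribution is the fast exact O(N^(3/2)) solver exploiting the grid structure, which is not reproduced here):

      import numpy as np

      def gp_interpolate(X, y, Xq, noise_var, length_scale=1.0, signal_var=1.0):
          # Squared-exponential (RBF) covariance between two point sets.
          def k(A, B):
              d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
              return signal_var * np.exp(-0.5 * d2 / length_scale**2)
          # Sensor noise enters on the diagonal and regularizes the solve.
          K = k(X, X) + noise_var * np.eye(len(X))
          return k(Xq, X) @ np.linalg.solve(K, y)   # posterior mean at Xq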

  18. An edge-directed interpolation method for fetal spine MR images.

    PubMed

    Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin

    2013-10-10

    Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when suspected spinal malformations occur while ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation for higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry heavy structural messages of objects in visual scenes, which doctors use to detect suspicious regions, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. This method takes edge messages from the Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images at the targeted factor by the bilinear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally, an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated using six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. From the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROI shows that the proposed method maintains better consistency in edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable in real-time situations. The iterative curvature-based interpolation (ICBI) method may result in crisper edges, while the other three methods are sensitive to noise and artifacts.

  19. An interpolated activity during the knowledge-of-results delay interval eliminates the learning advantages of self-controlled feedback schedules.

    PubMed

    Carter, Michael J; Ste-Marie, Diane M

    2017-03-01

    The learning advantages of self-controlled knowledge-of-results (KR) schedules compared to yoked schedules have been linked to the optimization of the informational value of the KR received for the enhancement of one's error-detection capabilities. This suggests that information-processing activities that occur after motor execution, but prior to receiving KR (i.e., the KR-delay interval), may underlie self-controlled KR learning advantages. The present experiment investigated whether self-controlled KR learning benefits would be eliminated if an interpolated activity was performed during the KR-delay interval. Participants practiced a waveform matching task that required two rapid elbow extension-flexion reversals in one of four groups using a factorial combination of choice (self-controlled, yoked) and KR-delay interval (empty, interpolated). The waveform had specific spatial and temporal constraints, and an overall movement time goal. The results indicated that the self-controlled + empty group had superior retention and transfer scores compared to all other groups. Moreover, the self-controlled + interpolated and yoked + interpolated groups did not differ significantly in retention and transfer; thus, the interpolated activity eliminated the typically found learning benefits of self-controlled KR. No significant differences were found between the two yoked groups. We suggest the interpolated activity interfered with information-processing activities specific to self-controlled KR conditions that occur during the KR-delay interval and that these activities are vital for reaping the associated learning benefits. These findings add to the growing evidence that challenges the motivational account of self-controlled KR learning advantages and instead highlight informational factors associated with the KR-delay interval as an important variable for motor learning under self-controlled KR schedules.

  20. Using Chebyshev polynomial interpolation to improve the computational efficiency of gravity models near an irregularly-shaped asteroid

    NASA Astrophysics Data System (ADS)

    Hu, Shou-Cun; Ji, Jiang-Hui

    2017-12-01

    In asteroid rendezvous missions, the dynamical environment near an asteroid's surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lowers the efficiency of calculating their gravitational fields with the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
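
    The building block of such a scheme is a per-cell Chebyshev interpolant evaluated in place of the expensive polyhedral field. A one-dimensional sketch using NumPy's Chebyshev utilities (the paper applies the idea per spherical coordinate in 3-D; names and the node choice here are illustrative):

      import numpy as np

      def cheb_fit(f, a, b, n):
          # Sample f at n Chebyshev nodes mapped to [a, b] and return the
          # coefficients of the degree-(n-1) interpolant.
          k = np.arange(n)
          t = np.cos((2 * k + 1) * np.pi / (2 * n))       # nodes in [-1, 1]
          x = 0.5 * (a + b) + 0.5 * (b - a) * t
          return np.polynomial.chebyshev.chebfit(t, f(x), n - 1)

      def cheb_eval(coef, a, b, x):
          # Map x back to [-1, 1] and evaluate the Chebyshev series.
          t = (2 * x - (a + b)) / (b - a)
          return np.polynomial.chebyshev.chebval(t, coef)

    Once the coefficients are tabulated per cell, evaluating the acceleration reduces to a short polynomial recurrence, which is the source of the reported speed-up.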

  1. A general method for generating bathymetric data for hydrodynamic computer models

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using the linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)

  2. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve differential equations. Since a Geographical Information System's (GIS) main task is the integration and manipulation of data covering a geographic region, the use of a GIS for implementing simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every cell to be computed. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensor image. Data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. Of several existing methods to estimate slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (which uses only 4 neighbor cells) and shown to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original and using a nearest cell approach for elevations at a resolution lower (larger) than the original; a sketch of the bilinear case follows this record. For material information the nearest neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of visualization allows the use of the nearest neighbor method for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of values at the closest four cell centers, and its interpolation performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
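
    A minimal sketch of the bilinear rule described above, for a regular raster with unit cell spacing (array layout and names are ours):

      import numpy as np

      def bilinear(grid, x, y):
          # Indices of the lower-left of the four surrounding cell centers.
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          fx, fy = x - x0, y - y0            # fractional offsets in [0, 1)
          # Distance-weighted average of the four neighbors.
          return ((1 - fx) * (1 - fy) * grid[y0, x0]
                  + fx * (1 - fy) * grid[y0, x0 + 1]
                  + (1 - fx) * fy * grid[y0 + 1, x0]
                  + fx * fy * grid[y0 + 1, x0 + 1])

    For categorical rasters such as base material, the lookup instead returns the value of the nearest cell, since averaging category codes is meaningless.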

  3. Bilinear modeling and nonlinear estimation

    NASA Technical Reports Server (NTRS)

    Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.

    1989-01-01

    New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam based on on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This contrasts with the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods, the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.

  4. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable, meaning that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as "physically reasonable" and "visually pleasing" is examined.
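
    The trade-off described here, smoothness versus shape preservation, is easy to demonstrate in one dimension with SciPy, whose PchipInterpolator implements the monotone cubic scheme associated with this author's line of work (the data values below are invented for illustration):

      import numpy as np
      from scipy.interpolate import PchipInterpolator, CubicSpline

      T = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # temperature grid (arbitrary units)
      P = np.array([0.1, 0.5, 0.9, 1.0, 1.05])   # pressure samples, monotone in T

      pchip = PchipInterpolator(T, P)   # shape-preserving: stays monotone
      spline = CubicSpline(T, P)        # smoother, but may overshoot the data

      Tq = np.linspace(1.0, 16.0, 200)
      # True here would indicate a spurious bump above the data range.
      print(spline(Tq).max() > P.max(), pchip(Tq).max() <= P.max() + 1e-12)

    On data with a sharp bend, an unconstrained cubic spline typically exhibits exactly the unsupported wiggles the abstract warns about, while the shape-preserving interpolant does not.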

  5. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  6. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.

  7. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. The combination of spatial and temporal approaches offers interpolative capabilities superior to any single method; in fact, generation of continuous data fields requires such a hybrid approach.
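
    The temporal half of such a hybrid is essentially per-pixel gap-filling along the time axis, which is a one-liner with NumPy (a sketch assuming a regularly spaced composite time series; spatial filling would then handle pixels whose entire series is missing):

      import numpy as np

      def temporal_fill(series):
          # Linearly interpolate across NaN gaps in one pixel's LAI time series.
          t = np.arange(len(series))
          ok = ~np.isnan(series)
          return np.interp(t, t[ok], series[ok])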

  8. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where the data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of the data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbrae, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK), kriging with external drift (KED), where the spatial pattern of surface velocity is used as a proxy for that of dH/dt, and their spatiotemporal equivalents (ST-OK and ST-KED).

  9. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we deduce a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, instead of the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms the use of spatial interpolation alone. The PC-VI based moving BSA method is then developed: PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. Evaluation is performed on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI does well in mining CB redundancy; therefore, it has further potential in CBCT studies.

  10. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    PubMed

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions from the collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, obtained for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer, indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
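
    Tensor-product interpolation over a pose grid factorizes into successive 1-D interpolations, one per pose parameter. A two-parameter sketch interpolating whole images over a (depth, angle) grid (grid values, shapes and names are invented for illustration):

      import numpy as np

      def interp_pose(images, depths, angles, d, a):
          # images: array of shape (len(depths), len(angles), H, W).
          # Locate the grid cell bracketing the query pose (d, a).
          i = np.clip(np.searchsorted(depths, d) - 1, 0, len(depths) - 2)
          j = np.clip(np.searchsorted(angles, a) - 1, 0, len(angles) - 2)
          fd = (d - depths[i]) / (depths[i + 1] - depths[i])
          fa = (a - angles[j]) / (angles[j + 1] - angles[j])
          # Interpolate along depth first, then along angle (tensor product).
          lo = (1 - fd) * images[i, j]     + fd * images[i + 1, j]
          hi = (1 - fd) * images[i, j + 1] + fd * images[i + 1, j + 1]
          return (1 - fa) * lo + fa * hi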

  11. Segmentation of arterial vessel wall motion to sub-pixel resolution using M-mode ultrasound.

    PubMed

    Fancourt, Craig; Azer, Karim; Ramcharan, Sharmilee L; Bunzel, Michelle; Cambell, Barry R; Sachs, Jeffrey R; Walker, Matthew

    2008-01-01

    We describe a method for segmenting arterial vessel wall motion to sub-pixel resolution, using the returns from M-mode ultrasound. The technique involves measuring the spatial offset between all pairs of scans from their cross-correlation, converting the spatial offsets to relative wall motion through a global optimization, and finally translating from relative to absolute wall motion by interpolation over the M-mode image. The resulting detailed wall distension waveform has the potential to enhance existing vascular biomarkers, such as strain and compliance, as well as enable new ones.

  12. An RBF-PSO based approach for modeling prostate cancer

    NASA Astrophysics Data System (ADS)

    Perracchione, Emma; Stura, Ilaria

    2016-06-01

    Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy may arise in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.

  13. Issues in Data Fusion for Satellite Aerosol Measurements for Applications with GIOVANNI System at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Gopalan, Arun; Zubko, Viktor; Leptoukh, Gregory G.

    2008-01-01

    We look at issues, barriers and approaches for Data Fusion of satellite aerosol data as available from the GES DISC GIOVANNI Web Service. Daily Global Maps of AOT from a single satellite sensor alone contain gaps that arise due to various sources (sun glint regions, clouds, orbital swath gaps at low latitudes, bright underlying surfaces etc.). The goal is to develop a fast, accurate and efficient method to improve the spatial coverage of the Daily AOT data to facilitate comparisons with Global Models. Data Fusion may be supplemented by Optimal Interpolation (OI) as needed.

  14. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction

    PubMed Central

    Rossi Espagnet, M.C.; Bangiyev, L.; Haber, M.; Block, K.T.; Babb, J.; Ruggiero, V.; Boada, F.; Gonen, O.; Fatterpekar, G.M.

    2015-01-01

    BACKGROUND AND PURPOSE: The pituitary gland is located outside of the blood-brain barrier. The dynamic T1-weighted contrast-enhanced sequence is considered the gold standard for evaluating this region. However, it does not allow assessment of the intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the individual components of the pituitary gland (the anterior and posterior gland and the median eminence) and areas of differential enhancement, and to optimize the study acquisition time. MATERIALS AND METHODS: A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with the golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and to optimize the study acquisition time. RESULTS: Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P < .005). Time-optimization analysis demonstrated that 120 seconds is ideal for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. CONCLUSIONS: This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. PMID:25953760

  15. A Critical Comparison of Some Methods for Interpolation of Scattered Data

    DTIC Science & Technology

    1979-12-01

    ...because faster evaluation of the local interpolants is possible. All things considered, the method of choice here seems to be the Modified Quadratic... [truncated snippet; the record's cited references include Hardy, Rolland L., "Multiquadric equations of topography and other irregular surfaces," J. of Geophysical Research 76 (1971) 1905-1915]

  16. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  17. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    PubMed

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

    We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, an approximation to (b) the potential surface, its (c) gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to (a) compute potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene), where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of the sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.

  18. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
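
    Conservation-constrained interpolation of the kind described here reduces to an equality-constrained least-squares problem, solvable through the KKT system. A generic sketch (not the paper's discretization; A and b encode the interpolation fit, C and d the conservation constraints):

      import numpy as np

      def constrained_lstsq(A, b, C, d):
          # Minimize ||A x - b||^2 subject to C x = d via the KKT system:
          #   [A^T A  C^T] [x     ]   [A^T b]
          #   [C      0  ] [lambda] = [d    ]
          n, m = A.shape[1], C.shape[0]
          KKT = np.block([[A.T @ A, C.T],
                          [C, np.zeros((m, m))]])
          rhs = np.concatenate([A.T @ b, d])
          return np.linalg.solve(KKT, rhs)[:n]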

  19. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  20. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the view of the entire 360 degrees around the optical axis onto an annular plane based on flat cylinder perspective. Owing to the infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays an important role in applications such as robot vision, surveillance and virtual reality. An annular image needs to be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images, considering the characteristics of the panoramic image. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic interpolation.

  1. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is given by the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  2. Daily air temperature interpolated at high spatial resolution over a large mountainous region

    USGS Publications Warehouse

    Dodson, R.; Marks, D.

    1997-01-01

    Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830,000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using three lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the three LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C) and very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
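
    The LLRA procedure, reduce to sea level with a constant lapse rate, interpolate, then map back onto the DEM, fits in a few lines. A sketch (the inverse-squared-distance weighting follows the abstract; names and the default lapse rate of -6.5 K/km are illustrative):

      import numpy as np

      def llra(xy_obs, t_obs, z_obs, xy_grid, z_grid, lapse=-0.0065):
          # 1) Reduce station temperatures to sea level: t_sea = t - lapse * z.
          t_sea = t_obs - lapse * z_obs
          # 2) Inverse-squared-distance interpolation to the grid locations.
          d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-9) ** 2
          t_grid_sea = (w @ t_sea) / w.sum(axis=1)
          # 3) Map back up to the DEM elevation of each grid cell.
          return t_grid_sea + lapse * z_grid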

  3. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.

  4. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar HydroStar 4300 with GPS devices, an Ashtech ProMark 500 base, and a Thales Z-Max® rover. A total of 12,851 points were gathered. In order to find the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana; and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging; the root mean square error of both methods was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
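
    Multiquadric RBF interpolation of scattered soundings of this kind is available off the shelf in SciPy. A sketch with synthetic stand-ins for the survey points (coordinates, depths, and the shape parameter epsilon are invented; for real data epsilon would be tuned, e.g. by cross-validation):

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 1000, size=(500, 2))           # sounding positions (m)
      depth = np.sin(xy[:, 0] / 200) + 0.001 * xy[:, 1]  # synthetic depths

      rbf = RBFInterpolator(xy, depth, kernel="multiquadric", epsilon=0.01)
      gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      depth_grid = rbf(grid).reshape(gx.shape)           # interpolated raster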

  5. Data assimilation of citizen collected information for real-time flood hazard mapping

    NASA Astrophysics Data System (ADS)

    Sayama, T.; Takara, K. T.

    2017-12-01

    Many studies of data assimilation in hydrology have focused on the integration of satellite remote sensing and in-situ monitoring data into hydrologic or land surface models. For flood predictions also, recent studies have demonstrated the assimilation of remotely sensed inundation information into flood inundation models. In actual flood disaster situations, citizen-collected information, including local reports by residents and rescue teams and, more recently, tweets via social media, also contains valuable information. The main interest of this study is how to effectively use such citizen-collected information for real-time flood hazard mapping. Here we propose a new data assimilation technique based on pre-conducted ensemble inundation simulations that updates inundation depth distributions sequentially as local data become available. The proposed method is composed of the following two steps; a sketch of the first follows this record. The first step is based on a weighted average of preliminary ensemble simulations, whose weights are updated by a Bayesian approach. The second step is based on an optimal interpolation, where the covariance matrix is calculated from the ensemble simulations. The proposed method was applied to case studies including an actual flood event. Two situations are considered: a more idealized one, assuming that continuous flood inundation depth information is available at multiple locations, and a more realistic one for such a severe flood disaster, assuming that only uncertain and non-continuous information is available to be assimilated. The results show that, in the first, idealized situation, the large-scale inundation during the flooding was estimated reasonably, with an RMSE < 0.4 m on average. For the second, more realistic situation, the error becomes larger (RMSE 0.5 m) and the impact of the optimal interpolation becomes comparatively less effective. Nevertheless, the applications of the proposed data assimilation method demonstrated its high potential for assimilating citizen-collected information for real-time flood hazard mapping in the future.
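
    The first step, Bayesian re-weighting of pre-computed ensemble members as reports arrive, can be sketched as follows (a Gaussian likelihood for the reported depth is our assumption; the paper does not necessarily use this form):

      import numpy as np

      def update_weights(w, member_depths_at_obs, reported_depth, obs_var):
          # Multiply each member's prior weight by the likelihood of the
          # citizen-reported depth under that member, then renormalize.
          lik = np.exp(-0.5 * (member_depths_at_obs - reported_depth) ** 2 / obs_var)
          w = w * lik
          return w / w.sum()

    The analysis field is then the weighted average of the member inundation maps, and the second step refines it locally with the optimal interpolation analysis equation sketched earlier in this section.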

  6. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.

  7. On the Quality of Velocity Interpolation Schemes for Marker-In-Cell Methods on 3-D Staggered Grids

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Pusok, A. E.; Popov, A.

    2015-12-01

    The marker-in-cell method is generally considered to be a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e. rock type or composition) in geodynamic problems or incompressible Stokes problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an immobile, Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without preserving the zero divergence of the velocity field at the interpolated locations (i.e. non-conservatively). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Jenny et al., 2001), and this may eventually result in empty grid cells, a serious numerical violation of the marker-in-cell method. Solutions to this problem include using larger mesh resolutions and/or marker densities, or repeatedly controlling the marker distribution (i.e. inject/delete), which, however, does not have an established physical background. To remedy this at low computational cost, Jenny et al. (2001) and Meyer and Jenny (2004) proposed a simple, conservative velocity interpolation (CVI) scheme for 2-D staggered grids, while Wang et al. (2015) extended the formulation to 3-D finite element methods. Here, we follow up on these studies and report on the quality of velocity interpolation methods for 2-D and 3-D staggered grids. We adapt the formulations from both Jenny et al. (2001) and Wang et al. (2015) for use on 3-D staggered grids, where the velocity components have different node locations, as compared to finite elements, where they share the same node location. We test the different interpolation schemes (CVI and non-CVI) in combination with different advection schemes (Euler, RK2 and RK4) and with/without marker control on Stokes problems with strong velocity gradients, which are discretized using a finite difference method. We show that a conservative formulation reduces the dispersion and clustering of markers and that the density of markers remains steady over time without the need for additional marker control. References: Jenny et al. (2001), J Comp Phys, 166, 218-252; Meyer and Jenny (2004), Proc Appl Math Mech, 4, 466-467; Wang et al. (2015), G3, Vol. 16. Funding was provided by the ERC Starting Grant #258830.

  8. Level 4 Global and European Chl-a Daily Analyses for End Users and Data Assimilation in the Frame of the Copernicus-Marine Environment Monitoring Service

    NASA Astrophysics Data System (ADS)

    Saulquin, Bertrand; Gohin, Francis; Garnesson, Philippe; Demaria, Julien; Mangin, Antoine; Fanton d'Andon, Odile

    2016-08-01

    The Level-4 daily chl-a products combine a water-type-based merging of chl-a estimates with an optimal interpolation based on the kriging method with regional anisotropic models [1, 2]. The Level-4 products provide a globally continuous (cloud-free) estimate of the surface chl-a concentration at 4 km resolution over the world and 1 km resolution over Europe. They gather MODIS, MERIS, SeaWiFS, VIIRS and OLCI daily observations from 1998 to the present. The Level-4 product spares end users the typical data gaps observed during cloudy conditions and the historical multiplicity of algorithms arising from case 1 (oligotrophic) and case 2 (turbid) water issues in ocean colour [3, 4]. A total product uncertainty, i.e. a combination of the interpolation and estimation errors, is provided for each daily product. The L4 products are freely distributed in the frame of the Copernicus Marine Environment Monitoring Service.

  9. Suitability of Spatial Interpolation Techniques in Varying Aquifer Systems of a Basaltic Terrain for Monitoring Groundwater Availability

    NASA Astrophysics Data System (ADS)

    Katpatal, Y. B.; Paranjpe, S. V.; Kadu, M. S.

    2017-12-01

    Geological formations act as aquifer systems, and variability in the hydrological properties of aquifers controls groundwater occurrence and dynamics. To understand groundwater availability in any terrain, spatial interpolation techniques are widely used. It has been observed that, with varying hydrogeological conditions, even in a geologically homogeneous setting, there are large variations in observed groundwater levels. Hence, the accuracy of groundwater estimation depends on the use of appropriate interpolation techniques. The study area is the Venna Basin of Maharashtra State, India, a basaltic terrain with four types of basaltic layers laid down horizontally: weathered vesicular basalt, weathered and fractured basalt, highly weathered unclassified basalt, and hard massive basalt. Groundwater levels vary with topography, as the different basalt types occur at varying depths. Local stratigraphic profiles were generated for the different basaltic terrains. The present study aims to interpolate groundwater levels within the basin and to check the correlation between estimated and observed values. Groundwater levels for 125 observation wells situated in these basaltic terrains over 20 years (1995-2015) were used. The interpolation was carried out in a Geographical Information System (GIS) using ordinary kriging and the Inverse Distance Weighting (IDW) method. A comparative analysis of the interpolated groundwater levels was carried out to validate the recorded groundwater-level dataset, and the results were correlated with the basaltic terrain types forming the aquifer systems. Mean Error (ME) and Mean Square Error (MSE) were computed and compared. A good correlation was not observed between the values produced by the two interpolation methods. The study concludes that in crystalline basaltic terrain, interpolation methods must be verified against changes in the geological profiles.
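
    A minimal sketch of the IDW estimator used in such comparisons (the vectorized form and the default power are illustrative):

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2, eps=1e-12):
        """Inverse distance weighting: weights fall off as 1/d**power."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power                # eps guards exact hits
        return (w @ values) / w.sum(axis=1)

    wells = np.random.rand(125, 2)                  # observation well locations
    levels = 10 + 5 * wells[:, 0]                   # synthetic water levels
    grid = np.random.rand(400, 2)                   # prediction points
    print(idw(wells, levels, grid)[:5])
    ```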

  10. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...

  11. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.

  12. Mapping Urban Environmental Noise Using Smartphones.

    PubMed

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-10-13

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones to effectively and easily measure environmental noise is proposed. Using this method, a noise map of an entire area can be created from limited measurement data. To achieve measurements of adequate precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones differs from traditional static observations: the users may be moving at any time. Therefore, a method of attaching an additional microphone with a windscreen is proposed to reduce the wind effect. However, covering an entire area with measurements is impossible, so an interpolation method is needed to achieve full coverage. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed, based on the distribution of noise in different region types tagged by volunteers, which are interpolated and combined to create a noise map. To validate the method, the interpolation results were compared with those of the ordinary kriging method. The results show that our method reflects the local distribution of noise more accurately and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution.

  13. Mapping Urban Environmental Noise Using Smartphones

    PubMed Central

    Zuo, Jinbo; Xia, Hao; Liu, Shuo; Qiao, Yanyou

    2016-01-01

    Noise mapping is an effective method of visualizing and assessing noise pollution. In this paper, a noise-mapping method based on smartphones to effectively and easily measure environmental noise is proposed. Using this method, a noise map of an entire area can be created from limited measurement data. To achieve measurements of adequate precision, a set of methods was designed to calibrate the smartphones. Measuring noise with mobile phones differs from traditional static observations: the users may be moving at any time. Therefore, a method of attaching an additional microphone with a windscreen is proposed to reduce the wind effect. However, covering an entire area with measurements is impossible, so an interpolation method is needed to achieve full coverage. To reduce the influence of spatial heterogeneity and improve the precision of noise mapping, a region-based noise-mapping method is proposed, based on the distribution of noise in different region types tagged by volunteers, which are interpolated and combined to create a noise map. To validate the method, the interpolation results were compared with those of the ordinary kriging method. The results show that our method reflects the local distribution of noise more accurately and has better interpolation precision. We believe that the proposed noise-mapping method is a feasible and low-cost noise-mapping solution. PMID:27754359

  14. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables to the 4D-space location of each computational particle. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate, as it smooths the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied between them in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute degrees of freedom which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. To improve the monotonicity behaviour, we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
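
    A heavily simplified sketch of the core idea, integral-preserving reconstruction with one added subgrid point per interval, follows. The paper's algorithm imposes further conditions (choice of boundary values, monotonicity filtering), and the clipping used here sacrifices exact conservation wherever negativity would otherwise occur.

    ```python
    import numpy as np

    def remap(t_edges, means):
        """t_edges: n+1 interval boundaries; means: n interval-mean values.
        Returns nodes (t, f) of a piecewise-linear function whose mean over
        each interval matches `means` (up to the non-negativity clip)."""
        n = len(means)
        f_edge = np.empty(n + 1)
        f_edge[1:-1] = 0.5 * (means[:-1] + means[1:])   # shared boundary values
        f_edge[0], f_edge[-1] = means[0], means[-1]
        t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
        # Trapezoidal mean over an interval is (f_L + 2 f_mid + f_R) / 4,
        # so the midpoint value that restores the cell mean is:
        f_mid = np.maximum(0.0, (4 * means - f_edge[:-1] - f_edge[1:]) / 2)
        t = np.empty(2 * n + 1); f = np.empty(2 * n + 1)
        t[0::2], t[1::2] = t_edges, t_mid
        f[0::2], f[1::2] = f_edge, f_mid
        return t, f

    t, f = remap(np.arange(5.0), np.array([0.0, 2.0, 1.0, 0.5]))
    ```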

  15. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    NASA Astrophysics Data System (ADS)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    For thin slab-object inspection, we propose digital tomosynthesis reconstruction from a reduced number of measured projections combined with additional virtual projections, which are produced by interpolating the measured ones. Tomographic images can thus be reconstructed with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path lengths through the object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain: pixel values in an interpolated projection are a weighted sum of pixel values of the measured projections, with weights determined by the projection angles. Simulation experiments show that the proposed method can enhance the contrast-to-noise performance in reconstructed images at the cost of some spatial resolving power.
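
    Under the paper's rigid thin-slab assumption, a virtual projection reduces to an angle-weighted blend of two neighboring measured projections; a minimal sketch (function and weighting are illustrative):

    ```python
    import numpy as np

    def virtual_projection(p_a, p_b, theta_a, theta_b, theta_v):
        """Interpolate a projection at angle theta_v between two measured
        projections p_a, p_b taken at angles theta_a < theta_v < theta_b."""
        w = (theta_v - theta_a) / (theta_b - theta_a)
        return (1.0 - w) * p_a + w * p_b

    p_a, p_b = np.random.rand(256, 256), np.random.rand(256, 256)
    p_v = virtual_projection(p_a, p_b, theta_a=-10.0, theta_b=-5.0, theta_v=-7.5)
    ```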

  16. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  17. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then concrete quantum circuits for bilinear interpolation, covering both scaling up and scaling down for NEQR, are given using the multiply-controlled NOT operation, a special add-one operation, the reverse parallel adder, parallel subtractor, multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of the basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one produced by nearest-neighbor interpolation.
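
    For reference, the classical bilinear interpolation that the quantum circuits reproduce, as a plain NumPy sketch for scaling a grayscale image:

    ```python
    import numpy as np

    def bilinear_resize(img, new_h, new_w):
        h, w = img.shape
        ys = np.linspace(0, h - 1, new_h)           # target rows in source coords
        xs = np.linspace(0, w - 1, new_w)
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
        wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
        top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
        bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
        return (1 - wy) * top + wy * bot

    img = np.random.rand(8, 8)
    print(bilinear_resize(img, 16, 16).shape)       # (16, 16)
    ```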

  18. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical nearest-neighbor image interpolation for quantum images in FRQI and NEQR is proven. Then, by defining a halving operation and making use of quantum rotation gates, a concrete quantum circuit for nearest-neighbor interpolation of FRQI is designed for the first time. Furthermore, a quantum circuit for nearest-neighbor interpolation of NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, achieved by utilizing the halving operation and the quantum oracle operator. Finally, to further improve performance, new interpolation circuits for FRQI and NEQR are presented that use Control-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.

  19. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

    The image quality of respiratory-sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory-correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase, which were subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights based on the local motion were assigned to each voxel and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions, in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and the CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR-based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and moving regions compared to the 4D FDK and MKB methods.
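
    The final blending step can be pictured as below; the mapping from motion magnitude to weight is illustrative, not the paper's exact choice.

    ```python
    import numpy as np

    def motion_weighted_combine(vol4d_phase, vol3d_fdk, motion_mm, m_max=10.0):
        """Voxel-wise blend: fast-moving voxels keep the sharp 4D phase value,
        static voxels keep the streak-free 3D FDK value."""
        w = np.clip(motion_mm / m_max, 0.0, 1.0)    # 0 = static, 1 = moving
        return w * vol4d_phase + (1.0 - w) * vol3d_fdk
    ```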

  20. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction.

    PubMed

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-21

    The image quality of respiratory-sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory-correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase, which were subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights based on the local motion were assigned to each voxel and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions, in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and the CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR-based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and moving regions compared to the 4D FDK and MKB methods.

  1. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons

    PubMed Central

    Daniş, F. Serhan; Cemgil, Ali Taylan

    2017-01-01

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking. PMID:29109375

  2. Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.

    PubMed

    Daniş, F Serhan; Cemgil, Ali Taylan

    2017-10-29

    We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.
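
    The observation model's key ingredient, a 1-D Wasserstein distance between an observed RSSI sample and stored fingerprint histograms, is available in SciPy; the fingerprint data below are invented for the sketch.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    rssi_bins = np.arange(-100.0, -30.0)            # dBm support of histograms
    fingerprints = {                                # location -> bin weights
        "roomA": np.exp(-0.5 * ((rssi_bins + 70) / 4.0) ** 2),
        "roomB": np.exp(-0.5 * ((rssi_bins + 55) / 6.0) ** 2),
    }
    observed = np.array([-68.0, -71.0, -69.0, -72.0, -70.0])

    scores = {loc: wasserstein_distance(rssi_bins, observed, u_weights=w)
              for loc, w in fingerprints.items()}
    print(min(scores, key=scores.get))              # most plausible location
    ```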

  3. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First, an interpolation problem is considered; then a control problem is reduced, via a Lyapunov function approach, to a problem of interpolation by a neural network; and finally a new learning method, faster than gradient descent, is introduced.

  4. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined by the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  5. Systematic Interpolation Method Predicts Antibody Monomer-Dimer Separation by Gradient Elution Chromatography at High Protein Loads.

    PubMed

    Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio

    2018-05-29

    A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict column behavior under highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg per mL of column, or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components. This article is protected by copyright. All rights reserved.
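
    A sketch of the interpolation step with SciPy's piecewise cubics, using a single-component Langmuir isotherm in place of the paper's modified multicomponent form; all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    salt = np.array([20.0, 40.0, 60.0, 80.0])       # mM, measured conditions
    qmax = np.array([120.0, 95.0, 60.0, 25.0])      # mg/mL, fitted capacities
    keq = np.array([8.0, 3.5, 1.2, 0.3])            # fitted equilibrium constants

    qmax_of_salt = PchipInterpolator(salt, qmax)    # shape-preserving cubics
    keq_of_salt = PchipInterpolator(salt, keq)

    def langmuir(c, salt_mM):
        """Isotherm at an intermediate salt level via interpolated parameters."""
        q, k = qmax_of_salt(salt_mM), keq_of_salt(salt_mM)
        return q * k * c / (1.0 + k * c)

    print(langmuir(c=1.0, salt_mM=50.0))
    ```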

  6. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  7. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. We therefore propose a new method for constructing a polynomial interpolation surface on a triangular domain. Firstly, the scattered spatial data points are projected onto a plane and triangulated. Secondly, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Lastly, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying the accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar data. Experimental results for the new surface are given.
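
    The first two stages (project, triangulate, interpolate) can be sketched with SciPy's piecewise-linear interpolant; the paper's C1 quadratic patches then refine this linear baseline.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    pts = np.random.rand(200, 2)                       # projected (x, y) samples
    z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])  # terrain heights

    tri = Delaunay(pts)                                # triangulate the projection
    f = LinearNDInterpolator(tri, z)                   # linear on each triangle
    print(f(0.5, 0.5))
    ```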

  8. Bayer Demosaicking with Polynomial Interpolation.

    PubMed

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process that reconstructs full-color images from the incomplete color samples produced by an image sensor. It is an unavoidable step for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: the calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. We show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of both objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.

  9. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to favor interpolation over extrapolation. In a benchmark against the limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
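
    A toy sketch of GPR surrogate minimization on a 1-D "potential" with scikit-learn's twice-differentiable Matérn kernel (nu=2.5); the actual optimizer works on molecular surfaces with gradients and overshooting, which this omits.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def energy(x):                                   # stand-in potential
        return (x - 1.5) ** 2 + 0.3 * np.sin(5 * x)

    X = np.linspace(-1, 4, 8).reshape(-1, 1)         # sampled geometries
    y = energy(X).ravel()

    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(X, y)

    # Propose the next geometry by minimizing the GPR posterior mean
    res = minimize(lambda x: gpr.predict(np.atleast_2d(x))[0], x0=[0.0])
    print(res.x)
    ```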

  10. Spatiotemporal Interpolation Methods for the Application of Estimating Population Exposure to Fine Particulate Matter in the Contiguous U.S. and a Real-Time Web Application.

    PubMed

    Li, Lixin; Zhou, Xiaolu; Kalo, Marc; Piltner, Reinhard

    2016-07-25

    Appropriate spatiotemporal interpolation is critical to the assessment of relationships between environmental exposures and health outcomes. A powerful assessment of human exposure to environmental agents would incorporate spatial and temporal dimensions simultaneously. This paper compares shape function (SF)-based and inverse distance weighting (IDW)-based spatiotemporal interpolation methods on a PM2.5 data set covering the contiguous U.S. Particle pollution, also known as particulate matter (PM), is composed of microscopic solids or liquid droplets small enough to penetrate deep into the lungs and cause serious health problems; PM2.5 refers to particles with a mean aerodynamic diameter less than or equal to 2.5 micrometers. Based on the error statistics of k-fold cross-validation, the SF-based method performed better overall than the IDW-based method. The interpolation results generated by the SF-based method are combined with population data to estimate population exposure to PM2.5 in the contiguous U.S. We investigated the seasonal variations, identified areas where annual and daily PM2.5 were above the standards, and calculated the population size in these areas. Finally, a web application is developed to interpolate and visualize in real time the spatiotemporal variation of ambient air pollution across the contiguous U.S. using air pollution data from the U.S. Environmental Protection Agency (EPA)'s AirNow program.

  11. Spatiotemporal Interpolation Methods for the Application of Estimating Population Exposure to Fine Particulate Matter in the Contiguous U.S. and a Real-Time Web Application

    PubMed Central

    Li, Lixin; Zhou, Xiaolu; Kalo, Marc; Piltner, Reinhard

    2016-01-01

    Appropriate spatiotemporal interpolation is critical to the assessment of relationships between environmental exposures and health outcomes. A powerful assessment of human exposure to environmental agents would incorporate spatial and temporal dimensions simultaneously. This paper compares shape function (SF)-based and inverse distance weighting (IDW)-based spatiotemporal interpolation methods on a PM2.5 data set covering the contiguous U.S. Particle pollution, also known as particulate matter (PM), is composed of microscopic solids or liquid droplets small enough to penetrate deep into the lungs and cause serious health problems; PM2.5 refers to particles with a mean aerodynamic diameter less than or equal to 2.5 micrometers. Based on the error statistics of k-fold cross-validation, the SF-based method performed better overall than the IDW-based method. The interpolation results generated by the SF-based method are combined with population data to estimate population exposure to PM2.5 in the contiguous U.S. We investigated the seasonal variations, identified areas where annual and daily PM2.5 were above the standards, and calculated the population size in these areas. Finally, a web application is developed to interpolate and visualize in real time the spatiotemporal variation of ambient air pollution across the contiguous U.S. using air pollution data from the U.S. Environmental Protection Agency (EPA)'s AirNow program. PMID:27463722

  12. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

    This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing solar radiation values in Kedah. An hourly solar radiation dataset was collected at Alor Setar Meteorology Station, obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of our proposed method with existing methods using the Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD) on simulated missing-value datasets. The results show that our method outperforms the previous methods.
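
    Cubic pieces with prescribed first derivatives at interval ends, the same ingredients as the cubic Ball/Bézier construction, are directly available in SciPy; the data below are invented for the sketch.

    ```python
    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    hours = np.array([8.0, 9.0, 10.0, 11.0])
    radiation = np.array([120.0, 340.0, 560.0, 610.0])   # W/m^2
    slopes = np.gradient(radiation, hours)               # derivative estimates

    spline = CubicHermiteSpline(hours, radiation, slopes)
    print(spline(9.5))        # estimate for a missing half-hour value
    ```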

  13. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of a DEM is largely a function of the accuracy of individual survey points, the field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking, and the majority of studies to date consider error to be uniform across a surface. This study quantifies survey-strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced with a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique; comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging were used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy. Strong relationships were also found between local surface topographic variation (defined as the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors, with much greater errors at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation method and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces, whereas sharp slope breaks were better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.

  14. Surface temperature dataset for North America obtained by application of optimal interpolation algorithm merging tree-ring chronologies and climate model output

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2017-02-01

    A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
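
    The generic optimal-interpolation update at the heart of such merging schemes, with the model as background and proxies as observations, reads x_a = x_b + K(y - H x_b) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. A NumPy sketch with random stand-ins for the paper's covariances:

    ```python
    import numpy as np

    n, m = 50, 10                           # grid cells, proxy sites
    x_b = np.random.randn(n)                # background (model) field
    H = np.zeros((m, n))                    # observation operator: site picks
    H[np.arange(m), np.random.choice(n, m, replace=False)] = 1.0
    y = H @ x_b + 0.4 * np.random.randn(m)  # proxy "observations"

    i = np.arange(n)
    B = np.exp(-np.abs(i[:, None] - i[None, :]) / 5.0)  # background covariance
    R = 0.16 * np.eye(m)                                # observation covariance

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
    x_a = x_b + K @ (y - H @ x_b)                       # analysis field
    ```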

  15. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained with respect to performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  16. Spectral iterative method and convergence analysis for solving nonlinear fractional differential equation

    NASA Astrophysics Data System (ADS)

    Yarmohammadi, M.; Javadi, S.; Babolian, E.

    2018-04-01

    In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving the Caputo derivative. The method is equipped with a pre-algorithm that finds the singularity index of the solution of the problem. This pre-algorithm yields a real parameter, the index of the fractional interpolation basis, for which the SIM achieves its highest order of convergence. In comparison with recent results on error estimates for fractional approximations, a more accurate convergence rate is attained. We also establish the order of convergence of the fractional interpolation error under the L2-norm. Finally, a general error analysis of the SIM is given. The numerical results clearly demonstrate the capability of the proposed method.

  17. Comparison of Benchtop Fourier-Transform (FT) and Portable Grating Scanning Spectrometers for Determination of Total Soluble Solid Contents in Single Grape Berry (Vitis vinifera L.) and Calibration Transfer.

    PubMed

    Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing

    2017-11-22

    Near-infrared (NIR) spectroscopy was applied to determine the total soluble solid content (SSC) of single Ruby Seedless grape berries using both a benchtop Fourier-transform spectrometer (VECTOR 22/N) and a portable grating-scanning spectrometer (SupNIR-1500). The best SSC prediction was obtained by the VECTOR 22/N in the range 12,000-4000 cm⁻¹ (833-2500 nm), with a determination coefficient of prediction (Rp²) of 0.918 and a root mean square error of prediction (RMSEP) of 0.758%, based on least squares support vector machine (LS-SVM) modeling. Calibration transfer was conducted on the shared spectral range of the two instruments (1000-1800 nm) based on the LS-SVM model. A modified calibration transfer method between the two spectrometers was developed by using the Kennard-Stone (KS) algorithm to divide the sample sets, selecting the optimal number of standardization samples, and applying Passing-Bablok regression to choose the master instrument. With 45 samples in the standardization set, linear interpolation piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer, with Rp² of 0.857 and RMSEP of 1.099% in the 1000-1800 nm spectral region. Recalculating the standardization samples into the master model was also shown to improve calibration-transfer performance. This work indicates that NIR can be used as a rapid and non-destructive method for SSC prediction and demonstrates the feasibility of transfer between substantially different NIR spectrometers.

  18. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available via anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  19. Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging

    NASA Astrophysics Data System (ADS)

    Wang, Zong; Shi, Wenjiao

    2017-03-01

    Soil particle-size fractions (psf) are basic physical variables that frequently need to be predicted accurately for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed for interpolating the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods remains unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of the Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK was more accurate than the four log-ratio kriging methods: the RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest, among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK produced reasonable, smooth transitions in the mapped soil psf consistent with the environmental factors. The study gives insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods incorporating ancillary variables is needed to improve interpolation performance.
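
    The log-ratio transforms in question map constrained compositions (positive, summing to one) onto unconstrained values that can be kriged; a minimal clr sketch:

    ```python
    import numpy as np

    def clr(x):
        """Centered log-ratio: log of parts over their geometric mean."""
        g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))
        return np.log(x / g)

    def clr_inverse(z):
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    psf = np.array([0.55, 0.30, 0.15])     # sand, silt, clay fractions
    z = clr(psf)                           # krige these unconstrained values...
    print(clr_inverse(z))                  # ...and map predictions back
    ```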

  20. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; bidirectional motion compensation is then applied by blending the two. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of frames interpolated using the proposed method is better than that of the compared MCFRUC techniques.

  1. Traffic volume estimation using network interpolation techniques.

    DOT National Transportation Integrated Search

    2013-12-01

    Kriging method is a frequently used interpolation methodology in geography, which enables estimations of unknown values at : certain places with the considerations of distances among locations. When it is used in transportation field, network distanc...

  2. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to smooth surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolating airfoil surfaces involve various compromises between smoothing surfaces and exactly fitting them to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. The rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry; the knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one measure of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting-error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.
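
    The roughness measure described above is easy to compute: the third derivative of a cubic spline is piecewise constant (6 times the leading coefficient on each interval), so its jumps occur only at the knots. A sketch with invented airfoil-like data:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def third_derivative_jump_measure(x, y):
        cs = CubicSpline(x, y)
        d3 = 6.0 * cs.c[0]                 # third derivative per interval
        return np.sum(np.diff(d3) ** 2)    # sum of squared jumps at knots

    x = np.linspace(0.0, 1.0, 30)
    y = 0.12 * np.sqrt(x) * (1 - x)        # crude thickness-like curve
    print(third_derivative_jump_measure(x, y))
    ```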

  3. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines smoothed 'by eye'. With the use of a stereological approach to counting neurons, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for the spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. Interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps; it preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.

  4. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulations. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online, and a seamless integration of high-resolution bathymetric and topographic data is desirable. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given on regularly spaced grids. Hence, interpolation is required to integrate the bathymetric and topographic data onto regularly spaced grids for tsunami simulation. The objective of this research is to identify the interpolation methods most suitable for integrating bathymetric and topographic data with minimal error. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces: (i) kriging, (ii) multiquadric (MQ), (iii) thin plate spline (TPS) and (iv) inverse distance to a power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the kriging interpolation method produces the interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
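
    The RMSE assessment can be sketched as a hold-out comparison; SciPy's griddata covers nearest/linear/cubic schemes, while kriging, MQ and TPS would come from other libraries. Data here are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    pts = rng.random((500, 2))                          # scattered soundings
    depth = np.sin(4 * pts[:, 0]) + pts[:, 1] ** 2      # synthetic depths
    test = rng.choice(500, 50, replace=False)           # held-out points
    train = np.setdiff1d(np.arange(500), test)

    for method in ("nearest", "linear", "cubic"):
        est = griddata(pts[train], depth[train], pts[test], method=method)
        rmse = np.sqrt(np.nanmean((est - depth[test]) ** 2))  # NaN outside hull
        print(method, round(float(rmse), 4))
    ```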

  5. A hyperspectral image optimizing method based on sub-pixel MTF analysis

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie

    2015-04-01

    Hyperspectral imaging collects tens or hundreds of images continuously divided across the electromagnetic spectrum so that details at different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter placed in front of the focal plane to acquire images at different wavelengths. To alleviate the influence of chromatic aberration in some segments of a hyperspectral series, this paper proposes a hyperspectral optimizing method that uses the sub-pixel MTF to evaluate image blurring. The method extracts the edge feature in the target window and uses the line spread function (LSF) to calculate a reliable position for the edge; the evaluation grid in each line is then interpolated from the real pixel values according to their positions relative to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, increasing the dimension of the MTF calculation. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel-value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the proposed method is reliable and efficient for evaluating common real-scene images with edges of small tilt angle. It also provides a direction for subsequent hyperspectral image blurring evaluation and real-time focal-plane adjustment in related imaging systems.
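
    The ESF-to-LSF-to-MTF chain at the heart of such methods can be illustrated compactly. The following is a simplified sketch, not the paper's sub-pixel algorithm: a synthetic blurred edge profile is differentiated into an LSF, whose normalized Fourier magnitude gives the MTF.

```python
# Estimate an MTF from a (synthetic) edge profile: ESF -> LSF -> |FFT|.
import numpy as np

x = np.linspace(-5, 5, 512)                  # pixel positions across the edge
esf = 0.5 * (1 + np.tanh(x / 0.8))           # synthetic blurred edge profile
lsf = np.gradient(esf, x)                    # LSF is the derivative of the ESF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalise so that MTF(0) = 1
freqs = np.fft.rfftfreq(lsf.size, d=x[1] - x[0])
print(freqs[mtf < 0.5][0])                   # frequency where MTF drops below 0.5
```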

  6. A framework to determine the locations of the environmental monitoring in an estuary of the Yellow Sea.

    PubMed

    Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong

    2018-06-04

    The characteristics of an estuary are determined by various factors, such as tides, waves and river discharge, which also control the water quality of the estuary. Detecting changes in these characteristics is therefore critical for managing environmental quality and pollution, so the monitoring locations should be selected carefully. The present study proposes a framework to deploy monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes of water qualities and pollutants depending on the variations of tide, current and freshwater discharge. The deployment strategy for finding appropriate monitoring locations is designed with a constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data

    NASA Astrophysics Data System (ADS)

    Ye, Shirong; Xia, Pengfei; Cai, Changsheng

    2016-09-01

    Near-real-time knowledge of the high-spatial-resolution atmospheric water vapor distribution is vital in numerical weather prediction. The GPS tomography technique has been proven effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in several respects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved so that the zenith wet delay (ZWD) can be obtained with higher accuracy. Secondly, the regional conversion factor for converting the ZWD to precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid, with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which better reflects the vertical variation of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are used as references to evaluate the tomographic results. Using radiosonde products as references, the test results indicate that the water vapor density accuracy of the optimized method is improved by 15 and 12% relative to the conventional method below and above a height of 3.75 km, respectively. Using COSMIC products as references, the accuracy is improved by 15 and 19% below and above 3.75 km, respectively.

  8. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
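
    As a rough illustration of derivative estimation through a response surface (not the report's implementation), the sketch below fits an RBF interpolant to scattered samples of f(x1, x2) = x1² + sin(x2) and then differentiates the surrogate by central finite differences.

```python
# Fit an RBF response surface, then estimate derivatives of the surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, size=(100, 2))      # scattered design points
f = X[:, 0]**2 + np.sin(X[:, 1])               # sampled responses

surface = RBFInterpolator(X, f, kernel='cubic')

def surrogate_grad(pt, h=1e-4):
    """Central-difference gradient of the fitted surface at point pt."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (surface((pt + e)[None]) - surface((pt - e)[None]))[0] / (2 * h)
    return g

print(surrogate_grad(np.array([1.0, 0.5])))    # expect roughly [2.0, cos(0.5)]
```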

  9. Systematic design of 3D auxetic lattice materials with programmable Poisson's ratio for finite strains

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen

    2018-05-01

    This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval ν ∈ [−0.78, 0.00] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
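
    The final interpolation step lends itself to a short sketch. The (ν, parameter) samples below are hypothetical placeholders, not the paper's optimized values; only the mechanics of fitting the two shape parameters as polynomials in the target Poisson's ratio are shown.

```python
# Fit two (hypothetical) super-ellipsoid shape parameters as polynomials in
# the target Poisson's ratio, then evaluate anywhere in the design interval.
import numpy as np

nu = np.array([-0.78, -0.6, -0.4, -0.2, 0.0])       # target Poisson's ratios
p1 = np.array([0.35, 0.42, 0.55, 0.71, 0.90])       # hypothetical parameter 1
p2 = np.array([0.12, 0.18, 0.27, 0.36, 0.50])       # hypothetical parameter 2

c1 = np.polyfit(nu, p1, deg=3)                      # polynomial coefficients
c2 = np.polyfit(nu, p2, deg=3)
print(np.polyval(c1, -0.5), np.polyval(c2, -0.5))   # architecture for nu = -0.5
```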

  10. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
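
    In standard notation, the OI analysis combines a background estimate xb (error covariance B) with observations y = Hx + e (error covariance R) through the minimum-variance update xa = xb + K(y − Hxb), with gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A minimal numerical sketch of this step:

```python
# OI (BLUE) analysis step on a 3-point "grid" with observations at points
# 0 and 2; the unobserved point 1 is filled in through the background covariance.
import numpy as np

def oi_update(xb, B, y, H, R):
    """Return the analysis state xa and its error covariance A."""
    S = H @ B @ H.T + R                         # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)              # optimal (Kalman) gain
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B           # analysis error covariance
    return xa, A

idx = np.arange(3)
B = 0.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]))  # correlated background errors
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])                 # observe points 0 and 2 only
xb = np.array([1.0, 2.0, 3.0])
y = np.array([1.4, 2.6])
R = 0.1 * np.eye(2)

xa, A = oi_update(xb, B, y, H, R)
print(xa)    # point 1 is adjusted even though it was never observed
```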

  11. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, in which a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method, since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability in high-elevation regions, such as the Sierra Nevada Mountains.

  12. Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis

    PubMed Central

    Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.

    2016-01-01

    Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS), updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts between a TASMICS and a TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV, with the largest percentage differences arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available. PMID:27487888

  13. Technical Note: SPEKTR 3.0—A computational tool for x-ray spectrum modeling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Punnoose, J.; Xu, J.; Sisniega, A.

    2016-08-15

    Purpose: A computational toolkit (SPEKTR 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS), updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The Mathworks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. Methods: The SPEKTR code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts between a TASMICS and a TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV, with the largest percentage differences arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, SPEKTR, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the SPEKTR function library, UI, and optimization tool are available.

  14. Interpolation Hermite Polynomials For Finite Element Method

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

    We describe a new algorithm for the analytic calculation of high-order Hermite interpolation polynomials on the simplex and give their classification. A typical example of a triangle element, to be used in high-accuracy finite element schemes, is given.

  15. Potentials Unbounded Below

    NASA Astrophysics Data System (ADS)

    Curtright, Thomas

    2011-04-01

    Continuous interpolates are described for classical dynamical systems defined by discrete time-steps. Functional conjugation methods play a central role in obtaining the interpolations. The interpolates correspond to particle motion in an underlying potential, V. Typically, V has no lower bound and can exhibit switchbacks wherein V changes form when turning points are encountered by the particle. The Beverton-Holt and Skellam models of population dynamics, and particular cases of the logistic map are used to illustrate these features.

  16. An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-06-01

    …low-order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory…

  17. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).

  18. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate temperature at unallocated meteorological stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorological Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature in the remaining months.
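
    IDW itself is a one-liner once distances are in hand. Below is a minimal sketch with hypothetical station data (power p = 2 is a common default; the study's RBF variants would use scipy.interpolate instead).

```python
# Inverse-distance-weighted estimation at arbitrary target points.
import numpy as np

def idw(stations, values, targets, p=2.0, eps=1e-12):
    """IDW estimate at each target; eps guards against zero distances."""
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d + eps)**p                     # weights fall off with distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
temps = np.array([26.5, 27.2, 25.9])           # hypothetical monthly means (°C)
print(idw(stations, temps, np.array([[0.5, 0.5]])))
```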

  19. An Unconditionally Monotone C² Quartic Spline Method with Nonoscillation Derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Nelson, Karl E.

    Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in slope space utilizing the Hermite cubic spline. The new method is only quartic, yet it is C² and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.

  20. An Unconditionally Monotone C² Quartic Spline Method with Nonoscillation Derivatives

    DOE PAGES

    Yao, Jin; Nelson, Karl E.

    2018-01-24

    Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in slope space utilizing the Hermite cubic spline. The new method is only quartic, yet it is C² and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.

  1. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed a comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.

  2. Reliability of the Parabola Approximation Method in Heart Rate Variability Analysis Using Low-Sampling-Rate Photoplethysmography.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol

    2017-10-24

    Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals to achieve accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error are presented. The elapsed time taken to compute each interpolation algorithm was also investigated. The results indicated that parabola approximation is a simple, fast, and accurate method for compensating for the low timing resolution of pulse beat intervals, with performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
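
    A common form of the parabola approximation, stated here as an assumption about the general technique rather than a verbatim reproduction of the paper's method: fit a parabola through a detected pulse peak and its two neighbours to refine the peak time to sub-sample resolution.

```python
# Three-point parabolic refinement of a detected peak index.
import numpy as np

def parabolic_peak(y, k, fs):
    """Refined peak time (s) for a peak at index k, sampling rate fs (Hz)."""
    denom = y[k - 1] - 2 * y[k] + y[k + 1]
    delta = 0.5 * (y[k - 1] - y[k + 1]) / denom   # sub-sample offset in [-0.5, 0.5]
    return (k + delta) / fs

y = np.array([0.1, 0.8, 1.0, 0.7, 0.2])           # 20 Hz PPG samples near a beat
print(parabolic_peak(y, 2, fs=20.0))              # refined beat time, slightly before 0.1 s
```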

  3. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
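
    Zero fill itself is easily demonstrated: padding the record before the FFT evaluates the same spectrum on a finer frequency grid. (The paper's point is that repetitive convolution can reach comparable accuracy with fewer operations; only the zero-fill baseline is sketched here.)

```python
# Zero fill: an 8x padded FFT refines the spectral sampling from 1 Hz to 0.125 Hz.
import numpy as np

fs = 64.0
t = np.arange(64) / fs
x = np.cos(2 * np.pi * 10.3 * t)                  # tone that falls between FFT bins

spec = np.abs(np.fft.rfft(x))                     # 1 Hz bin spacing
spec_fill = np.abs(np.fft.rfft(x, n=8 * x.size))  # 0.125 Hz spacing via zero fill

f = np.fft.rfftfreq(x.size, 1 / fs)
f_fill = np.fft.rfftfreq(8 * x.size, 1 / fs)
print(f[np.argmax(spec)], f_fill[np.argmax(spec_fill)])   # coarse vs refined peak
```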

  4. Tomography for two-dimensional gas temperature distribution based on TDLAS

    NASA Astrophysics Data System (ADS)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

    Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the ray number and tends to level off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested and found to be effective in improving the accuracy of the reconstruction results compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual ray absorption values; according to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
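
    The virtual-ray interpolation step can be illustrated with a synthetic absorbance profile; the sketch below compares linear and cubic-spline estimates at an intermediate ray position (illustrative data, not the paper's).

```python
# Linear vs cubic-spline estimation of a "virtual ray" absorbance.
import numpy as np
from scipy.interpolate import CubicSpline

s = np.linspace(0, 1, 8)                     # positions of real rays
a = np.exp(-(s - 0.45)**2 / 0.05)            # synthetic measured absorbances

s_virtual = 0.5                              # virtual ray between real rays
lin = np.interp(s_virtual, s, a)
cub = CubicSpline(s, a)(s_virtual)
true = np.exp(-(s_virtual - 0.45)**2 / 0.05)
print(abs(lin - true), abs(cub - true))      # the spline error is smaller here
```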

  5. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of the studies have adopted interpolation procedures including kriging, moving average or Inverse Distance Weighting (IDW) and nearest point without the necessary recourse to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range) using the PAleontological STatistics package (PAST3), before the mean values were interpolated in the selected GIS software for each variable using each of kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary from 120.1 to 219.5 µS cm⁻¹ with kriging, it varied from 105.6 to 220.0 µS cm⁻¹ and from 135.0 to 173.9 µS cm⁻¹ with nearest point and moving average interpolations, respectively (Figure 2). It also showed that, whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed as the default for all distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions based on modelling inferences.

  6. The Coplane Analysis Technique for Three-Dimensional Wind Retrieval Using the HIWRAP Airborne Doppler Radar

    NASA Technical Reports Server (NTRS)

    Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.

    2015-01-01

    The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.

  7. Tri-linear interpolation-based cerebral white matter fiber imaging

    PubMed Central

    Jiang, Shan; Zhang, Pengfei; Han, Tong; Liu, Weihua; Liu, Meixia

    2013-01-01

    Diffusion tensor imaging is a unique method to visualize white matter fibers three-dimensionally, non-invasively and in vivo, and it is therefore an important tool for observing and researching neural regeneration. Different diffusion tensor imaging-based fiber tracking methods have already been investigated, but for clinical applications the computation needs to be faster, the tracked fibers longer and smoother, and the displayed details clearer. This study proposed a new fiber tracking strategy based on tri-linear interpolation. We selected a patient with acute infarction of the right basal ganglia and designed experiments based on either the tri-linear interpolation algorithm or the tensorline algorithm. Fiber tracking in the same regions of interest (genu of the corpus callosum) was performed separately. The validity of the tri-linear interpolation algorithm was verified by quantitative analysis, and its feasibility in clinical diagnosis was confirmed by comparing the tracking results against the disease condition of the patient as well as the actual brain anatomy. Statistical results showed that the maximum length and average length of the white matter fibers tracked by the tri-linear interpolation algorithm were significantly longer. The tracking images of the fibers indicated that this method can obtain smoother tracked fibers, more obvious orientation and clearer details. Tracked fiber abnormalities are in good agreement with the actual condition of the patient, and tracking displayed fibers that passed through the corpus callosum, which is consistent with the anatomical structure of the brain. Therefore, the tri-linear interpolation algorithm can achieve a clear, anatomically correct and reliable tracking result. PMID:25206524
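
    The underlying tri-linear kernel is standard: the value at a sub-voxel position is a weighted average of the eight surrounding voxels. A self-contained sketch for a scalar field (a tracking algorithm would apply this per tensor component):

```python
# Trilinear interpolation of a 3D volume at fractional voxel coordinates.
import numpy as np

def trilinear(vol, x, y, z):
    """Interpolate vol at (x, y, z); requires 0 <= coord < dim - 1."""
    i, j, k = int(x), int(y), int(z)
    dx, dy, dz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):                          # loop over the 8 corner voxels
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((dx if di else 1 - dx) *
                     (dy if dj else 1 - dy) *
                     (dz if dk else 1 - dz))
                value += w * vol[i + di, j + dj, k + dk]
    return value

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
print(trilinear(vol, 0.5, 0.5, 0.5))           # average of the 8 corner voxels
```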

  8. Development and validation of segmentation and interpolation techniques in sinograms for metal artifact suppression in CT.

    PubMed

    Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob

    2010-02-01

    Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.

  9. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.

  10. Structural Optimization of Triboelectric Nanogenerator for Harvesting Water Wave Energy.

    PubMed

    Jiang, Tao; Zhang, Li Min; Chen, Xiangyu; Han, Chang Bao; Tang, Wei; Zhang, Chi; Xu, Liang; Wang, Zhong Lin

    2015-12-22

    Ocean waves are one of the most abundant energy sources on earth, but harvesting such energy is rather challenging due to various limitations of current technologies. Recently, networks formed by triboelectric nanogenerators (TENGs) have been proposed as a promising technology for harvesting water wave energy. In this work, a basic unit for the TENG network was studied and optimized: a box structure whose walls are TENGs composed of a wavy-structured Cu-Kapton-Cu film and two FEP thin films, with a metal ball enclosed inside. By combining theoretical calculations and experimental studies, the output performance of the TENG unit was investigated for various structural parameters, such as the size, mass, or number of the metal balls. Theoretically, the output characteristics of the TENG during its collision with the ball were numerically calculated by the finite element method and an interpolation method, and there exists an optimum ball size or mass that maximizes the output power and electric energy. Moreover, the theoretical results were well verified by the experimental tests. The present work could provide guidance for the structural optimization of wavy-structured TENGs for effectively harvesting water wave energy toward the dream of large-scale blue energy.

  11. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

    Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension to standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in the use of a ray interpolation scheme based on approximating the wave front as a `locally spherical' surface, and of a `first arrival mode', which reduces computation times where only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient, the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolation of the new ray. Our WFC technique is illustrated using a realistic velocity model, based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles and geometrical spreading factors, all of which are interpolated onto a regular grid. We compare geometrical spreading factors calculated using two methods: using the ray Jacobian, and by taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values, which can be spread over a large area because only a few initial rays are traced in WFC. We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.

  12. A finite difference Davidson procedure to sidestep full ab initio hessian calculation: Application to characterization of stationary points and transition state searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharada, Shaama Mallikarjun; Bell, Alexis T., E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu; Head-Gordon, Martin, E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu

    2014-04-28

    The cost of calculating nuclear hessians, either analytically or by finite difference methods, during the course of quantum chemical analyses can be prohibitive for systems containing hundreds of atoms. In many applications, though, only a few eigenvalues and eigenvectors, and not the full hessian, are required. For instance, the lowest one or two eigenvalues of the full hessian are sufficient to characterize a stationary point as a minimum or a transition state (TS), respectively. We describe here a method that can eliminate the need for hessian calculations for both the characterization of stationary points as well as searches for saddle points. A finite differences implementation of the Davidson method that uses only first derivatives of the energy to calculate the lowest eigenvalues and eigenvectors of the hessian is discussed. This method can be implemented in conjunction with geometry optimization methods such as partitioned-rational function optimization (P-RFO) to characterize stationary points on the potential energy surface. With equal ease, it can be combined with interpolation methods that determine TS guess structures, such as the freezing string method, to generate approximate hessian matrices in lieu of full hessians as input to P-RFO for TS optimization. This approach is shown to achieve significant cost savings relative to exact hessian calculation when applied to both stationary point characterization as well as TS optimization. The basic reason is that the present approach scales one power of system size lower, since the rate of convergence is approximately independent of the size of the system. Therefore, the finite-difference Davidson method is a viable alternative to full hessian calculation for stationary point characterization and TS search, particularly when analytical hessians are not available or require substantial computational effort.
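
    The enabling primitive is a finite-difference Hessian-vector product built from two gradient calls. In the sketch below, a toy analytic gradient stands in for an ab initio one, and for brevity the product drives SciPy's Lanczos eigensolver rather than a Davidson solver; the lowest eigenvalue still characterizes the stationary point without ever forming the full hessian.

```python
# Lowest hessian eigenvalue from gradient-only Hessian-vector products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def gradient(x):                      # toy stand-in for an ab initio gradient
    return np.array([2 * x[0], 6 * x[1], -4 * x[2]])   # implies H = diag(2, 6, -4)

x0 = np.zeros(3)                      # geometry being characterized
g0 = gradient(x0)
eps = 1e-5

def hessvec(v):
    """Finite-difference H @ v: costs one extra gradient evaluation."""
    return (gradient(x0 + eps * v) - g0) / eps

H = LinearOperator((3, 3), matvec=hessvec)
w, _ = eigsh(H, k=1, which='SA')      # smallest eigenvalue: -4 => saddle point
print(w)
```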

  13. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  14. River bathymetry estimation based on the floodplains topography.

    NASA Astrophysics Data System (ADS)

    Bureš, Luděk; Máca, Petr; Roub, Radek; Pech, Pavel; Hejduk, Tomáš; Novák, Pavel

    2017-04-01

    A topographic model including river bathymetry (bed topography) is required for hydrodynamic simulation, water quality modelling, flood inundation mapping, sediment transport, and ecological and geomorphological assessments. The most common way to create river bathymetry is spatial interpolation of discrete points or cross-section data. The quality of the generated bathymetry depends on the quality of the measurements, on the technology used and on the size of the input dataset, and extensive measurements are often time-consuming and expensive. Another option for creating river bathymetry is mathematical modelling. In the presented contribution we created a river bathymetry model based on analytical curves that are bent into the shape of the cross sections. The best description of the river bathymetry requires knowing the values of the model parameters, which we find using global optimization methods based on heuristics inspired by natural processes; we use a new type of DE (differential evolution) to solve the inverse problems related to the parameters of the mathematical model of river bed surfaces. The presented analysis discusses the dependence of the model parameters on selected characteristics: (1) topographic characteristics (slope and curvature in the left and right floodplains) determined from the DTM 5G digital terrain model; (2) the optimization scheme; (3) the type of analytical curves used. The novel approach is applied to three parts of the Vltava river in the Czech Republic, each described by a point field measured with an ADCP probe (RiverSurveyor M9). This work was supported by the Technology Agency of the Czech Republic, programme Alpha (project TA04020042 - New technologies bathymetry of rivers and reservoirs to determine their storage capacity and monitor the amount and dynamics of sediments) and the Internal Grant Agency of the Faculty of Environmental Sciences (CULS) (IGA/20164233). Keywords: bathymetry, global optimization, bed topography. References: Merwade, Venkatesh. "Effect of spatial trends on interpolation of river bathymetry." Journal of Hydrology, 371.1, 169-181, 2009. Legleiter, Carl J., and Phaedon C. Kyriakidis. "Spatial prediction of river channel topography by kriging." Earth Surface Processes and Landforms, 33.6, 841-867, 2008. P. Maca, P. Pech and J. Pavlasek. "Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast." Mathematical Problems in Engineering, vol. 2014, Article ID 782351, 10 pages, 2014. M. Jakubcova, P. Maca and P. Pech. "A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm." Journal of Applied Mathematics, vol. 2014, Article ID 293087, 10 pages, 2014.
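
    A hedged illustration of the inverse problem (a toy analytical cross-section curve and synthetic bed points, with SciPy's stock differential evolution standing in for the authors' modified DE):

```python
# Fit analytical cross-section parameters to measured bed points with DE.
import numpy as np
from scipy.optimize import differential_evolution

s = np.linspace(-1, 1, 15)                       # across-channel coordinate
rng = np.random.default_rng(3)
measured = -1.2 * (1 - np.abs(s)**1.7) + 0.05 * rng.normal(size=15)

def misfit(params):
    """Sum of squared residuals between the curve and the measured bed."""
    depth, shape = params                        # analytical curve parameters
    model = -depth * (1 - np.abs(s)**shape)
    return np.sum((model - measured)**2)

result = differential_evolution(misfit, bounds=[(0.1, 5.0), (1.0, 4.0)])
print(result.x)                                  # recovers roughly [1.2, 1.7]
```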

  15. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphical user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first, and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and the most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.

  16. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
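
    A bare-bones scalar sketch of the on-line estimator's skeleton, under assumed noise parameters and with the streamflow and seasonal terms of the actual estimator omitted: a random-walk state is predicted every day and updated only on days with a concentration sample, so the filter interpolates between intermittent measurements.

```python
# Scalar Kalman filter: predict daily, update only on sampled days.
q, r = 0.05, 0.2                 # process and measurement variances (assumed)
x, P = 0.0, 1.0                  # initial state estimate and its variance
daily = [None, 0.4, None, None, 0.9, None]   # None = no sample that day
estimates = []
for y in daily:
    P += q                       # predict: uncertainty grows between samples
    if y is not None:            # update: only when a measurement exists
        K = P / (P + r)          # Kalman gain
        x += K * (y - x)
        P *= (1 - K)
    estimates.append(round(x, 3))
print(estimates)                 # the off-line smoother would also run backward
```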

  17. Merging Multi-model CMIP5/PMIP3 Past-1000 Ensemble Simulations with Tree Ring Proxy Data by Optimal Interpolation Approach

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Luo, Yong; Xing, Pei; Nie, Suping; Tian, Qinhua

    2015-04-01

    Two sets of gridded annual mean surface air temperature over the Northern Hemisphere in the past millennium were constructed by employing the optimal interpolation (OI) method to merge tree ring proxy records with simulations from CMIP5 (the fifth phase of the Climate Model Intercomparison Project). Both the uncertainties in the proxy reconstruction and in the model simulations can be taken into account by the OI algorithm. To better preserve physically coordinated features and the spatial-temporal completeness of climate variability in the 7 model realizations, we perform Empirical Orthogonal Function (EOF) analysis to truncate the ensemble mean field as the first guess (background field) for OI. 681 temperature-sensitive tree-ring chronologies were collected and screened from the International Tree Ring Data Bank (ITRDB) and the Past Global Changes (PAGES-2k) project. Firstly, two methods (variance matching and linear regression) are employed to calibrate the tree ring chronologies against instrumental data (CRUTEM4v) individually; we also remove the bias of both the background field and the proxy records relative to the instrumental dataset. Secondly, a time-varying background error covariance matrix (B) and a static "observation" error covariance matrix (R) are calculated for the OI frame. In our scheme, the matrix B is calculated locally, and "observation" error covariances are partially considered in the R matrix (covariances between pairs of tree ring sites that are very close to each other are counted), which differs from the traditional assumption that R should be diagonal. Comparing our results, it turns out that regionally averaged series are not sensitive to the choice of calibration method. Quantile-Quantile plots indicate that regional climatologies based on both methods agree better with the regional reconstruction of PAGES-2k in the 20th-century warming period than in the Little Ice Age (LIA). A larger volcanic cooling response over Asia and Europe in the context of the recent millennium is detected in our datasets than is revealed in the regional reconstructions from the PAGES-2k network. Verification experiments show that the merging approach reconciles the proxy data and model ensemble simulations in an optimal way (with smaller errors than either alone). Further research is needed to improve the error estimation.

  18. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank approximation of the form A ≈ CUR, where C and R are subsets of the columns and rows of A. The approach is compared with CUR approximations based on leverage scores.
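
    The DEIM index selection that underlies such a factorization is a short greedy loop over singular vectors; the following is a standard statement of the algorithm, not the report's code.

```python
# DEIM: greedily pick k row indices from the k leading singular vectors.
import numpy as np

def deim_indices(U):
    """Return k interpolation row indices from an n x k basis U."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c               # residual of interpolating u_j
        p.append(int(np.argmax(np.abs(r))))      # row where the residual peaks
    return np.array(p)

A = np.random.default_rng(4).normal(size=(50, 8))
U, _, _ = np.linalg.svd(A, full_matrices=False)
print(deim_indices(U[:, :3]))                    # 3 rows for a CUR approximation
```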

  19. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.

  20. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  1. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems that include groundwater resources proved more complex to solve with SDP than pure surface water allocation problems, due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one-step-ahead sub-problems are solved to find the optimal management at any time, given the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward-moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimensions cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, including surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs enable assessment of the long-term effects of increased electricity prices on groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular, the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
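    To make the backward recursion concrete, here is a deliberately stripped-down sketch: a single reservoir, deterministic inflows, and a brute-force release search standing in for the paper's GA and linear allocation sub-solvers. The storage grid, demand, and inflows are all invented for illustration:

```python
# Toy SDP backbone: one-step-ahead subproblems with future costs linearly
# interpolated from the next stage's cost table.
import numpy as np

storages = np.linspace(0.0, 100.0, 21)         # discretised reservoir states
T, demand, curtail_cost = 12, 40.0, 2.0        # stages, demand, cost per unit unmet
inflows = 30.0 + 10.0 * np.sin(np.arange(T))   # deterministic inflow stand-in

future = np.zeros_like(storages)               # terminal cost = 0
for t in reversed(range(T)):
    cost_t = np.empty_like(storages)
    for i, s in enumerate(storages):
        best = np.inf
        for release in np.linspace(0, s + inflows[t], 25):
            s_next = np.clip(s + inflows[t] - release, 0, 100)
            immediate = curtail_cost * max(demand - release, 0.0)
            # linear interpolation in the next stage's total-cost table
            best = min(best, immediate + np.interp(s_next, storages, future))
        cost_t[i] = best
    future = cost_t

print(future.round(1))   # expected cost-to-go at stage 0 for each storage level
```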

  2. Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.

    PubMed

    Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming

    2018-01-01

    Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 mg kg⁻¹ with a mean of 18.10 mg kg⁻¹, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu data and the associated residuals, and also produced more robust variograms. OK_PD generated better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect introduced by the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
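    The normal score transform the study recommends is simple to state: replace each observation by the standard-normal quantile of its rank. A minimal sketch with synthetic, lognormal "A-Cu" data; the back-transform and the kriging step are omitted:

```python
# Sketch of a normal score transform prior to kriging (synthetic skewed data).
import numpy as np
from scipy.stats import norm, rankdata

a_cu = np.random.default_rng(1).lognormal(mean=2.5, sigma=1.0, size=317)

ranks = rankdata(a_cu)                 # 1..n, ties averaged
p = ranks / (len(a_cu) + 1)            # plotting positions in (0, 1)
z = norm.ppf(p)                        # normal scores: ~N(0,1), skewness ~ 0

skew = lambda x: float(((x - x.mean()) ** 3).mean() / x.std() ** 3)
print(f"skewness before: {skew(a_cu):.2f}")   # strongly positive
print(f"skewness after:  {skew(z):.2f}")      # close to zero
```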

  3. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system, using distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of an SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of an H matrix acquired by a full 3D grid-scan experiment. The reduction in acquisition time relative to a full 1.0-mm grid H matrix was about 15.2-fold and 62.2-fold with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by an additional factor of 8.

  4. Interlaminar Stresses by Refined Beam Theories and the Sinc Method Based on Interpolation of Highest Derivative

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander

    2010-01-01

    Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. This Sinc method was proposed as an efficient way of determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analyses by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single-layer theories often results in inaccuracies near the boundaries and when the laminae have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with this Sinc method. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained with ABAQUS/Standard. The results illustrate the ease with which the Sinc method can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to its piecewise continuous displacement field, which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of its displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as with the refined zigzag theory.

  5. Geostatistical interpolation of hourly precipitation from rain gauges and radar for a large-scale extreme rainfall event

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2007-01-01

    The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of the semivariogram estimation on the interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data, considering either isotropic or anisotropic behaviour and using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods that use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms from either hourly or radar data.

  6. Restoring the missing features of the corrupted speech using linear interpolation methods

    NASA Astrophysics Data System (ADS)

    Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.

    2017-10-01

    One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal-to-Noise Ratio (SNR) elements leaves an incomplete spectrogram. The speech recognizer should therefore restore the missing elements before performing recognition, which can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by previous researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction methods. The experiments are carried out under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 male and 20 female speakers, each contributing two different utterances. As a result, 80% recognition accuracy is achieved at an SNR of 25%.
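    A sketch of the core reconstruction step: fill the deleted (masked) spectrogram bins by 1-D linear interpolation along time, along frequency, and, for a combined variant, by averaging the two. The combined rule here is our reading of a time-and-frequency method, and the spectrogram and mask are synthetic:

```python
# Restoring masked spectrogram elements by linear interpolation (synthetic data).
import numpy as np

def interp_axis(spec, mask, axis):
    """Fill masked bins by 1-D linear interpolation along `axis` (0=freq, 1=time)."""
    out = spec.copy()
    for i in range(spec.shape[1 - axis]):
        line = out[i, :] if axis == 1 else out[:, i]   # view into `out`
        m = mask[i, :] if axis == 1 else mask[:, i]
        if m.any() and (~m).any():
            x = np.arange(line.size)
            line[m] = np.interp(x[m], x[~m], line[~m])  # writes through the view
    return out

rng = np.random.default_rng(1)
spec = rng.random((64, 100))          # |STFT| magnitudes (freq x time)
mask = spec < 0.2                     # "low-SNR" bins deleted by the front end

by_time = interp_axis(spec, mask, axis=1)
by_freq = interp_axis(spec, mask, axis=0)
combined = 0.5 * (by_time + by_freq)  # joint time-and-frequency estimate
```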

  7. Spatial interpolation of river channel topography using the shortest temporal distance

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Xian, Cuiling; Chen, Huajin; Grieneisen, Michael L.; Liu, Jiaming; Zhang, Minghua

    2016-11-01

    It is difficult to interpolate river channel topography due to complex anisotropy. As the anisotropy is often caused by river flow, especially the hydrodynamic and transport mechanisms, it is reasonable to incorporate flow velocity into the topography interpolator to reduce the effect of anisotropy. In this study, two new distance metrics, defined as the time taken by water flow to travel between two locations, are developed to replace the spatial (Euclidean) distance currently used to interpolate topography. One is a shortest temporal distance (STD) metric. The temporal distance (TD) of a path between two nodes is calculated as the spatial distance divided by the tangent component of flow velocity along the path, and the STD is found with the Dijkstra algorithm over all possible paths between the two nodes. The other is a modified shortest temporal distance (MSTD) metric in which both the tangent and normal components of flow velocity are combined. Both are used to construct methods for the interpolation of river channel topography. The proposed methods are applied to generate the topography of the Wuhan section of the Changjiang River and compared with Universal Kriging (UK) and Inverse Distance Weighting (IDW). The results clearly showed that the STD and MSTD based on flow velocity were reliable spatial interpolators. The MSTD, followed by the STD, presented improvements in prediction accuracy relative to both UK and IDW.
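    The STD computation reduces to a shortest-path problem with flow-dependent edge costs. A toy sketch on a grid graph, with an invented velocity field and the edge cost taken as length divided by the velocity component along the edge (clamped away from zero); this illustrates the metric, not the paper's exact implementation:

```python
# Shortest temporal distance via Dijkstra on a grid graph (synthetic flow field).
import heapq
import numpy as np

def shortest_temporal_distance(vel, src, dst, h=1.0, v_min=0.05):
    """vel: (ny, nx, 2) flow field; edge cost = h / max(|tangent speed|, v_min)."""
    ny, nx, _ = vel.shape
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == dst:
            return d
        if d > dist.get((i, j), np.inf):
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                tangent = vel[i, j, 0] * dj + vel[i, j, 1] * di  # velocity along edge
                nd = d + h / max(abs(tangent), v_min)
                if nd < dist.get((ni, nj), np.inf):
                    dist[(ni, nj)] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return np.inf

vel = np.dstack([np.ones((20, 30)), 0.2 * np.ones((20, 30))])  # mostly eastward flow
print(shortest_temporal_distance(vel, (5, 2), (12, 25)))       # travel time, src->dst
```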

  8. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490

  9. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, four existing approaches are analyzed first: the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
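    The idea behind an exponential weight method can be shown in a few lines: reparameterize every weight as exp(w), so all effective weights are positive and, with an increasing activation, the network output is monotonically increasing in its input. A minimal untrained sketch; the architecture and parameterization details are our assumptions, not necessarily the paper's:

```python
# Monotonic three-layer network via positive (exponentiated) weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1, 1))

def forward(x):
    h = np.tanh(np.exp(W1) @ x + b1)   # positive weights, increasing tanh
    return np.exp(W2) @ h + b2         # -> output monotonically increasing in x

xs = np.linspace(-3, 3, 200).reshape(1, -1)
y = forward(xs).ravel()
print(bool(np.all(np.diff(y) >= 0)))    # True: output never decreases
```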

  10. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    NASA Astrophysics Data System (ADS)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, an area of ecological and economic importance. Fertile soil plays a significant role in agriculture, one of the main industries of the region and a major contributor to its economy. Loss of trees and bushes due to intense agricultural activity in some parts of the basin leads to soil erosion. In addition, soil salinization due to both human-induced activities and natural factors has further constrained agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties, including organic matter, phosphorus, lime and boron. Both LPI and RBF demonstrated promising results for predicting all four properties. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remaining 20% for validation of the predicted results. The relationship between validation points and the corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps generated by the two interpolation methods for soil organic matter, phosphorus, lime and boron were examined based on R2 and RMSE values. The outcomes indicate that RBF predicts lime, organic matter and boron better than LPI, whereas LPI performs better for phosphorus.
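    The RBF workflow described here (80/20 split, validation on held-out points) is easy to reproduce with scipy; below is a minimal sketch on synthetic "organic matter" samples, using a thin-plate-spline kernel as one plausible choice:

```python
# RBF interpolation of a soil property with an 80/20 train/validation split.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)
xy = rng.uniform(0, 50, size=(200, 2))                  # sample locations (km)
om = 2 + 0.05 * xy[:, 0] + np.sin(xy[:, 1] / 5)         # "organic matter" surrogate

n_train = int(0.8 * len(xy))
rbf = RBFInterpolator(xy[:n_train], om[:n_train],
                      kernel="thin_plate_spline", smoothing=1e-3)

pred = rbf(xy[n_train:])                                # predict at validation sites
rmse = float(np.sqrt(np.mean((pred - om[n_train:]) ** 2)))
print(f"validation RMSE: {rmse:.3f}")
```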

  11. Specifying the Probability Characteristics of Funnel Plot Control Limits: An Investigation of Three Approaches

    PubMed Central

    Manktelow, Bradley N.; Seaton, Sarah E.

    2012-01-01

    Background Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results The first interpolation method ensured that the probability of an observation of an ‘in control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
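    For a Poisson-based funnel plot, the discreteness problem and one interpolation fix can be sketched directly: locate the integer count whose CDF straddles the target tail probability and interpolate linearly within that step. This is a generic variant for illustration, not necessarily any of the three methods compared in the paper:

```python
# Interpolated Poisson funnel-plot control limits on the SMR scale.
import numpy as np
from scipy.stats import poisson

def funnel_limits(expected, p=0.025):
    """Lower/upper SMR control limits for a provider with `expected` events."""
    lims = []
    for target in (p, 1 - p):
        k = int(poisson.ppf(target, expected))            # integer quantile
        f_lo, f_hi = poisson.cdf(k - 1, expected), poisson.cdf(k, expected)
        frac = (target - f_lo) / (f_hi - f_lo)            # position within the CDF step
        lims.append((k - 1 + frac) / expected)            # counts -> SMR scale
    return tuple(lims)

for e in (5, 20, 100):
    lo, hi = funnel_limits(e)
    print(f"E={e:>3}: SMR limits ({lo:.2f}, {hi:.2f})")   # limits tighten as E grows
```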

  12. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these algorithms can partially tolerate irregular data. Accurate interpolation to provide the necessary complete data is therefore a pre-requisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT)-based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components characterize the original signal with high accuracy while being no more than half its size, which yields a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can be easily extended to higher dimensions.
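    The POCS iteration itself is compact: transform, threshold, inverse-transform, and re-insert the observed traces, with a relaxed threshold schedule. The sketch below substitutes a 2-D FFT for the curvelet transform and the plain FK domain for the PFK domain, so it illustrates the iteration rather than the paper's exact transform:

```python
# POCS trace interpolation with an FFT sparsifying transform (synthetic section).
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 128, 64
t = np.arange(nt)[:, None]
data = np.sin(2 * np.pi * (t + 0.8 * np.arange(nx)) / 32.0)   # one dipping event

mask = rng.random(nx) > 0.4            # ~60% of traces observed
obs = data * mask                       # dead traces zeroed

x = obs.copy()
for it in range(50):
    coeff = np.fft.fft2(x)
    thresh = np.quantile(np.abs(coeff), 1 - 0.02 * (it + 1))  # relax threshold
    coeff[np.abs(coeff) < thresh] = 0.0
    x = np.real(np.fft.ifft2(coeff))
    x[:, mask] = obs[:, mask]           # project back onto the observed data

print(f"relative error: {np.linalg.norm(x - data) / np.linalg.norm(data):.3f}")
```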

  13. Information geometry and its application to theoretical statistics and diffusion tensor magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wisniewski, Nicholas Andrew

    This dissertation is divided into two parts. First, we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of normal distributions, from which we construct a geometric hypothesis testing scheme. Second, we investigate the geometric methods most commonly employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation, by means of a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.

  14. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    NASA Astrophysics Data System (ADS)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  15. Analysis of spatial distribution of land cover maps accuracy

    NASA Astrophysics Data System (ADS)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations in accuracy affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach to map accuracy assessment, based on an error matrix, does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed, based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, each 10 km × 10 km, dispersed throughout the United States, with the area under the curve (AUC) of the receiver operating characteristic as the evaluation criterion. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yielded similar AUC; iv) for the larger sample size (i.e., a very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions, with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.

  16. Interpolation algorithm for asynchronous ADC-data

    NASA Astrophysics Data System (ADS)

    Bramburger, Stefan; Zinke, Benny; Killat, Dirk

    2017-09-01

    This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. Additional preprocessing of constant and linear data sections, together with a weighted overlap of the signal segments transformed step by step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data are fed into synchronous digital signal processing.

  17. Learning receptor positions from imperfectly known motions

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1990-01-01

    An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.

  18. Rtop - an R package for interpolation of data with a variable spatial support - examples from river networks

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto

    2013-04-01

    Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy-to-apply and open-source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for the analysis of spatial objects (Bivand et al. 2008) and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been interpolation along river networks. We present some examples showing how the package can easily be used for such interpolation. The package will soon be uploaded to CRAN, but is in the meantime also available from R-Forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied spatial data analysis with R. Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40 (1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  19. Implementation of higher-order vertical finite elements in ISSM v4.13 for improved ice sheet flow modeling over paleoclimate timescales

    NASA Astrophysics Data System (ADS)

    Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene

    2018-05-01

    Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet's sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications for the modeled ice viscosity, which can feed back onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since the stress balance and thermal equations are generally solved on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method, implementable within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation on an idealized case and a realistic Greenland configuration. A transient experiment on the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy with the higher-order vertical interpolation compared to models using linear vertical interpolation, despite having fewer degrees of freedom. The method is also shown to improve the model's ability to capture sharp thermal gradients in an ice sheet, particularly close to the bed, compared to models using linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers while dramatically reducing model runtime for transient simulations. Results indicate that, with a higher-order vertical interpolation, runtimes for a transient ice sheet relaxation are 5 to 7 times faster than for a model using linear vertical interpolation, which requires a higher number of vertical layers to achieve similar results in simulated ice volume, basal temperature, and ice divide thickness. The findings suggest that this method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost that would otherwise be needed with linear vertical interpolation.

  20. Use of loading-unloading compression curves in medical device design

    NASA Astrophysics Data System (ADS)

    Ciornei, M. C.; Alaci, S.; Ciornei, F. C.; Romanu, I. C.

    2017-08-01

    The paper presents a method and experimental results regarding the mechanical testing of soft materials. In order to characterize the mechanical behaviour of technological materials used in prostheses, a large number of material constants is required, as well as a comparison with the original tissue. The present paper proposes as methodology the comparison between compression loading-unloading curves for a soft biological tissue and for a synthetic material. To this purpose, a device was designed based on the principle of the dynamic hardness test. A moving load is applied and the force upon the indenter is controlled during the loading and unloading phases. The load and the specimen deformation are recorded simultaneously. A significant contribution of this paper is the interpolation of the experimental data by power-law functions, a difficult task because of the instability of the system of equations to be optimized. Finding the interpolation function was simplified from solving a system of transcendental equations to solving a single equation. The characteristic parameters of the experimental curves must be compared with those of the actual tissue. The tests were performed for two cases: first using a spherical punch, and second using a flat-ended cylindrical punch.
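    For a flavour of the fitting problem, note that a power-law loading branch F = k·d^n becomes linear in log-log coordinates, which reduces the fit to a single linear least-squares solve. A sketch with synthetic indentation data; the paper's single-equation reduction may differ in detail:

```python
# Power-law fit of a loading curve via log-log linearisation (synthetic data).
import numpy as np

d = np.linspace(0.1, 2.0, 40)     # indentation depth (mm)
F = 3.2 * d**1.5 * (1 + 0.02 * np.random.default_rng(2).normal(size=d.size))

# log F = log k + n log d  ->  ordinary least squares for (n, log k)
n, logk = np.polyfit(np.log(d), np.log(F), 1)
print(f"k = {np.exp(logk):.2f}, n = {n:.2f}")   # recovers ~3.2 and ~1.5
```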

  1. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

    The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error; this was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the two approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. The performance of the hybrid method was intermediate between the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.

  2. Super-resolution convolutional neural network for the improvement of the image quality of magnified images in chest radiographs

    NASA Astrophysics Data System (ADS)

    Umehara, Kensuke; Ota, Junko; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki

    2017-02-01

    A single-image super-resolution (SR) method can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing image resolution. In medical imaging, HR images are expected to enable more accurate diagnosis with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset. With the trained SRCNN, the HR image was reconstructed from the LR one. We compared the image quality of the SRCNN with that of conventional image interpolation methods: nearest neighbor, bilinear and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the SRCNN scheme, PSNR and SSIM were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than the conventional interpolation methods, without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution, and that its use can yield substantial improvement of the image quality of magnified images in chest radiographs.
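    The evaluation protocol generalizes beyond SRCNN: downsample, upscale with each interpolation method, and score against the original. A sketch using a stock test image in place of a radiograph, with PSNR/SSIM from scikit-image; the SRCNN itself is not reproduced here:

```python
# PSNR/SSIM comparison of nearest, bilinear and bicubic upscaling.
import numpy as np
from scipy.ndimage import zoom
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

img = data.camera().astype(np.float64) / 255.0   # stand-in for a chest X-ray
lr = zoom(img, 0.5, order=3)                     # simulated low-res acquisition

for name, order in (("nearest", 0), ("bilinear", 1), ("bicubic", 3)):
    hr = zoom(lr, 2.0, order=order)[:img.shape[0], :img.shape[1]]
    psnr = peak_signal_noise_ratio(img, hr, data_range=1.0)
    ssim = structural_similarity(img, hr, data_range=1.0)
    print(f"{name:>8}: PSNR={psnr:5.2f} dB  SSIM={ssim:.3f}")
```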

  3. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.

  4. Feasibility study on a strain based deflection monitoring system for wind turbine blades

    NASA Astrophysics Data System (ADS)

    Lee, Kyunghyun; Aihara, Aya; Puntsagdash, Ganbayar; Kawaguchi, Takayuki; Sakamoto, Hiraku; Okuma, Masaaki

    2017-01-01

    The bending stiffness of wind turbine blades has decreased due to the trend of wind turbine upsizing. Consequently, the risk of blade breakage from striking the tower has increased. In order to prevent such incidents, this study proposes a deflection monitoring system that can be installed on the blades of wind turbines already in operation. The monitoring system is composed of an estimation algorithm to detect blade deflection and a wireless sensor network as the hardware platform. For the estimation of blade deflection, a strain-based estimation algorithm and an objective function for optimal sensor arrangement are proposed. The strain-based estimation algorithm uses a linear relation between strains and deflections, which can be expressed in the form of a transformation matrix. The objective function includes terms for the strain sensitivity and the condition number of the transformation matrix between strain and deflection. In order to calculate the objective function, a simplified experimental model of the blade is constructed by interpolating the mode shapes of the blade obtained from modal testing. This interpolation-based approach is practical for blades of operating wind turbines since it does not require a finite element model of the blade. In addition, a wireless sensor network based on open-source hardware is developed. It is installed on a 300 W scale wind turbine, and blade vibration during operation is investigated.
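    The core of the strain-based estimator is the linear map from gauge strains to deflections. A synthetic sketch: calibrate the transformation matrix T by least squares from paired strain/deflection records, then apply it to new strain readings. The paper's sensor-placement objective penalises, among other terms, the condition number of this matrix:

```python
# Least-squares calibration of a strain-to-deflection transformation matrix.
import numpy as np

rng = np.random.default_rng(3)
T_true = rng.normal(size=(3, 6))          # 3 deflection DOFs from 6 strain gauges

S_cal = rng.normal(size=(6, 50))          # calibration strains (6 gauges x 50 obs)
D_cal = T_true @ S_cal + 0.01 * rng.normal(size=(3, 50))

# minimise ||D - T S||_F over T: solve S^T T^T = D^T in the least-squares sense
T_est, *_ = np.linalg.lstsq(S_cal.T, D_cal.T, rcond=None)
T_est = T_est.T

s_new = rng.normal(size=(6, 1))           # strains measured in operation
print(T_est @ s_new)                      # estimated blade deflection
print(np.linalg.cond(T_est))              # condition number, as in the objective
```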

  5. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with fundamental frequencies from 0.05 to 0.7 Hz and is also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, by comparison, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478
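    The underlying operation is easy to sketch: take the detected isoelectric samples as anchors, linearly interpolate a baseline through them, and subtract it. The sketch below uses plain np.interp on a synthetic drifting signal; the authors' segmented piecewise-linear scheme refines this basic version:

```python
# Baseline-drift removal by linear interpolation through isoelectric anchor points.
import numpy as np

fs, dur = 500, 10                                  # Hz, seconds
t = np.arange(fs * dur) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63            # crude beat-like spikes
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)          # 0.1 Hz baseline wander
signal = ecg + drift

anchors = np.arange(0, fs * dur, int(fs / 1.2))    # stand-in isoelectric samples
baseline = np.interp(np.arange(signal.size), anchors, signal[anchors])
clean = signal - baseline

print(f"RMS drift before: {drift.std():.3f}, after: {(clean - ecg).std():.3f}")
```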

  7. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the proposed method is better than ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed Kriging with variance components is within 1.5 TECU and is smaller than those of the other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than that of Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area.

  9. Importance of interpolation and coincidence errors in data fusion

    NASA Astrophysics Data System (ADS)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method that takes interpolation and coincidence errors into account is presented. This upgrade overcomes the encountered problems and provides products of good quality even when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  10. A Comparison of Spatial Analysis Methods for the Construction of Topographic Maps of Retinal Cell Density

    PubMed Central

    Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. Interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. It preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
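    The interpolation-versus-smoothing trade-off discussed here can be demonstrated in a few lines: grid scattered counts by exact linear interpolation, then smooth the same grid with a Gaussian kernel and watch the noisy peak flatten. Synthetic densities stand in for real cell counts (the paper's own tooling is an R script; this is a Python analogue):

```python
# Interpolation vs. Gaussian kernel smoothing of scattered density samples.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
pts = rng.uniform(0, 1, size=(150, 2))                    # sampling sites
dens = 1e4 * np.exp(-((pts - 0.5) ** 2).sum(1) / 0.05)    # cells/mm^2, one peak
dens *= rng.lognormal(0, 0.2, size=dens.size)             # counting noise

gx, gy = np.mgrid[0:1:100j, 0:1:100j]
interp_map = griddata(pts, dens, (gx, gy), method="linear", fill_value=0.0)
smooth_map = gaussian_filter(interp_map, sigma=4)         # damps outlier spikes

print(interp_map.max(), smooth_map.max())                 # smoothing lowers the peak
```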

  11. A Residual Kriging method for the reconstruction of 3D high-resolution meteorological fields from airborne and surface observations

    NASA Astrophysics Data System (ADS)

    Laiti, Lavinia; Zardi, Dino; de Franceschi, Massimiliano; Rampanelli, Gabriele

    2013-04-01

    Manned light aircraft and remotely piloted aircraft represent very valuable and flexible measurement platforms for atmospheric research, as they are able to provide high temporal and spatial resolution observations of the atmosphere above the ground surface. In the present study the application of a geostatistical interpolation technique called Residual Kriging (RK) is proposed for the mapping of airborne measurements of scalar quantities over regularly spaced 3D grids. In RK the dominant (vertical) trend component underlying the original data is first extracted to filter out local anomalies, then the residual field is separately interpolated and finally added back to the trend; the determination of the interpolation weights relies on the estimate of the characteristic covariance function of the residuals, through the computation and modelling of their semivariogram function. The RK implementation also allows for the inference of the characteristic spatial scales of variability of the target field and its isotropization, and for an estimate of the interpolation error. The adopted test-bed database consists of a series of flights of an instrumented motorglider exploring the atmosphere of two valleys near the city of Trento (in the southeastern Italian Alps), performed on fair-weather summer days. The RK method is used to reconstruct fully 3D high-resolution fields of potential temperature and mixing ratio for specific vertical slices of the valley atmosphere, integrating also ground-based measurements from the nearest surface weather stations. From the RK-interpolated meteorological fields, fine-scale features of the atmospheric boundary layer developing over the complex valley topography, in connection with the occurrence of thermally driven slope and valley winds, are detected. The performance of RK mapping is also tested against two other commonly adopted interpolation methods, i.e. the Inverse Distance Weighting and the Delaunay triangulation methods, comparing the results of a cross-validation procedure.
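
    A minimal sketch of the RK workflow on synthetic data (extract the vertical trend, krige the residuals, add the trend back), assuming the pykrige package for the ordinary-kriging step; the authors' actual implementation is not given in the abstract:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)          # along-valley distance (km)
z = rng.uniform(0, 3, 300)           # height above ground (km)
theta = 300 + 4.0 * z + rng.normal(0, 0.3, 300)  # potential temperature (K)

# 1) extract the dominant vertical trend (here a quadratic fit in height)
coef = np.polyfit(z, theta, deg=2)
resid = theta - np.polyval(coef, z)

# 2) interpolate the residual field by ordinary kriging; the residual
#    semivariogram model carries the interpolation weights
ok = OrdinaryKriging(x, z, resid, variogram_model='exponential')
gx, gz = np.linspace(0, 10, 101), np.linspace(0, 3, 31)
res_grid, var_grid = ok.execute('grid', gx, gz)  # var_grid ~ error estimate

# 3) add the vertical trend back on the target grid
theta_grid = res_grid + np.polyval(coef, gz)[:, None]
```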

  12. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
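
    The interpolation matrix in question is easy to build and probe numerically. The sketch below forms the matrix A with entries ||x_i - x_j||_p for random points and checks its conditioning; note that the singular configurations for p > 2 are specially constructed, so random point sets will usually still be invertible:

```python
import numpy as np

def pnorm_interp_matrix(points, p):
    """Interpolation matrix A_ij = ||x_i - x_j||_p for phi(x) = ||x||_p."""
    diff = points[:, None, :] - points[None, :, :]
    return (np.abs(diff) ** p).sum(-1) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
for p in (1.5, 3.0):
    A = pnorm_interp_matrix(X, p)
    print(f"p={p}: condition number {np.linalg.cond(A):.1e}")

# Solving A c = f gives the coefficients of the RBF interpolant
# s(x) = sum_j c_j ||x - x_j||_p, which matches f at every point.
f = rng.normal(size=20)
c = np.linalg.solve(pnorm_interp_matrix(X, 1.5), f)
```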

  13. A novel interpolation approach for the generation of 3D-geometric digital bone models from image stacks

    PubMed Central

    Mittag, U.; Kriechbaumer, A.; Rittweger, J.

    2017-01-01

    The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray-value profiles of the pQCT cross sections. The method has been validated using an ex-vivo human tibia, by comparing interpolated pQCT images with images from scans taken at the same positions. A diversity index of <0.4 (1 meaning maximal diversity), even for the structurally complex region of the epiphysis, along with the good agreement of the mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrates the high quality of the interpolation approach. Thus, the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value-derived material property distribution. PMID:28574415

  14. INTERPOL's Surveillance Network in Curbing Transnational Terrorism

    PubMed Central

    Gardeazabal, Javier; Sandler, Todd

    2015-01-01

    Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.

  15. Spatial Interpolation of Fine Particulate Matter Concentrations Using the Shortest Wind-Field Path Distance

    PubMed Central

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind field on the distribution of the particulate matter concentration. In this method, the wind field is incorporated by first interpolating the observed wind field from a meteorological-station network, then using this continuous wind field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. The proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
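
    A toy version of the scheme, with shortest-path distances over a cost surface substituted for Euclidean distances in IDW, might look like the following; the two-valued cost field stands in for the Gaussian-dispersion cost surface and all station values are hypothetical:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

# Toy 20x20 cost surface: crossing the right half (e.g. upwind) is expensive.
n = 20
cost = np.ones((n, n))
cost[:, 10:] = 5.0

# 4-neighbour graph whose edge weights average the two adjacent cell costs.
idx = lambda i, j: i * n + j
G = lil_matrix((n * n, n * n))
for i in range(n):
    for j in range(n):
        for di, dj in ((0, 1), (1, 0)):
            if i + di < n and j + dj < n:
                w = 0.5 * (cost[i, j] + cost[i + di, j + dj])
                G[idx(i, j), idx(i + di, j + dj)] = w
D = shortest_path(G.tocsr(), directed=False)   # all-pairs path distances

# IDW with Euclidean distances replaced by shortest-path distances.
stations = [idx(2, 2), idx(15, 4), idx(8, 17)]
obs = np.array([80.0, 40.0, 120.0])            # toy PM concentrations
d = D[stations, :]                             # (3, n*n) path distances
w = 1.0 / np.maximum(d, 1e-9) ** 2
pm_map = ((w * obs[:, None]).sum(0) / w.sum(0)).reshape(n, n)
```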

  17. [A correction method of baseline drift of discrete spectrum of NIR].

    PubMed

    Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-10-01

    In the present paper, a new correction method for baseline drift of discrete spectra is proposed, combining cubic spline interpolation and the first-order derivative. A fitting spectrum is constructed by cubic spline interpolation, using the data points of the discrete spectrum as interpolation nodes. The fitting spectrum is differentiable, and the first-order derivative is applied to it to calculate the derivative spectrum. Values at the same wavelengths as the original discrete spectrum are then taken from the derivative spectrum to constitute the first-derivative spectrum of the discrete spectrum, thereby correcting the baseline drift. The effects of the new method were demonstrated by comparing the performance of multivariate models built using the original spectra, directly differentiated spectra, and spectra pretreated by the new method. The results show that the negative effects on multivariate model performance caused by baseline drift of discrete spectra can be effectively eliminated by the new method.
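
    The three steps of the method map directly onto scipy: fit a cubic spline through the discrete points, differentiate it analytically, and resample at the original wavelengths. The sketch below uses a synthetic spectrum with a linear drift; all values are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Discrete NIR spectrum: wavelengths (nm) and absorbances with a drifting
# baseline (all values synthetic).
wl = np.linspace(1100, 2500, 120)
peaks = np.exp(-((wl - 1700) / 40) ** 2) + 0.5 * np.exp(-((wl - 2100) / 60) ** 2)
spectrum = peaks + 0.002 * (wl - 1100)          # linear baseline drift

# 1) fit a differentiable spline through the discrete points,
# 2) differentiate it analytically,
# 3) evaluate the derivative back at the original wavelengths.
cs = CubicSpline(wl, spectrum)
deriv_spectrum = cs.derivative()(wl)
# The linear drift becomes a constant offset in the derivative, so peak
# information is preserved while the varying baseline is removed.
```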

  18. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    The elastic-wave reverse-time migration of inhomogeneous anisotropic media is an active area of research. In order to ensure the accuracy of the migration, it is necessary to separate the wavefield into P- and S-waves before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P- and S-waves at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational cost, wave-mode separation in the mixed domain can be realized on the basis of a reference model in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a random-point scheme for reference-model selection. The method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference point, so the interpolation takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method can separate the wave modes more accurately using fewer reference models and has good practical value.

  19. A novel method for interactive multi-objective dose-guided patient positioning

    NASA Astrophysics Data System (ADS)

    Haehnle, Jonas; Süss, Philipp; Landry, Guillaume; Teichert, Katrin; Hille, Lucas; Hofmaier, Jan; Nowak, Dimitri; Kamp, Florian; Reiner, Michael; Thieke, Christian; Ganswindt, Ute; Belka, Claus; Parodi, Katia; Küfer, Karl-Heinz; Kurz, Christopher

    2017-01-01

    In intensity-modulated radiation therapy (IMRT), 3D in-room imaging data is typically utilized for accurate patient alignment on the basis of anatomical landmarks. In the presence of non-rigid anatomical changes, it is often not obvious which patient position is most suitable. Thus, dose-guided patient alignment is an interesting approach to use available in-room imaging data for up-to-date dose calculation, aimed at finding the position that yields the optimal dose distribution. This contribution presents the first implementation of dose-guided patient alignment as a multi-criteria optimization problem. User-defined clinical objectives are employed for setting up a multi-objective problem. Using pre-calculated dose distributions at a limited number of patient shifts and dose interpolation, a continuous space of Pareto-efficient patient shifts becomes accessible. Pareto sliders facilitate interactive browsing of the possible shifts with real-time dose display to the user. Dose interpolation accuracy is validated and the potential of multi-objective dose-guided positioning demonstrated for three head and neck (H&N) and three prostate cancer patients. Dose-guided positioning is compared to replanning for all cases. A delineated replanning CT served as surrogate for in-room imaging data. Dose interpolation accuracy was high: using a 2% dose difference criterion, a median pass-rate of 95.7% for H&N and 99.6% for prostate cases was determined in a comparison to exact dose calculations. For all patients, dose-guided positioning made it possible to find a clinically preferable dose distribution compared to bony-anatomy-based alignment. For all H&N cases, the mean dose to the spared parotid glands was below 26 Gy (up to 27.5 Gy with bony alignment) and the clinical target volume (CTV) V95% was above 99.1% (compared to 95.1%). For all prostate patients, the CTV V95% was above 98.9% (compared to 88.5%) and the rectum V50Gy was below 50% (compared to 56.1%). Replanning yielded improved results for the H&N cases; for the prostate cases, differences from dose-guided positioning were minor.

  20. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the RF complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with a loss of accuracy limited to about 5%.

  1. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
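
    A minimal numpy sketch of the underlying machinery, a Gaussian-process posterior whose covariance quantifies interpolation uncertainty, on a 1D base grid (generic GP regression, not the authors' registration pipeline):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, ell=1.0, sigma_n=0.05):
    """Gaussian-process regression with a squared-exponential kernel.
    Returns the posterior mean and covariance at x_test; the posterior
    covariance is the interpolation-uncertainty estimate."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_train, x_train) + sigma_n ** 2 * np.eye(x_train.size)
    Ks = k(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = k(x_test, x_test) - v.T @ v
    return mean, cov

# Base-grid intensities; uncertainty grows between the grid points.
xg = np.arange(0.0, 10.0)
yg = np.sin(xg)
xq = np.linspace(0, 9, 91)
mu, cov = gp_posterior(xg, yg, xq)
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # low on-grid, higher between
```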

  2. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    NASA Astrophysics Data System (ADS)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results are shown to be better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of the above assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to the rejection of both assumptions considered. Usually, including more than two or three of the most significantly correlated auxiliary variables does not improve the quality of the spatial model. The effects of introducing certain variables into the model were not climatologically justified and appeared on maps as unexpected and undesired artefacts. The results confirm, in accordance with previous studies, that in the case of air temperature distribution the spatial process is non-stationary; thus, the local GWR model performs better than the global MLR if they are specified using the same set of auxiliary variables. If the GWR residuals are autocorrelated, the geographically weighted regression-kriging (GWRK) model seems to be optimal for air temperature spatial interpolation.

  3. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed, and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps find an optimised and accurate model for coverage prediction.
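
    The cross-validation loop described above, holding out check points from the drive-test data and comparing IDW powers by their prediction error, can be sketched as follows on synthetic signal-level data (coordinates and levels are hypothetical):

```python
import numpy as np

def idw(xy_obs, v_obs, xy_pred, power):
    """Inverse Distance Weighting prediction at xy_pred."""
    d = np.linalg.norm(xy_pred[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * v_obs).sum(1) / w.sum(1)

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1000, (400, 2))                   # drive-test points (m)
rssi = -60 - 0.02 * xy[:, 0] + rng.normal(0, 3, 400)  # toy signal level (dBm)

# Hold out check points from the same drive test and compare IDW powers.
test = rng.choice(400, 80, replace=False)
train = np.setdiff1d(np.arange(400), test)
for p in (1, 2, 3):
    pred = idw(xy[train], rssi[train], xy[test], power=p)
    rmse = np.sqrt(np.mean((pred - rssi[test]) ** 2))
    print(f"IDW power={p}: RMSE={rmse:.2f} dB")
```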

  4. The effect of administrative boundaries and geocoding error on cancer rates in California.

    PubMed

    Goldberg, Daniel W; Cockburn, Myles G

    2012-04-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Time-stable overset grid method for hyperbolic problems using summation-by-parts operators

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.

    2018-05-01

    A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
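
    The Lagrangian interpolation used for the inter-grid transfer reduces to a set of weights applied to donor-grid values. A minimal sketch of generic Lagrange weights (not the paper's SAT closure):

```python
import numpy as np

def lagrange_weights(nodes, x):
    """Row of Lagrange interpolation weights: f(x) ~ w @ f(nodes)."""
    w = np.ones(nodes.size)
    for j in range(nodes.size):
        others = np.delete(nodes, j)
        w[j] = np.prod((x - others) / (nodes[j] - others))
    return w

# Donor-grid stencil values are transferred to a receiver point by
# applying the weights to the incoming characteristic variables.
donor_x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
donor_q = np.sin(np.pi * donor_x)        # characteristic variable, donor grid
w = lagrange_weights(donor_x, 0.37)      # receiver point inside the overlap
q_recv = w @ donor_q                     # interpolated incoming data
```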

  6. Interpolate with DIVA and view the products in OceanBrowser: what's up?

    NASA Astrophysics Data System (ADS)

    Watelet, Sylvain; Barth, Alexander; Beckers, Jean-Marie; Troupin, Charles

    2017-04-01

    The Data-Interpolating Variational Analysis (DIVA) software is a statistical tool designed to reconstruct a continuous field from discrete measurements. This method is based on the numerical implementation of the Variational Inverse Model (VIM), which consists of the minimization of a cost function, allowing the choice of the analysed field that best fits the data sets without exhibiting unrealistically strong variations. The problem is solved efficiently using a finite-element method. This method, equivalent to Optimal Interpolation, is particularly suited to dealing with irregularly spaced observations and produces outputs on a regular grid (2D, 3D or 4D). The results are stored in NetCDF files, the most widespread format in the earth sciences community. OceanBrowser is a web service that allows one to visualize gridded fields on-line. Within the SeaDataNet and EMODNET (Chemical lot) projects, several national ocean data centres have created gridded climatologies of different ocean properties using the data analysis software DIVA. In order to give a common viewing service to those interpolated products, the GHER has developed OceanBrowser, which is based on open standards from the Open Geospatial Consortium (OGC), in particular the Web Map Service (WMS) and Web Feature Service (WFS). These standards define a protocol for describing, requesting and querying two-dimensional maps at a given depth and time. DIVA and OceanBrowser are both software tools that are continuously upgraded and distributed for free through frequent version releases. The development is funded by the EMODnet and SeaDataNet projects and includes many discussions and much feedback from the user community. Here, we present two recent major upgrades. First, we have implemented a "customization" of DIVA analyses that follows the sea bottom, using the bottom depth gradient as a new source of information: the weaker the slope of the ocean bottom, the larger the correlation length. Since the correlation length is associated with the propagation of information, it is harder to interpolate across topographic "barriers" such as the continental slope and easier in the direction perpendicular to the gradient. Although realistic for most applications, this behaviour can always be disabled by the user. Second, we have added combined products in OceanBrowser covering all European seas at once. Based on the analyses performed by the other EMODnet partners using DIVA on five zones (Atlantic, North Sea, Baltic Sea, Black Sea, Mediterranean Sea), we have computed a single European product for five variables: ammonium, chlorophyll-a, dissolved oxygen concentration, phosphate and silicate. At the boundaries, a smoothing filter was used to remove possible discrepancies between regional analyses. Our European combined product is available for all seasons and several depths. This is the first step towards the use of a common reference field for all European seas when running DIVA.

  7. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximate likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
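
    The surrogate idea, tabulating the expensive forward model once so that every likelihood evaluation interpolates instead of solving the transport equation, can be sketched with a regular grid standing in for the paper's adaptive sparse grid; the forward model and values are toy assumptions:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for the expensive transport solver: concentration at a fixed
# sensor as a function of two source parameters (location, strength).
def forward_model(loc, strength):
    return strength * np.exp(-(loc - 3.0) ** 2)

# Tabulate the model once on a coarse parameter grid ...
locs = np.linspace(0, 10, 21)
strengths = np.linspace(0, 5, 11)
table = forward_model(locs[:, None], strengths[None, :])
surrogate = RegularGridInterpolator((locs, strengths), table)

# ... then every MCMC likelihood evaluation interpolates the table
# instead of re-solving the transport equation.
def log_likelihood(theta, y_obs, sigma=0.1):
    y_pred = surrogate(theta)            # theta = (location, strength)
    return -0.5 * ((y_obs - y_pred) / sigma) ** 2

print(log_likelihood(np.array([[2.7, 1.4]]), y_obs=1.2))
```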

  8. Ionospheric gravity wave measurements with the USU dynasonde

    NASA Technical Reports Server (NTRS)

    Berkey, Frank T.; Deng, Jun Yuan

    1992-01-01

    A method for the measurement of ionospheric gravity waves (GWs) using the USU Dynasonde is outlined. This method consists of a series of individual procedures, which include functions for data acquisition, adaptive scaling, polarization discrimination, interpolation and extrapolation, digital filtering, windowing, spectrum analysis, GW detection, and graphics display. Concepts of system theory are applied to treat the ionosphere as a system. An adaptive ionogram scaling method was developed for automatically extracting ionogram echo traces from noisy raw sounding data. The method uses the well known Least Mean Square (LMS) algorithm to form a stochastic optimal estimate of the echo trace, which is then used to control a moving window. The window tracks the echo trace, simultaneously eliminating the noise and interference. Experimental results show that the proposed method functions as designed. Case studies which extract GWs from ionosonde measurements were carried out using the techniques described. Geophysically significant events were detected and the resulting processed data are illustrated graphically. This method was also developed with real-time implementation in mind.
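
    The LMS update at the heart of the adaptive scaling step is compact. The sketch below runs a standard LMS linear predictor over a noisy synthetic echo trace (a generic LMS filter, not the USU processing chain):

```python
import numpy as np

def lms(d, x, n_taps=8, mu=0.01):
    """Least-mean-square adaptive filter: predict the desired signal d
    from past samples of x; returns the running filter output."""
    w = np.zeros(n_taps)
    y = np.zeros_like(d)
    for n in range(n_taps, d.size):
        u = x[n - n_taps:n][::-1]    # most recent samples first
        y[n] = w @ u
        w += mu * (d[n] - y[n]) * u  # steepest-descent weight update
    return y

# Toy echo trace buried in noise; the LMS estimate can drive a moving
# tracking window, as in the adaptive scaling described above.
t = np.arange(2000)
trace = np.sin(2 * np.pi * t / 400)
noisy = trace + np.random.default_rng(3).normal(0, 0.5, t.size)
est = lms(noisy, noisy, n_taps=16, mu=0.005)
```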

  9. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which the photogrammetric methods require multiple stereo images of an area. DEMs generated with these methods usually involve various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this is even more plausible for the lunar surface, whose albedo is less complex than Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the area with a known light source, while simultaneously estimating the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of any specific reflectance model. Experiments are carried out using monocular images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm, while the low-frequency topographic consistency is governed by the low-resolution DEM.
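
    The forward half of any shape-from-shading scheme is rendering an image from a DEM and an albedo map under a reflectance model. The sketch below uses plain Lambertian shading in place of the Lunar-Lambertian model for brevity; the DEM, albedo and sun direction are toy values:

```python
import numpy as np

def lambert_shade(dem, albedo, sun_dir, cell=1.0):
    """Render a Lambertian image I = albedo * max(0, n . s) from a DEM;
    the SAfS optimization would adjust dem and albedo until this render
    matches the observed monocular image."""
    dzdy, dzdx = np.gradient(dem, cell)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(dem)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    s = np.asarray(sun_dir, float)
    s /= np.linalg.norm(s)
    return albedo * np.clip(n @ s, 0.0, None)

# Toy crater DEM at 0.5 m spacing with uniform albedo.
y, x = np.mgrid[-50:50, -50:50] * 0.5
dem = -5.0 * np.exp(-(x ** 2 + y ** 2) / 200.0)
img = lambert_shade(dem, albedo=0.15, sun_dir=(1.0, 0.0, 0.5), cell=0.5)
```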

  10. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time-shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram are initially detected and organized as a new baseline vector. Iterative optimization is then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the outliers in the baseline until convergence. The optimized baseline vector is finally expanded into the original chromatogram, and linear interpolation is employed to estimate the background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data used in the metabolic study of Escherichia coli samples. The proposed method was compared with three classical techniques: morphological weighted penalized least squares, the moving-window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
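
    One plausible reading of the described procedure (local minima as baseline nodes, iterative outlier replacement, linear interpolation back into the chromatogram) is sketched below on a synthetic chromatogram; the window sizes and iteration counts are assumptions:

```python
import numpy as np
from scipy.signal import argrelmin

def baseline(chrom, n_iter=50):
    """Local minima seed a baseline vector; nodes sitting above the locally
    smoothed baseline (peak outliers) are pulled down iteratively; linear
    interpolation expands the result to the full chromatogram."""
    t = np.arange(chrom.size)
    idx = argrelmin(chrom, order=5)[0]          # candidate baseline nodes
    idx = np.concatenate(([0], idx, [chrom.size - 1]))
    b = chrom[idx].copy()
    for _ in range(n_iter):
        smooth = np.convolve(b, np.ones(5) / 5, mode='same')
        outlier = b > smooth                    # nodes still on a peak flank
        if not outlier.any():
            break
        b[outlier] = smooth[outlier]            # update outliers, keep rest
    return np.interp(t, idx, b)                 # expand by linear interp.

rng = np.random.default_rng(4)
t = np.linspace(0, 30, 3000)
drift = 0.02 * t + 0.3 * np.sin(t / 6)
peaks = sum(h * np.exp(-((t - c) / 0.08) ** 2)
            for c, h in [(8, 2), (15, 3), (22, 1.5)])
signal = drift + peaks + rng.normal(0, 0.01, t.size)
corrected = signal - baseline(signal)
```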

  11. Biped Robot Gait Planning Based on 3D Linear Inverted Pendulum Model

    NASA Astrophysics Data System (ADS)

    Yu, Guochen; Zhang, Jiapeng; Bo, Wu

    2018-01-01

    In order to optimize the biped robot's gait, the robot's walking motion is simplified to the 3D linear inverted pendulum motion model. The Center of Mass (CoM) locus is determined from the relationship between the CoM and the Zero Moment Point (ZMP) locus, with the ZMP locus planned in advance. Then, the forward gait and lateral gait are simplified as connecting-rod structures. The swing-leg trajectory is generated using B-spline interpolation, and the stability of the walking process is discussed in conjunction with the ZMP equation. Finally, a system simulation is carried out under the given conditions to verify the validity of the proposed planning method.
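
    A sketch of the B-spline swing-leg step on hypothetical key poses, using scipy's spline constructor; the key-pose values and spline order are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Key poses of the swing foot: lift-off, mid-swing apex, touch-down.
t_key = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # phase of swing
x_key = np.array([0.00, 0.05, 0.10, 0.15, 0.20])    # forward position (m)
z_key = np.array([0.00, 0.03, 0.05, 0.03, 0.00])    # foot height (m)

# A cubic B-spline through the key poses gives a smooth (C2) trajectory,
# which keeps the ZMP-based stability analysis well behaved.
spl_x = make_interp_spline(t_key, x_key, k=3)
spl_z = make_interp_spline(t_key, z_key, k=3)

t = np.linspace(0, 1, 200)
foot_x, foot_z = spl_x(t), spl_z(t)
foot_vz = spl_z.derivative()(t)    # smooth velocity for the controller
```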

  12. Spatial correlation of auroral zone geomagnetic variations

    NASA Astrophysics Data System (ADS)

    Jackel, B. J.; Davalos, A.

    2016-12-01

    Magnetic field perturbations in the auroral zone are produced by a combination of distant ionospheric and local ground-induced currents. The spatial and temporal structure of these currents is scientifically interesting and can also have a significant influence on critical infrastructure. Ground-based magnetometer networks are an essential tool for studying these phenomena, with the existing complement of instruments in Canada providing extended local time coverage. In this study we examine the spatial correlation between magnetic field observations over a range of scale lengths. Principal component and canonical correlation analysis are used to quantify relationships between multiple sites. Results could be used to optimize network configurations, validate computational models, and improve methods for empirical interpolation.

  13. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-and-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method, presenting its development process along with the estimation method itself.

  14. Edge directed image interpolation with Bamberger pyramids

    NASA Astrophysics Data System (ADS)

    Rosiles, Jose Gerardo

    2005-08-01

    Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical methods for resizing produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm that takes advantage of simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.

  15. Optimization of Premix Powders for Tableting Use.

    PubMed

    Todo, Hiroaki; Sato, Kazuki; Takayama, Kozo; Sugibayashi, Kenji

    2018-05-08

    Direct compression is a popular choice as it provides the simplest way to prepare a tablet, and it can easily be adopted when the active pharmaceutical ingredient (API) is unstable in water or under thermal drying. An optimal formulation of preliminarily mixed powders (premix powders) is beneficial if prepared in advance for tableting use. The aim of this study was to find the optimal formulation of premix powders composed of lactose (LAC), cornstarch (CS) and microcrystalline cellulose (MCC) using statistical techniques. Based on the "Quality by Design" concept, a (3,3)-simplex lattice design consisting of the three components LAC, CS and MCC was employed to prepare the model premix powders. A response surface method incorporating thin-plate spline interpolation (RSM-S) was applied to estimate the optimum premix powders for tableting use. The effect of tablet shape, identified by the surface curvature, on the optimization was investigated. The optimum premix powder was effective when applied to formulations with a small quantity of API, although its function was limited for formulations with a large amount of API. Statistical techniques are valuable for exploiting new functions of well-known materials such as LAC, CS and MCC.

  16. New method for optimizing the cost of a flight using a flight management system, and its validation on a Lockheed L-1011 TriStar aircraft

    NASA Astrophysics Data System (ADS)

    Gagne, Jocelyn

    Usually, flight optimization and planning take place before the flight, on the ground. However, it is not always feasible to do such optimization, and sometimes unpredictable events may force pilots to change the flight path. In those circumstances, the pilots can only rely on charts or their Flight Management System (FMS) in order to maintain an economical flight. However, those FMS often rely on those same charts, which do not take into consideration parameters such as the cost index, the length of the flight or the weather. Even if some FMS take the weather into consideration, they may rely only on manually entered or limited data that could be outdated, insufficient or incomplete. To alleviate these problems, the main function of the program developed here is to determine the optimum flight profile for an aircraft, that is, the profile with the lowest overall cost, given a take-off weight and weather conditions. The total cost is based on the value of time as well as the cost of fuel, through the use of a ratio called the cost index. This index makes it possible to prioritize either time or fuel consumption according to the costs related to a specific flight and/or airline. Thus, from a weight, the weather (wind, temperature, pressure) and the cost index, the program calculates from the "Performance DataBase" (PDB) of a specific airplane an optimal flight profile over a given distance. The algorithm is based on linear interpolations in the performance tables using the Lagrange method. Moreover, in order to fully optimize the flight, the program can, according to the departure date and coordinates, download the latest available forecast from the Environment Canada website and calculate the optimum flight accordingly. The forecast data used by the program take the form of a 0.6 × 0.6 degree grid in which the effects of wind, pressure and temperature are interpolated according to the aircraft's geographical position and time. Using these performance tables and forecasts, the program is therefore able to calculate the optimum profile on the ground, but also in flight if any change occurs along the path. Because all data are tabulated rather than computed, the required computing power remains low, resulting in short calculation times. Keywords: optimization, algorithm, simulation, cost.

  17. Application of spatial methods to identify areas with lime requirement in eastern Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo

    2016-04-01

    With more than 50% of all agricultural land in Croatia on acid soils, soil acidity is recognized as a serious problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory measure for the reclamation of acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because the determination of cation exchange capacity, hydrolytic acidity and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated from their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine the possibility of using multivariate geostatistics to reduce the number of samples needed to determine the hydrolytic acidity, all without significantly reducing the accuracy of the spatial distribution of the liming requirement. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (0-30 cm) randomly collected in agricultural fields near Orahovica in eastern Croatia. This study tested 14 univariate interpolation models (part of the ArcGIS software package) to provide the most accurate spatial map of hydrolytic acidity on the basis of all samples (Y1 100%) and of data sets with 15% (Y1 85%), 30% (Y1 70%) and 50% (Y1 50%) fewer samples. In parallel with the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by co-kriging with exchangeable acidity (pH in KCl) as a covariate. The soils in the study area had an average pH (KCl) of 4.81 and an average Y1 of 10.52 cmol+ kg-1. These data suggest that liming is a necessary agrotechnical measure for soil conditioning. The results show that ordinary kriging was the most accurate univariate interpolation method, with the smallest error (RMSE) in all four data sets, while the least precise were the radial basis functions (thin plate spline and inverse multiquadratic). Furthermore, a trend of increasing RMSE with a reduced number of samples is noticeable for the most accurate univariate interpolation model: 3.096 (Y1 100%), 3.258 (Y1 85%), 3.317 (Y1 70%), 3.546 (Y1 50%). The best-fit semivariograms show a strong spatial dependence for Y1 100% (nugget/sill 20.19%) and Y1 85% (nugget/sill 23.83%), while a further reduction of the number of samples resulted in moderate spatial dependence (nugget/sill 35.85% for Y1 70% and 32.01% for Y1 50%). The co-kriging method reduced the RMSE compared with the univariate interpolation methods for each reduced data set: 2.054, 1.731 and 1.734 for Y1 85%, Y1 70% and Y1 50%, respectively. These results show the possibility of reducing sampling costs by using co-kriging, which is useful from a practical viewpoint. Halving the number of samples used to determine hydrolytic acidity, in interaction with the soil pH, provides higher precision for variable-rate liming than univariate interpolation of the entire data set. These findings provide new opportunities to reduce costs in practical plant production in Croatia.

  18. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, the optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). More complex chamber geometry was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps: greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
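
    The in silico part of the experiment, sampling a known activation map at a given density, re-interpolating and measuring the error, can be reproduced in miniature as follows (a toy focal pattern with linear re-interpolation; the authors' mapping-system interpolation may differ):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(5)
gx, gy = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
lat_true = 20 * np.hypot(gx - 2, gy - 2)       # toy focal activation (ms)

def map_error(density, size=4.0):
    """Sample at the given density (points/cm2), re-interpolate the LAT
    map and return its mean absolute error against the ground truth."""
    n = max(4, int(density * size * size))
    pts = rng.uniform(0, size, (n, 2))
    samples = 20 * np.hypot(pts[:, 0] - 2, pts[:, 1] - 2)
    lat_hat = griddata(pts, samples, (gx, gy), method='linear')
    return np.nanmean(np.abs(lat_hat - lat_true))  # NaN outside convex hull

# Error reduction flattens with density: the knee is the optimal density.
for dens in (0.25, 0.5, 1.0, 2.0, 4.0):
    errs = [map_error(dens) for _ in range(20)]
    print(f"{dens:4.2f} pts/cm2 -> mean LAT error {np.mean(errs):5.2f} ms")
```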

  19. A multistage motion vector processing method for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement, using a constrained vector median filter to avoid choosing identical unreliable vectors. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
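
    A plain (unconstrained) vector median filter, the building block that the paper constrains with reliability information, can be sketched in a few lines; the candidate vectors are hypothetical:

```python
import numpy as np

def vector_median(candidates):
    """Vector median: the candidate minimizing the summed L1 distance to
    all other candidates. A constrained variant would restrict the
    candidate set to motion vectors flagged as reliable."""
    c = np.asarray(candidates, float)
    dist = np.abs(c[:, None, :] - c[None, :, :]).sum(-1).sum(1)
    return c[np.argmin(dist)]

# Candidate motion vectors from neighbouring blocks; the outlier (8, -7)
# cannot win because the output must be one of the inputs.
mvs = [(1, 0), (2, 1), (1, 1), (8, -7), (2, 0)]
print(vector_median(mvs))   # -> [2. 0.], a central candidate
```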

  20. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique for quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique uses adaptive tuning on the PCHIP and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest-neighbor interpolation, first-order interpolation and the original PCHIP, are compared with the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method for estimating SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
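
    A sketch of the plain-PCHIP baseline that ATPCHIP improves on: extrapolate the autocorrelation from nonzero lags back to lag zero to estimate the noise-free peak, then form the SNR. The lag range and the SNR definition are assumptions based on the standard autocorrelation approach:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def snr_from_autocorrelation(img, max_lag=5):
    """Estimate image SNR by extrapolating the noise-free zero-lag value
    of the row autocorrelation with a PCHIP model (plain PCHIP, without
    the adaptive tuning that ATPCHIP adds)."""
    rows = img - img.mean()
    r = np.array([np.mean(rows[:, :rows.shape[1] - k] * rows[:, k:])
                  for k in range(max_lag + 1)])
    # The noise spike affects lag 0 only, so a PCHIP fit through lags 1..5
    # extrapolated back to lag 0 estimates the noise-free peak.
    r0_hat = PchipInterpolator(np.arange(1, max_lag + 1), r[1:],
                               extrapolate=True)(0.0)
    noise_var = r[0] - r0_hat
    return float(r0_hat / noise_var)   # signal variance / noise variance

rng = np.random.default_rng(6)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + rng.normal(0, 0.05, clean.shape)
print(snr_from_autocorrelation(noisy))
```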

  1. A New Ensemble Canonical Correlation Prediction Scheme for Seasonal Precipitation

    NASA Technical Reports Server (NTRS)

    Kim, Kyu-Myong; Lau, William K. M.; Li, Guilong; Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    This paper describes the fundamental theory of the ensemble canonical correlation (ECC) algorithm for seasonal climate forecasting. The algorithm is a statistical regression scheme based on maximal correlation between the predictor and predictand. The prediction error is estimated by a spectral method using a basis of empirical orthogonal functions. The ECC algorithm treats the predictors and predictands as continuous fields and is an improvement over traditional canonical correlation prediction. The improvements include the use of an area factor, estimation of the prediction error, and the optimal ensemble of multiple forecasts. The ECC is applied to seasonal forecasting over various parts of the world. The example presented here is for North American precipitation. The predictor is the sea surface temperature (SST) from different ocean basins. The Climate Prediction Center's reconstructed SST (1951-1999) is used as the predictor's historical data. The optimally interpolated global monthly precipitation is used as the predictand's historical data. Our forecast experiments show that the ECC algorithm renders very high skill, and the optimal ensemble is essential to achieving it.
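
    The regression core of a CCA-based forecast can be sketched with scikit-learn on synthetic SST and precipitation anomalies. This shows plain CCA prediction only; the ECC's area factor, error estimation and ensemble weighting are not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)
n_years = 60
# Predictor: SST anomalies at 10 points; predictand: precipitation anomalies
# at 8 points, partly driven by the same hidden climate signal.
signal = rng.normal(size=(n_years, 1))
sst = signal @ rng.normal(size=(1, 10)) + 0.5 * rng.normal(size=(n_years, 10))
precip = signal @ rng.normal(size=(1, 8)) + 0.5 * rng.normal(size=(n_years, 8))

cca = CCA(n_components=2).fit(sst[:40], precip[:40])   # training years
precip_hat = cca.predict(sst[40:])                     # forecast years
skill = np.corrcoef(precip_hat.ravel(), precip[40:].ravel())[0, 1]
print(f"anomaly correlation skill: {skill:.2f}")
# An ensemble version would combine several such forecasts (e.g. from
# different ocean basins) with weights chosen to minimize expected error.
```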

  2. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequencies; hence, the relevant ODSs are estimated more reliably. Furthermore, several methods have been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. To reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error of the mode shapes only, rather than of all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the interpolation error limited to the mode shapes) is reported.
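
    The IM damage feature, the mismatch between each measured mode-shape value and a spline interpolated through the remaining sensors, is easy to sketch on a toy beam mode; the sensor layout and damage model are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolation_error(x, shape):
    """At each sensor, the gap between the measured mode-shape value and a
    spline through the remaining sensors. A localized stiffness loss shows
    up as a local spike in this error."""
    err = np.zeros_like(shape)
    for i in range(1, x.size - 1):              # skip the end supports
        keep = np.arange(x.size) != i
        err[i] = abs(CubicSpline(x[keep], shape[keep])(x[i]) - shape[i])
    return err

# Toy first mode of a beam with a small kink (damage) at mid-span.
x = np.linspace(0, 1, 21)
mode = np.sin(np.pi * x)
mode[10] *= 0.97                                # local smoothness reduction
print(np.argmax(interpolation_error(x, mode)))  # should peak at index 10
```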

  3. Evaluation of rainfall structure on hydrograph simulation: Comparison of radar and interpolated methods, a study case in a tropical catchment

    NASA Astrophysics Data System (ADS)

    Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.

    2017-12-01

    The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses, which translate into discharge simulation uncertainties. Distributed hydrological models require evolving rainfall fields at the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well maintained rain gauge network provide a very good estimate of the total volume of each rainfall event; however, the spatial structure relies on interpolation strategies, introducing considerable uncertainty into the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better representation of the spatial structure, but with higher uncertainty in the surface precipitation intensity and volume, depending on the vertical rainfall characteristics and the radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of rainfall spatio-temporal variability, a numerical modelling experiment is proposed, including a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms; the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the simulated peak flows are assessed. Results show significant rainfall differences between the radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores and to artificially generate rainfall in some areas of the catchment. Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on the density of the rain gauge network and its distribution relative to the precipitation events. The results for the 84 storms show better model skill using radar QPE than the interpolated fields. Results using interpolated fields are strongly affected by the dominant rainfall type and the basin scale.

  4. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    PubMed Central

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626
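
    The two traditional HVL estimators that serve as the baseline above are easy to state; the sketch below implements both on hypothetical transmission data. The Lambert W model itself is not reproduced here, since its exact parameterization is given in the paper.

      # HVL from transmission data: linear vs. semilogarithmic interpolation
      # between the two measured points bracketing T = 0.5.
      import numpy as np

      thickness = np.array([0.0, 1.0, 2.0, 4.0, 6.0])        # mm Al (hypothetical)
      transmission = np.array([1.0, 0.62, 0.42, 0.22, 0.13])

      def hvl(x, T, level=0.5, semilog=False):
          i = np.searchsorted(-T, -level)          # first point with T below level
          x0, x1, t0, t1 = x[i - 1], x[i], T[i - 1], T[i]
          if semilog:                              # interpolate ln(T) linearly
              return x0 + np.log(level / t0) * (x1 - x0) / np.log(t1 / t0)
          return x0 + (level - t0) * (x1 - x0) / (t1 - t0)

      print("HVL, linear:  %.3f mm Al" % hvl(thickness, transmission))
      print("HVL, semilog: %.3f mm Al" % hvl(thickness, transmission, semilog=True))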

  5. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen

    2011-08-15

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  6. Direct and Remote Effects of Topography and Orientation, and the Dynamics of Mesoscale Eddies

    DTIC Science & Technology

    2017-09-01

    [Indexed snippet only; the record consists of front-matter residue.] Recoverable content: a list-of-figures entry ("Figure 20. GRB with 3-D 3300-meter and Quasi-Geostrophic Comparison"), abbreviation definitions (NS: Navier-Stokes equations; Sopt: calculated optimal slope; Sint: interpolated optimal slope; Qf: thermal heat flux; QG: quasi-geostrophic), and a truncated reference to field surveys such as MODE1 and POLYMODE, the latter being the largest joint U.S.–U.S.S.R. experiment of its time (Robinson 1983).

  7. Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator

    NASA Astrophysics Data System (ADS)

    Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei

    2017-09-01

    This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator dedicated to large-scale, high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm, and a compliant microgripper is considered, and both flexure hinges and flexible beams are modeled. By combining the pseudo-rigid-body-model method, the assumed-mode method, and the Lagrange equation, the overall dynamic model is derived. The rigid-flexible coupling characteristics are then analyzed by numerical simulation. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function that reflects the comprehensive excitation torque of the compliant microgripper is proposed. The reference curve and the interpolation curve use quintic polynomial trajectories (a minimal quintic profile is sketched below). An improved genetic algorithm is then used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Operation efficiency and manipulation stability are therefore significantly improved.
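
    A minimal sketch of the quintic point-to-point profile underlying the reference and interpolation curves; zero boundary velocity and acceleration are the standard choice assumed here, and the move parameters are illustrative. The paper's genetic algorithm tunes trajectory parameters against the microgripper excitation torque, which is not reproduced.

      # Quintic (minimum-jerk) trajectory with rest-to-rest boundary conditions.
      import numpy as np

      def quintic(q0, qf, tf, n=101):
          t = np.linspace(0.0, tf, n)
          s = t / tf
          # q(t) = q0 + (qf - q0)(10 s^3 - 15 s^4 + 6 s^5), zero end vel./acc.
          q = q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
          return t, q

      t, q = quintic(q0=0.0, qf=np.pi / 2, tf=0.5)   # 90-degree swing in 0.5 s
      print("end position: %.4f rad, peak velocity: %.2f rad/s"
            % (q[-1], np.max(np.gradient(q, t))))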

  8. Sibsonian and non-Sibsonian natural neighbour interpolation of the total electron content value

    NASA Astrophysics Data System (ADS)

    Kotulak, Kacper; Froń, Adam; Krankowski, Andrzej; Pulido, German Olivares; Hernandez-Pajares, Manuel

    2017-03-01

    In radio astronomy, interferometric measurement between radio telescopes located relatively close to each other helps remove ionospheric effects. Unfortunately, for networks such as the LOw Frequency ARray (LOFAR), with long baselines (currently up to 1500 km), interferometric methods fail to provide sufficiently accurate ionospheric delay corrections. In practice, this means that systems such as LOFAR need external ionosphere information coming from Global or Regional Ionospheric Maps (GIMs or RIMs, respectively). Thanks to technology based on Global Navigation Satellite Systems (GNSS), the scientific community has ionosphere soundings available virtually worldwide. In this paper we compare several interpolation methods for RIM computation based on scattered Vertical Total Electron Content measurements located on one thin ionospheric layer (Ionospheric Pierce Points, IPPs). The results of this work show that methods that take into account the topology of the data distribution (e.g., natural neighbour interpolation) perform better than those based on geometric computation only (e.g., distance-weighted methods); the sketch below illustrates this contrast.
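
    SciPy offers no natural-neighbour interpolant, so as a rough stand-in the sketch below contrasts a topology-aware method (Delaunay-based linear interpolation via griddata) with purely geometric IDW on synthetic VTEC-like data; all coordinates and values are made up.

      # Topology-aware (Delaunay) vs. purely geometric (IDW) interpolation.
      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(0)
      ipp = rng.uniform(0, 40, size=(200, 2))        # pierce-point lon/lat (deg)
      vtec = 10 + 0.3 * ipp[:, 0] + 2 * np.sin(ipp[:, 1] / 5)   # TECU (synthetic)

      gx, gy = np.meshgrid(np.linspace(1, 39, 50), np.linspace(1, 39, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      tri = griddata(ipp, vtec, grid, method="linear")          # Delaunay-based

      d = np.linalg.norm(grid[:, None, :] - ipp[None, :, :], axis=2)
      w = 1.0 / np.maximum(d, 1e-9) ** 2
      idw = (w @ vtec) / w.sum(axis=1)                          # geometric only

      truth = 10 + 0.3 * grid[:, 0] + 2 * np.sin(grid[:, 1] / 5)
      for name, est in [("Delaunay linear", tri), ("IDW", idw)]:
          ok = ~np.isnan(est)                                   # inside convex hull
          print(name, "RMSE: %.3f TECU" % np.sqrt(np.mean((est[ok] - truth[ok]) ** 2)))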

  9. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    PubMed

    Xun-Ping, W; An, Z

    2017-07-27

    Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency, and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study was performed in a 50 m × 50 m plot in a marshland in the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; second, the required number of optimal sampling points in each layer was calculated through the Hammond-McCullagh equation; third, each sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results: The method proposed in this study (SOPA) had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.

  10. Uncertainty of streamwater solute fluxes in five contrasting headwater catchments including model uncertainty and natural variability (Invited)

    NASA Astrophysics Data System (ADS)

    Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.

    2013-12-01

    There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty depending on sampling frequency and on how concentrations are assigned for the periods between samples. Gaps between samples can be filled by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach (sketched below), (2) a regression-model method, and (3) the composite method, which combines the regression-model method for estimating concentrations with the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be those derived using the composite method at the highest sampling frequency. We also evaluated the importance of sampling frequency and estimation method for flux estimate uncertainty; flux uncertainty depended on the variability characteristics of each solute and varied for different reporting periods (e.g., the 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches depended on the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in the uncertainty of stream flux estimates for solutes with particular variability characteristics. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
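
    A toy version of estimation method (1): concentrations from discrete samples are linearly interpolated onto the continuous discharge record, multiplied, and summed over time. Units, the discharge hydrograph, and the sample schedule are illustrative assumptions.

      # Solute flux by linear interpolation of discrete concentration samples.
      import numpy as np

      t_q = np.arange(30)                                 # daily time steps (days)
      q = 2.0 + 1.5 * np.exp(-0.5 * (t_q - 12) ** 2 / 9)  # discharge (m3/s), storm ~day 12
      t_c = np.array([0, 7, 14, 21, 28])                  # weekly sample times (days)
      c = np.array([1.2, 1.0, 2.4, 1.5, 1.1])             # concentrations (mg/L)

      c_daily = np.interp(t_q, t_c, c)                    # linear interpolation
      # 1 mg/L * 1 m3/s = 1 g/s; times 86400 s/day, summed over days, in kg.
      flux_kg = (c_daily * q).sum() * 86400 / 1000
      print("monthly solute flux: %.1f kg" % flux_kg)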

  11. General MoM Solutions for Large Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B; Capolino, F; Wilton, D R

    2003-07-22

    This paper focuses on a numerical procedure that addresses the difficulties of dealing with large, finite arrays while preserving the generality and robustness of full-wave methods. We present a fast method based on approximating interactions between sufficiently separated array elements via a relatively coarse interpolation of the Green's function on a uniform grid commensurate with the array's periodicity. The interaction between the basis and testing functions is reduced to a three-stage process. The first stage is a projection of standard (e.g., RWG) subdomain bases onto a set of interpolation functions that interpolate the Green's function on the array face. This projection, which is used in a matrix/vector product for each array cell in an iterative solution process, need only be carried out once for a single cell and results in a low-rank matrix. An intermediate-stage matrix/vector product computation involving the uniformly sampled Green's function is of convolutional form in the lateral (transverse) directions, so that a 2D FFT may be used. The final stage is a third matrix/vector product computation involving a matrix resulting from projecting testing functions onto the Green's function interpolation functions; the low-rank matrix is either identical to (using Galerkin's method) or similar to that of the bases projection. An effective MoM solution scheme is developed for large arrays using a modification of the Adaptive Integral Method (AIM). The method permits the analysis of arrays with arbitrary contours and nonplanar elements. Both fill and solve times within the MoM method are improved with respect to more standard MoM solvers.
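
    A tiny illustration of the intermediate stage: when the coarse-grid interactions depend only on the cell offset (translation invariance), the matrix/vector product is a 2-D convolution that an FFT evaluates in O(N log N). The Green's-function samples below are a synthetic, regularized 1/r stand-in.

      # Convolutional matrix/vector product via 2-D FFT (sketch).
      import numpy as np
      from scipy.signal import fftconvolve

      nx, ny = 16, 16
      rng = np.random.default_rng(6)
      q = rng.standard_normal((nx, ny))                   # cell excitations

      ox = np.arange(-nx + 1, nx)[:, None]                # all possible offsets
      oy = np.arange(-ny + 1, ny)[None, :]
      g = 1.0 / np.sqrt(ox**2 + oy**2 + 0.25)             # regularized 1/r kernel

      v_fft = fftconvolve(q, g, mode="valid")             # fast product, 16 x 16

      i, j = 3, 11                                        # direct check, one cell
      v_ij = sum(g[i - k + nx - 1, j - l + ny - 1] * q[k, l]
                 for k in range(nx) for l in range(ny))
      print("deviation at (3,11): %.2e" % abs(v_fft[i, j] - v_ij))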

  12. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

    The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that needs no rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out (a one-dimensional sketch of this gap filling is given below). The present paper analyzes linear interpolation, cubic splines, and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy of the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert angle, and the mean relative error; the spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most recommendable method: the reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF at all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
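
    A one-dimensional sketch of the gap-filling step: empty cells along a synthetic sinogram row are filled by linear and by cubic-spline interpolation, with the spline recovering the smooth signal markedly better. The signal and the gap pattern are stand-ins for real Radon data.

      # Filling empty sinogram cells: linear vs. cubic-spline interpolation.
      import numpy as np
      from scipy.interpolate import CubicSpline

      s = np.linspace(0, 2 * np.pi, 64)
      row = np.sin(3 * s) + 0.5 * np.cos(s)        # one sinogram row (synthetic)

      empty = np.zeros_like(s, dtype=bool)
      empty[5::7] = True                           # periodic pattern of empty cells
      ks, kv = s[~empty], row[~empty]              # known cells

      lin = np.interp(s[empty], ks, kv)
      spl = CubicSpline(ks, kv)(s[empty])
      for name, est in [("linear", lin), ("cubic spline", spl)]:
          print(name, "max abs error: %.2e" % np.max(np.abs(est - row[empty])))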

  13. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.

  14. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    NASA Astrophysics Data System (ADS)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time, and the degree of parallelization of the computations. Three interpolation methods were studied, formalized, and adapted to image scaling. The result of the work is a program for scaling images by the different methods, together with a comparison of the scaling quality they achieve. A structural sketch of such a parallel scaler appears below.
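
    A compact sketch of the experiment's structure, assuming nearest-neighbour scaling with output rows split into bands across worker threads; actual speedups depend on the image size, the interpreter's GIL, and the implementation language, so this shows the decomposition rather than a performance claim.

      # Parallel nearest-neighbour image scaling over horizontal bands.
      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def nn_scale_band(img, rows_out, sy, sx):
          """Scale the given output rows by nearest-neighbour index mapping."""
          h, w = img.shape
          ys = np.clip((rows_out / sy).astype(int), 0, h - 1)
          xs = np.clip((np.arange(int(w * sx)) / sx).astype(int), 0, w - 1)
          return img[ys][:, xs]

      img = np.arange(512 * 512, dtype=float).reshape(512, 512)
      sy = sx = 2.5
      rows = np.arange(int(img.shape[0] * sy))
      bands = np.array_split(rows, 4)                  # one band per worker
      with ThreadPoolExecutor(max_workers=4) as ex:
          parts = list(ex.map(lambda b: nn_scale_band(img, b, sy, sx), bands))
      print(np.vstack(parts).shape)                    # (1280, 1280)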

  15. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease☆

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracy, we also investigate different interpolation methods and their impact on the classification performance. In order to make solid statements about the benefit of distortion correction, we use various feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of automated diagnosis of celiac disease. This is mainly because any benefit of distortion correction depends strongly on the feature extraction method used for the classification. PMID:23981585

  16. A Comparative Study of Three Spatial Interpolation Methodologies for the Analysis of Air Pollution Concentrations in Athens, Greece

    NASA Astrophysics Data System (ADS)

    Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios

    2010-01-01

    Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology depends on the nature and quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens, Greece, are used. The Nearest Neighbor and Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between estimated and measured values are assessed for every station and pollutant using the correlation coefficient, scatter diagrams, and statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.

  17. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    NASA Astrophysics Data System (ADS)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study, combining Ensemble Optimal Interpolation (EnOI) and bias correction techniques (a bare-bones EnOI analysis step is sketched below). For downscaling temperature, the day-to-day seasonal cycle of high-resolution temperature from the NCEP Climate Forecast System Reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle, together with information from global climate models (GCMs), is used to construct a gain matrix for each calendar day, so that the relationship between large-scale and local-scale processes represented by the gain matrix changes accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. This downscaling method therefore maintains spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of its non-Gaussianity, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate the algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high-resolution details without changing large-scale characteristics, and it yields much lower absolute errors in local-scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from the GCMs are corrected with a linear method for temperatures and with distribution mapping for precipitation. The downscaled ensemble projects significant warming, with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s in Ontario, respectively. Cooling degree days and hot days will increase significantly over southern Ontario, and heating degree days and cold days will decrease significantly in northern Ontario. Annual total precipitation will increase over Ontario, and heavy precipitation events will increase as well. These results are consistent with the conclusions of many other studies in the literature.
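
    A bare-bones EnOI analysis step, shown only to make the gain-matrix construction concrete. The state, observation operator, and error statistics are synthetic; the paper builds its gain matrices per calendar day from CFSR/GCM anomaly ensembles, which this sketch does not attempt to reproduce.

      # Ensemble optimal interpolation: x_a = x_b + K (y - H x_b).
      import numpy as np

      rng = np.random.default_rng(1)
      n, m, ne = 50, 8, 30                    # state size, obs count, ensemble size

      X = rng.standard_normal((n, ne))        # static anomaly ensemble
      A = X - X.mean(axis=1, keepdims=True)
      B = A @ A.T / (ne - 1)                  # background covariance from ensemble

      H = np.zeros((m, n))                    # observe m scattered grid points
      H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
      R = 0.25 * np.eye(m)                    # observation error covariance

      x_b = rng.standard_normal(n)            # background (e.g., GCM field)
      y = H @ x_b + rng.normal(0, 0.5, m)     # synthetic observations

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
      x_a = x_b + K @ (y - H @ x_b)           # analysis (downscaled) field
      print("innovation norm before/after: %.3f / %.3f"
            % (np.linalg.norm(y - H @ x_b), np.linalg.norm(y - H @ x_a)))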

  18. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction.

    PubMed

    Rossi Espagnet, M C; Bangiyev, L; Haber, M; Block, K T; Babb, J; Ruggiero, V; Boada, F; Gonen, O; Fatterpekar, G M

    2015-08-01

    The pituitary gland is located outside the blood-brain barrier. The dynamic contrast-enhanced T1-weighted sequence is considered the gold standard for evaluating this region; however, it does not allow assessment of the intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the individual components of the pituitary gland (anterior and posterior gland and the median eminence) and areas of differential enhancement, and to optimize the study acquisition time. A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with the golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and of areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and to optimize the study acquisition time. Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement, with a lower peak of enhancement, compared with the anterior pituitary gland (P < .005). Time-optimization analysis demonstrated that 120 seconds is ideal for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. © 2015 by American Journal of Neuroradiology.

  19. Interpolated testing influences focused attention and improves integration of information during a video-recorded lecture.

    PubMed

    Jing, Helen G; Szpunar, Karl K; Schacter, Daniel L

    2016-09-01

    Although learning through a computer interface has become increasingly common, little is known about how to best structure video-recorded lectures to optimize learning. In 2 experiments, we examine changes in focused attention and the ability for students to integrate knowledge learned during a 40-min video-recorded lecture. In Experiment 1, we demonstrate that interpolating a lecture with memory tests (tested group), compared to studying the lecture material for the same amount of time (restudy group), improves overall learning and boosts integration of related information learned both within individual lecture segments and across the entire lecture. Although mind wandering rates between the tested and restudy groups did not differ, mind wandering was more detrimental for final test performance in the restudy group than in the tested group. In Experiment 2, we replicate the findings of Experiment 1, and additionally show that interpolated tests influence the types of thoughts that participants report during the lecture. While the tested group reported more lecture-related thoughts, the restudy group reported more lecture-unrelated thoughts; furthermore, lecture-related thoughts were positively related to final test performance, whereas lecture-unrelated thoughts were negatively related to final test performance. Implications for the use of interpolated testing in video-recorded lectures are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, this paper puts forward a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and constant voltage tracking. First, the maximum power point is searched with the P&O algorithm and the quadratic interpolation method (a minimal P&O step is sketched below); then the AETEG is forced to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm and only the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the method suppresses the voltage fluctuation seen with the P&O algorithm alone and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
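
    A minimal P&O step of the kind combined above with quadratic interpolation and constant voltage tracking; the TEG power curve is a synthetic open-circuit-voltage/internal-resistance model, and the step size and iteration count are arbitrary.

      # Perturb-and-observe hill climbing on a synthetic TEG power curve.
      def teg_power(v, v_oc=12.0, r_int=1.5):
          return v * (v_oc - v) / r_int        # maximum at v = v_oc / 2 = 6 V

      v, dv, p_prev = 3.0, 0.2, 0.0
      for _ in range(60):
          p = teg_power(v)
          if p < p_prev:                       # power dropped: reverse direction
              dv = -dv
          v, p_prev = v + dv, p
      print("P&O settles near v = %.2f V (true MPP at 6.00 V)" % v)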

  1. Hybrid Quantum Mechanics/Molecular Mechanics Solvation Scheme for Computing Free Energies of Reactions at Metal-Water Interfaces.

    PubMed

    Faheem, Muhammad; Heyden, Andreas

    2014-08-12

    We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of the QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed from a number of interpolated states, and the free energy difference between adjacent states is calculated using the QM/MM-FEP method (the underlying perturbation identity is recalled below). By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in doubly dehydrogenated ethylene glycol on a Pt(111) model surface.
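
    The free energy difference between adjacent interpolated states follows the standard Zwanzig free-energy-perturbation identity; the QM/MM-specific partitioning of the potential is detailed in the paper, so only the generic form is recalled here:

      \Delta F_{i \to i+1} = -k_B T \, \ln \left\langle \exp\!\left( -\frac{U_{i+1} - U_i}{k_B T} \right) \right\rangle_i

    where \langle \cdot \rangle_i denotes an ensemble average over configurations sampled from state i, and U_i is the potential energy of state i.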

  2. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
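
    A compact sketch of replacing table lookup (double interpolation) with a least-squares fit in orthogonal Chebyshev polynomials; the tabulated surface below is synthetic, and the polynomial degree is an arbitrary choice.

      # Least-squares surface fit in Chebyshev polynomials, then direct evaluation.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      rng = np.random.default_rng(2)
      x = rng.uniform(-1, 1, 400)                  # scaled state variable 1
      y = rng.uniform(-1, 1, 400)                  # scaled state variable 2
      z = np.exp(0.5 * x) * (1 + 0.3 * y**2)       # tabulated property (synthetic)

      deg = (4, 4)
      V = C.chebvander2d(x, y, deg)                # design matrix of T_i(x) T_j(y)
      coef, *_ = np.linalg.lstsq(V, z, rcond=None)

      # Evaluate the fitted surface anywhere; no interpolation subroutine needed.
      zq = C.chebval2d(0.3, -0.5, coef.reshape(deg[0] + 1, deg[1] + 1))
      print("fit: %.5f   true: %.5f" % (zq, np.exp(0.15) * 1.075))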

  3. UltraColor: a new gamut-mapping strategy

    NASA Astrophysics Data System (ADS)

    Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.

    1995-04-01

    Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes but may give inferior results for business graphics images because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics images may distort the color reproduction of skin tones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in various regions of color space, while providing a mechanism for smooth transitions between the different regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function; (2) the color mapping for the remainder of the color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers, including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.

  4. Improved interpolation of meteorological forcings for hydrologic applications in a Swiss Alpine region

    NASA Astrophysics Data System (ADS)

    Tobin, Cara; Nicotina, Ludovico; Parlange, Marc B.; Berne, Alexis; Rinaldo, Andrea

    2011-04-01

    Summary: This paper presents a comparative study on the mapping of temperature and precipitation fields in complex Alpine terrain. Its relevance hinges on the major impact that inadequate interpolation of meteorological forcings has on the accuracy of hydrologic predictions, regardless of the specifics of the model, particularly during flood events. Three flood events measured in the Swiss Alps are analyzed in detail to determine the interpolation methods that best capture the distribution of intense, orographically induced precipitation. The interpolation techniques examined are: Inverse Distance Weighting (IDW), Ordinary Kriging (OK), and Kriging with External Drift (KED). The geostatistical methods rely on a robust anisotropic variogram for the definition of the spatial rainfall structure. Results indicate that IDW tends to significantly underestimate rainfall volumes, whereas the OK and KED methods capture the spatial patterns and rainfall volumes induced by storm advection. Using numerical weather forecasts and elevation data as covariates for precipitation, we provide evidence that KED outperforms the other methods. Most significantly, the use of elevation as auxiliary information in KED of temperatures yields minimal errors in estimated instantaneous rainfall volumes and provides instantaneous lapse rates that better capture snow/rainfall partitioning. Incorporating the temperature and precipitation input fields into a hydrological model used for operational management was found to provide vastly improved outputs with respect to measured discharge volumes and flood peaks, with notable implications for flood modeling.

  5. [Downscaling research of spatial distribution of incidence of hand foot and mouth disease based on area-to-area Poisson Kriging method].

    PubMed

    Wang, J X; Hu, M G; Yu, S C; Xiao, G X

    2017-09-10

    Objective: To understand the spatial distribution of the incidence of hand, foot and mouth disease (HFMD) at township scale and provide evidence for better prevention and control of HFMD and allocation of medical resources. Methods: Incidence data of HFMD in 108 counties (districts) of Shandong province in 2010 were collected. Downscaling interpolation was conducted using the area-to-area Poisson kriging method, and the interpolation results were visualized using a geographic information system (GIS). County (district) incidence was interpolated to township incidence to obtain the spatial distribution of incidence at township scale. Results: In the downscaling interpolation, the range of the fitted semivariogram was 20.38 km; within this range, incidences were spatially correlated. The fitted function of the scatter diagram of estimated versus actual county-level HFMD incidence was y = 1.0531x, R² = 0.99, and the incidences at the two scales were consistent. Conclusions: The incidence of HFMD shows spatial autocorrelation within 20.38 km. When HFMD occurs in one place, it is necessary to strengthen surveillance and the allocation of medical resources in the surrounding area within 20.38 km. Downscaling based on the area-to-area Poisson kriging method can be used for the spatial visualization of HFMD incidence.

  6. Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations

    NASA Astrophysics Data System (ADS)

    Loseille, A.; Dervieux, A.; Alauzet, F.

    2010-04-01

    This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error and is generally interpreted as a local geometric error estimate. The latter, by contrast, is preferred when studying approximation errors for PDEs and generally involves non-local error contributions. Consequently, a full and strong coupling between the two is hard to achieve because of this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations, which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimate is derived; it involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional (written schematically below). Third, rewritten in the continuous mesh framework, the estimate is minimized over the set of continuous meshes by a calculus of variations. The optimal continuous mesh is then derived analytically and can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic and intrinsically anisotropic, and it does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and demonstrate its efficiency.
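
    Schematically, and up to boundary terms and sign conventions that the paper treats carefully, the goal-oriented estimate described above weights the interpolation error of the Euler fluxes by the adjoint gradient:

      \delta J \;\approx\; \int_{\Omega} \nabla W^{*} \cdot \big( \mathcal{F}(W) - \Pi_h \mathcal{F}(W) \big) \, d\Omega

    where W^{*} is the adjoint state associated with the observed functional J, \mathcal{F}(W) denotes the Euler fluxes, and \Pi_h is the interpolation operator on the current mesh. This is a paraphrase of the estimate's structure, not the paper's exact statement.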

  7. Hierarchial parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of the multi-disciplinary computation of fluid-solid simulations employing finite element and finite difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  8. Nonlinear feedback control for high alpha flight

    NASA Technical Reports Server (NTRS)

    Stalford, Harold

    1990-01-01

    Analytical aerodynamic models are derived from a high-alpha 6-DOF wind tunnel model. One detailed model requires some interpolation between nonlinear functions of alpha. Another analytical model requires no interpolation and as such is a completely continuous model. Flight path optimization is conducted for the basic maneuvers: half-loop, 90-degree pitch-up, and level turn. The optimal control analysis uses the derived analytical model in the equations of motion and is based on both moment and force equations. The maximum principle solution for the half-loop is a poststall trajectory performing the half-loop in 13.6 seconds. The agility provided by thrust vectoring capability had minimal effect on reducing the maneuver time. By means of thrust vectoring control, the 90-degree pitch-up maneuver can be executed in a small space over a short time interval; the agility afforded by thrust vectoring is thus quite beneficial for pitch-up maneuvers. The level turn results are currently based only on outer-layer solutions of singular perturbation theory. Poststall solutions provide high turn rates but generate greater energy losses than classical sustained-turn solutions.

  9. Deciding Optimal Noise Monitoring Sites with Matrix Gray Absolute Relation Degree Theory

    NASA Astrophysics Data System (ADS)

    Gao, Zhihua; Li, Yadan; Zhao, Limin; Wang, Shuangwei

    2015-08-01

    Noise maps are used to assess noise levels in cities around the world. There are two main ways of producing noise maps: one is theoretical simulation based on the surrounding conditions, such as traffic flow and building distribution; the other is calculating noise levels from actual measurement data collected by noise monitors. The current literature focuses mainly on including more factors that affect sound propagation in theoretical simulations, and on interpolation methods for producing noise maps from noise measurements. Although many factors are considered during simulation, noise maps still have to be calibrated against actual noise measurements. Therefore, how noise data are obtained matters for both producing and calibrating a noise map. However, there is little literature on rules for choosing monitoring sites when only a specified number of noise sensors can be placed, or on the deviation of a noise map produced from their data. In this work, using matrix Gray Absolute Relation Degree Theory, we calculated the relation degrees between the most precise noise surface and surfaces interpolated from different combinations of a specified number of noise data. We found that surfaces plotted from different combinations of noise data yielded different relation degrees with the most precise surface. We then identified the least significant datum among the total and calculated the corresponding deviation when it was excluded from the noise surface. Processing the remaining noise data in the same way, we singled out the least significant data one by one. With this method, we optimized the distribution of noise sensors in an area of about 2 km². We also calculated the bias of surfaces with the least significant data removed. Our practice provides a workable solution for the situation faced by most governments, in which only a limited financial budget is available for noise monitoring, especially in undeveloped regions.

  10. Hole filling with oriented sticks in ultrasound volume reconstruction

    PubMed Central

    Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2015-01-01

    Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods. PMID:26839907

  11. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed, where combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that combining results yields better concentration predictions for all components.

  12. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

    PubMed

    Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

    2014-12-01

    Quantitative computer tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computer tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
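
    A toy version of the interpolation technique: per-slice areas known on a subset of CT images are linearly interpolated across all slice positions before summing to a volume. The slice count, spacing, and area profile are synthetic stand-ins for the segmented lung data.

      # Lung volume from a subset of CT slices via linear interpolation.
      import numpy as np

      z = np.arange(60) * 5.0                               # 60 slices, 5 mm apart
      area = 8000 * np.exp(-((z - 150) / 90) ** 2)          # lung area per slice (mm2)
      vol_full = (area * 5.0).sum() / 1e6                   # reference volume (L)

      idx = np.linspace(0, 59, 12).round().astype(int)      # 12 selected images
      vol_12 = (np.interp(z, z[idx], area[idx]) * 5.0).sum() / 1e6
      print("full: %.3f L   from 12 slices: %.3f L" % (vol_full, vol_12))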

  13. How to design a cartographic continuum to help users to navigate between two topographic styles?

    NASA Astrophysics Data System (ADS)

    Ory, Jérémie; Touya, Guillaume; Hoarau, Charlotte; Christophe, Sidonie

    2018-05-01

    Geoportals and geovisualization tools provide to users various cartographic abstractions that describe differently a geographical space. Our purpose is to be able to design cartographic continuums, i.e. a set of in-between maps allowing users to navigate between two topographic styles. This paper addresses the problem of the interpolation between two topographic abstractions with different styles. We detail our approach in two steps. Firstly, we setup a comparison in order to identify which structural elements of a cartographic abstraction should be interpolated. Secondly, we propose an approach based on two design methods for maps interpolation.

  14. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, the traditional point-based strategy for evaluating the performance of these methods has remained unchanged, and it can produce unreasonable mapping results. To address this challenge, this study employs 'information entropy', an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of the LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective (a short entropy sketch is given below). The point-based validation reveals significant differences between LUR and OK at different test sites despite similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibit more detailed variation than those interpolated by the OK method (information entropy, 7.79 vs. 3.63). The results suggest that LUR modeling better resolves the spatial distribution of PM2.5 concentrations than OK interpolation. The significance of this study lies primarily in promoting the integration of point- and area-based statistics for evaluating model performance in air pollution mapping. PMID:25731103
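
    A short sketch of the area-based statistic: Shannon entropy of the mapped surface, computed from a histogram of mapped values over a common range. The two synthetic surfaces, the bin count, and the value range are assumptions; a surface with more spatial detail spreads over more bins and scores higher.

      # Information entropy of a mapped concentration surface (bits).
      import numpy as np

      def map_entropy(surface, bins=64, value_range=(7.0, 14.0)):
          p, _ = np.histogram(surface, bins=bins, range=value_range)
          p = p[p > 0] / p.sum()
          return -(p * np.log2(p)).sum()

      gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
      detailed = 10 + 3 * np.sin(8 * gx) * np.cos(6 * gy)   # LUR-like surface
      smooth = 10 + 0.5 * gx + 0.3 * gy                     # OK-like surface
      print("entropy, detailed: %.2f bits" % map_entropy(detailed))
      print("entropy, smooth:   %.2f bits" % map_entropy(smooth))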

  15. Design Process for High Speed Civil Transport Aircraft Improved by Neural Network and Regression Methods

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.

    1998-01-01

    A key challenge in designing the new High Speed Civil Transport (HSCT) aircraft is determining a good match between the airframe and engine. Multidisciplinary design optimization can be used to solve the problem by adjusting parameters of both the engine and the airframe. Earlier, an example problem was presented of an HSCT aircraft with four mixed-flow turbofan engines and a baseline mission to carry 305 passengers 5000 nautical miles at a cruise speed of Mach 2.4. The problem was solved by coupling NASA Lewis Research Center's design optimization testbed (COMETBOARDS) with NASA Langley Research Center's Flight Optimization System (FLOPS). The computing time expended in solving the problem was substantial, and the instability of the FLOPS analyzer at certain design points caused difficulties. In an attempt to alleviate both of these limitations, we explored the use of two approximation concepts in the design optimization process. The two concepts, which are based on neural network and linear regression approximation, provide the reanalysis capability and design sensitivity analysis information required for the optimization process. The HSCT aircraft optimization problem was solved by using three alternate approaches; that is, the original FLOPS analyzer and two approximate (derived) analyzers. The approximate analyzers were calibrated and used in three different ranges of the design variables; narrow (interpolated), standard, and wide (extrapolated).

  16. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  17. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).

  18. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method based on the rotation of the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Due to the rotation of the angular spectrum, the sampling points in the Fourier domain are not evenly spaced. For conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to approximate the sampling values on equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between tilted planes is transformed into a discrete Fourier transform on unevenly spaced sampling points, which can be evaluated effectively and precisely with the nonuniform fast Fourier transform (NUFFT); the direct sum it evaluates is spelled out below. The most important advantage of this method is that the conventional spectrum interpolation is avoided, and high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Its calculation efficiency is also comparable with that of conventional FFT-based methods. Numerical examples, as well as a discussion of the calculation accuracy and the sampling method, are presented.
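
    To make explicit what the NUFFT evaluates, the type-1 nonuniform discrete transform is written out below as a brute-force sum (O(MN), reference only); a real implementation would call an NUFFT library instead. Sample locations and strengths are synthetic.

      # Type-1 nonuniform DFT, evaluated directly for reference.
      import numpy as np

      rng = np.random.default_rng(3)
      M, N = 300, 64
      x = np.sort(rng.uniform(-np.pi, np.pi, M))     # nonuniform sample locations
      c = rng.standard_normal(M) + 1j * rng.standard_normal(M)

      k = np.arange(-N // 2, N // 2)                 # uniform output frequencies
      F = np.exp(-1j * np.outer(k, x)) @ c           # f_k = sum_j c_j e^{-i k x_j}
      print("f_0 =", F[N // 2])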

  19. Comparison of sEMG processing methods during whole-body vibration exercise.

    PubMed

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
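
    A sketch of the spectral linear-interpolation idea: magnitude bins around the vibration frequency and its harmonics are replaced by values interpolated from the neighbouring bins, keeping the phase. The signal model, bandwidth, and number of harmonics are illustrative assumptions.

      # Deleting vibration spikes from an sEMG spectrum by linear interpolation.
      import numpy as np

      fs, dur, f_vib = 2048, 4.0, 30.0               # Hz, s, vibration frequency
      t = np.arange(int(fs * dur)) / fs
      rng = np.random.default_rng(4)
      semg = rng.standard_normal(t.size)             # broadband muscle activity
      for h in (1, 2, 3, 4):                         # motion artifacts at harmonics
          semg += 2.0 * np.sin(2 * np.pi * h * f_vib * t)

      spec = np.fft.rfft(semg)
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for h in (1, 2, 3, 4):
          band = np.abs(freqs - h * f_vib) <= 1.0    # +/- 1 Hz around each spike
          mag, phase = np.abs(spec), np.angle(spec)
          mag[band] = np.interp(freqs[band], freqs[~band], mag[~band])
          spec = mag * np.exp(1j * phase)            # interpolated magnitude, old phase

      clean = np.fft.irfft(spec, n=t.size)
      print("std raw: %.2f   std cleaned: %.2f" % (semg.std(), clean.std()))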

  20. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.

    2005-02-01

    A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method has been tested for two cases, the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and relatively easy to implement.
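
    A toy version of the fitting step only: a feedforward network with early stopping interpolating sampled energies, with a one-dimensional Morse-like curve standing in for the high-dimensional ab initio surfaces (and scikit-learn standing in for the paper's network code).

      # Feedforward NN with early stopping fitted to sampled energies.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)
      r = rng.uniform(0.8, 3.0, 600)[:, None]            # sampled bond lengths
      E = (1 - np.exp(-1.5 * (r.ravel() - 1.2))) ** 2    # Morse-like energy

      net = MLPRegressor(hidden_layer_sizes=(40, 40), early_stopping=True,
                         max_iter=5000, random_state=0).fit(r, E)

      r_test = np.linspace(0.9, 2.8, 5)[:, None]
      print("NN:  ", net.predict(r_test).round(3))
      print("true:", ((1 - np.exp(-1.5 * (r_test.ravel() - 1.2))) ** 2).round(3))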
