Application of Design Methodologies for Feedback Compensation Associated with Linear Systems
NASA Technical Reports Server (NTRS)
Smith, Monty J.
1996-01-01
The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well-behaved closed loop system in terms of stability and robustness (internal signals remain bounded with a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology have been included to show the effectiveness of these techniques. In particular, the mixed H2/H∞ performance criterion and algorithm have been used on several examples, including an F-18 HARV (High Angle of Attack Research Vehicle), for sensitivity performance.
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing ...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...congruences), and linear Diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
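The linear-algebra view described above can be made concrete with a short sketch (toy data; the monomial basis is one possible choice of basis, yielding the classical Vandermonde system):

```python
import numpy as np

# Interpolation viewed as linear algebra: find coefficients c of
# p(x) = c0 + c1*x + ... + cn*x^n such that p(x_i) = y_i.
# The monomial basis turns this into the Vandermonde system V c = y;
# a different basis (e.g. Lagrange) changes the matrix, not the interpolant.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x, increasing=True)   # columns: 1, x, x^2, x^3
c = np.linalg.solve(V, y)           # unique solution: the x_i are distinct

# Evaluating the solved polynomial reproduces the data exactly.
p = np.polynomial.polynomial.polyval(x, c)
```
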
Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.
2014-01-01
Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.
2016-01-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478
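A minimal sketch of the underlying idea, using toy signals and hypothetical anchor positions rather than the authors' detection algorithm: isoelectric samples (assumed here to carry only the baseline) are linearly interpolated to estimate and subtract the drift.

```python
import numpy as np

# Toy ECG: a fast "cardiac" component plus slow 0.1 Hz baseline wander.
fs = 250                                    # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 5 * t)             # stand-in for the ECG itself
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # baseline wander
signal = ecg + drift

# Hypothetical non-uniform "isoelectric" anchors (ecg is ~0 at these samples,
# so the sampled values are essentially pure baseline).
anchors = np.arange(0, 2500, 225)

# Piecewise-linear baseline estimate through the anchors, then subtraction.
baseline_est = np.interp(np.arange(signal.size), anchors, signal[anchors])
corrected = signal - baseline_est
```
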
NASA Technical Reports Server (NTRS)
Shkarayev, S.; Krashantisa, R.; Tessler, A.
2004-01-01
An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
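The least-squares core of the inverse formulation can be sketched as follows, with a random stand-in matrix in place of the paper's finite-element strain sensitivities: the load is parameterised by a few coefficients a, strains depend linearly on them (ε = S a), and a is recovered from measured strains.

```python
import numpy as np

# Hedged sketch of inverse load recovery. S is a random stand-in for the
# finite-element sensitivity of gauge strains to load parameters, not the
# paper's model.
rng = np.random.default_rng(0)
a_true = np.array([2.0, -1.0, 0.5])          # "true" load parameters
S = rng.normal(size=(40, 3))                 # 40 strain gauges, 3 parameters
eps_meas = S @ a_true + 0.01 * rng.normal(size=40)   # noisy measurements

# Least-squares minimisation of calculated vs measured strains.
a_est, *_ = np.linalg.lstsq(S, eps_meas, rcond=None)
```
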
NASA Astrophysics Data System (ADS)
Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord
2017-04-01
This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
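The L2-optimal linear-combination idea can be illustrated on toy data (a Gaussian stand-in for a Doppler-broadened quantity, not a real cross section): the target curve at temperature T is projected onto curves at reference temperatures, with coefficients given by a least-squares solve.

```python
import numpy as np

# Approximate q(E, T) as sum_j c_j(T) * q(E, T_j), with c_j the
# L2-optimal coefficients from a linear least-squares system.
E = np.linspace(0.1, 10.0, 400)

def q(E, T):
    # Toy stand-in: a peak that broadens with temperature.
    return np.exp(-(E - 5.0) ** 2 / (0.5 + 0.01 * T))

T_refs = [300.0, 1200.0, 3000.0]
A = np.column_stack([q(E, Tj) for Tj in T_refs])   # reference-curve basis

T = 900.0
c, *_ = np.linalg.lstsq(A, q(E, T), rcond=None)    # L2-optimal coefficients
q_interp = A @ c
```

By construction the L2 residual of the combination can never exceed that of the best single reference curve.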
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume.
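How the interpolation kernel enters MIP construction can be sketched in 1D (a toy intensity profile stands in for rays through a 3D volume): the ray is sampled at fractional positions with a chosen kernel and the maximum sample is projected.

```python
import numpy as np

# Toy "vessel" cross-section sampled along one ray; the MIP value is the
# maximum of the interpolated samples, so the kernel choice shapes it.
profile = np.array([0.0, 0.2, 1.0, 0.3, 0.0])
positions = np.linspace(0.0, 4.0, 33)            # fractional ray samples

def nearest(p):
    # Nearest-neighbor kernel: snap to the closest voxel.
    return profile[int(round(p))]

def linear(p):
    # Linear kernel: blend the two neighboring voxels.
    i = int(np.floor(p))
    f = p - i
    i1 = min(i + 1, profile.size - 1)
    return (1 - f) * profile[i] + f * profile[i1]

mip_nearest = max(nearest(p) for p in positions)
mip_linear = max(linear(p) for p in positions)
```
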
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
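The correspondence-based idea can be sketched in 1D, with a known shift standing in for the estimated optical-flow field: averaging intensities at corresponding points preserves the moving structure, while direct linear blending of the two frames duplicates and attenuates it.

```python
import numpy as np

# Two "slices" containing the same feature at different positions.
x = np.linspace(0.0, 1.0, 201)
frame0 = np.exp(-((x - 0.30) ** 2) / 0.002)   # feature centred at 0.30
frame1 = np.exp(-((x - 0.50) ** 2) / 0.002)   # same feature moved to 0.50

shift = 0.20                                   # assumed known "velocity"
# Warp each frame halfway along the correspondence, then average intensities.
half0 = np.interp(x - shift / 2, x, frame0)    # frame0 moved forward
half1 = np.interp(x + shift / 2, x, frame1)    # frame1 moved backward
flow_interp = 0.5 * (half0 + half1)            # structure-preserving result

blend = 0.5 * (frame0 + frame1)                # naive linear interpolation
```

The flow-based interpolant keeps one full-height feature at the intermediate position, whereas the naive blend produces two half-height copies.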
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in an MHD equation, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed stencil approximation without computing the non-linear WENO weights is used in the smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solutions on the corresponding fine grid.
Tri-linear interpolation-based cerebral white matter fiber imaging
Jiang, Shan; Zhang, Pengfei; Han, Tong; Liu, Weihua; Liu, Meixia
2013-01-01
Diffusion tensor imaging is a unique method to visualize white matter fibers three-dimensionally, non-invasively and in vivo, and therefore it is an important tool for observing and researching neural regeneration. Different diffusion tensor imaging-based fiber tracking methods have already been investigated, but faster computation, longer and smoother fiber tracking, and clearer display of details are needed for clinical applications. This study proposed a new fiber tracking strategy based on tri-linear interpolation. We selected a patient with acute infarction of the right basal ganglia and designed experiments based on either the tri-linear interpolation algorithm or the tensorline algorithm. Fiber tracking in the same regions of interest (genu of the corpus callosum) was performed separately. The validity of the tri-linear interpolation algorithm was verified by quantitative analysis, and its feasibility in clinical diagnosis was confirmed by the agreement between the tracking results and the disease condition of the patient as well as the actual brain anatomy. Statistical results showed that the maximum length and average length of the white matter fibers tracked by the tri-linear interpolation algorithm were significantly longer. The tracking images of the fibers indicated that this method can obtain smoother tracked fibers, more obvious orientation and clearer details. The tracked fiber abnormalities were in good agreement with the actual condition of the patient, and the tracking displayed fibers that passed through the corpus callosum, which was consistent with the anatomical structures of the brain. Therefore, the tri-linear interpolation algorithm can achieve a clear, anatomically correct and reliable tracking result. PMID:25206524
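Generic tri-linear interpolation at a fractional voxel position, the basic operation such tracking applies to the (tensor) field, shown here on a scalar grid for brevity:

```python
import numpy as np

def trilinear(vol, p):
    # Tri-linear interpolation of volume `vol` at fractional position p.
    i, j, k = (int(np.floor(coord)) for coord in p)
    fx, fy, fz = p[0] - i, p[1] - j, p[2] - k
    val = 0.0
    # Weighted sum over the 8 corners of the enclosing voxel cell.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                val += w * vol[i + di, j + dj, k + dk]
    return val

# A grid that is linear in its indices: vol[i, j, k] = 9i + 3j + k, so the
# interpolant should reproduce 9x + 3y + z exactly.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
```
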
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology used can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
Interpolation problem for the solutions of linear elasticity equations based on monogenic functions
NASA Astrophysics Data System (ADS)
Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii
2017-11-01
Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is given by the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.
Fast digital zooming system using directionally adaptive image interpolation and restoration.
Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki
2014-01-01
This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in image edge structure preservation.
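The closed-form estimator behind such weighted, l2-regularized local regression can be sketched on toy 1D data (the distance weights and regularization constant here are illustrative choices, not the paper's):

```python
import numpy as np

# Weighted (moving least squares style) ridge regression in closed form:
#   a* = (X^T W X + lam I)^(-1) X^T W y,
# fitted on local samples; toy 1D data stands in for a pixel neighbourhood.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 11)
y = 2.0 + 3.0 * x + 0.01 * rng.normal(size=x.size)   # near-linear samples

X = np.column_stack([np.ones_like(x), x])   # local linear model [1, x]
w = np.exp(-x ** 2 / 0.5)                   # distance-based weights
W = np.diag(w)
lam = 1e-3                                  # l2 complexity penalty

a = np.linalg.solve(X.T @ W @ X + lam * np.eye(2), X.T @ W @ y)
y0 = a[0]                                   # prediction at the centre x = 0
```
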
Computational aeroelasticity using a pressure-based solver
NASA Astrophysics Data System (ADS)
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall-function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economical for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer the necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and previous numerical results. The computational methodology exhibited the capability to predict both qualitative and quantitative features of aeroelasticity.
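One Newmark time step for a single degree of freedom illustrates the implicit structural time marching mentioned above (average-acceleration variant; toy mass, damping and stiffness values, not the paper's wing model):

```python
import numpy as np

# One Newmark step (beta = 1/4, gamma = 1/2, average acceleration) for
#   m*u'' + c*u' + k*u = f.
# Toy single-DOF parameters, standing in for the beam equations of motion.
m, c, k = 1.0, 0.1, 4.0
beta, gamma = 0.25, 0.5
dt = 0.01

def newmark_step(u, v, a, f_next):
    # Effective stiffness and effective load of the implicit update.
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    f_eff = (f_next
             + m * (u / (beta * dt ** 2) + v / (beta * dt)
                    + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = f_eff / k_eff
    a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
             - (1 / (2 * beta) - 1) * a)
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new

# Free, lightly damped vibration from u(0) = 1, v(0) = 0.
u, v = 1.0, 0.0
a = (0.0 - c * v - k * u) / m      # consistent initial acceleration
for _ in range(1000):
    u, v, a = newmark_step(u, v, a, 0.0)
```

The average-acceleration variant is unconditionally stable, so the response stays bounded and the physical damping slowly dissipates the initial energy.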
NASA Astrophysics Data System (ADS)
Yuval; Rimon, Y.; Graber, E. R.; Furman, A.
2013-07-01
A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of the mapping in a way that is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross-testing is used to assess and compare the performance of the interpolations.
The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli shoreline.
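A minimal sketch of inverse distance weighting with a circular inclusion zone (hypothetical well locations and concentrations): only observations within the zone contribute, and grid points with no observation inside the zone are left unassigned, which is the accuracy/coverage trade-off described above.

```python
import numpy as np

def idw(points, values, query, radius, power=2.0):
    # Inverse distance weighting restricted to a circular inclusion zone.
    d = np.linalg.norm(points - query, axis=1)
    inside = d < radius
    if not inside.any():
        return np.nan                      # no relevant observation: no value
    if np.any(d[inside] < 1e-12):          # query coincides with a well
        return float(values[inside][d[inside].argmin()])
    w = 1.0 / d[inside] ** power
    return float(np.sum(w * values[inside]) / np.sum(w))

# Hypothetical monitoring wells; zeros mark the clean part of the aquifer.
wells = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
conc = np.array([0.0, 100.0, 0.0])
```
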
Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances
NASA Astrophysics Data System (ADS)
Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.
2018-04-01
Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily in terms of normality, variance, and estimation. Hence, handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers so as to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method of treating outliers via the linear interpolation method. Treating an outlier as a missing value in the dataset allows the interpolation method to be applied to the outliers, enabling comparison of the data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced the improved time series data, displayed better forecasting results than the original time series data in both the Box-Jenkins and neural network approaches.
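The treat-outliers-as-missing idea can be sketched directly with NumPy (a simple z-score rule stands in for the paper's outlier detection):

```python
import numpy as np

# Toy series with one obvious spike; flag outliers, treat them as missing,
# and fill them by linear interpolation over the time index.
series = np.array([10.0, 11.0, 12.0, 95.0, 13.0, 14.0, 13.5, 12.8])
z = np.abs(series - np.median(series)) / series.std()
is_outlier = z > 2.0                       # flags the spike at index 3

idx = np.arange(series.size)
cleaned = series.copy()
# Interpolate outlier positions from the surrounding non-outlier samples.
cleaned[is_outlier] = np.interp(idx[is_outlier], idx[~is_outlier],
                                series[~is_outlier])
```
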
Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark
2010-03-01
The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function g_L(r) and 2D anisotropy function F(r, θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r, θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r, θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r, θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r, θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources with the same grid. Linear interpolation of g_L(r) provided differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r, θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r, θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
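The two g_L(r) schemes compared above differ only in the abscissa used for the linear fit. A sketch on the radial mesh quoted in the abstract (the g_L values in the test are made up for the demo, not tabulated source data):

```python
import numpy as np

# Radial mesh from the abstract: 0.25-1 cm fine steps, 2-8 cm integer steps, 10 cm.
r_mesh = np.array([0.25, 0.5, 0.75, 1.0, 1.5, 2, 3, 4, 5, 6, 7, 8, 10])

def interp_gl(r, r_tab, g_tab, log_r=False):
    """Interpolate g_L(r): linear-linear, or logarithmic-linear in r."""
    if log_r:
        return np.interp(np.log(r), np.log(r_tab), g_tab)
    return np.interp(r, r_tab, g_tab)
```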
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
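The bilinear head interpolation proposed above can be written in a few lines. This is the textbook formula, not code from the paper; the four values are heads at the surrounding cell centers of the larger-scale model:

```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Bilinear interpolation within the rectangle [x0,x1] x [y0,y1].

    f00, f10, f01, f11 are the head values at the four surrounding
    cell centers (x0,y0), (x1,y0), (x0,y1), (x1,y1).
    """
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty) * f00 + tx * (1 - ty) * f10
            + (1 - tx) * ty * f01 + tx * ty * f11)
```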
NASA Astrophysics Data System (ADS)
Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios
2010-01-01
Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens in Greece, are used. The Nearest Neighbor and the Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant, using the correlation coefficient, scatter diagrams and the statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
Traffic volume estimation using network interpolation techniques.
DOT National Transportation Integrated Search
2013-12-01
Kriging method is a frequently used interpolation methodology in geography, which enables estimations of unknown values at : certain places with the considerations of distances among locations. When it is used in transportation field, network distanc...
Quasi interpolation with Voronoi splines.
Mirzargar, Mahsa; Entezari, Alireza
2011-12-01
We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework.
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
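The performance metric named above, the Fourier transform of the interpolation kernel, is easy to evaluate numerically. For linear interpolation the kernel is the unit triangle ("tent") function, whose transform is sinc²; its slow side-lobe decay is the reason over-sampling is needed. A sketch (grid size and integration range are choices of this example):

```python
import numpy as np

def tent(t):
    """Kernel of linear interpolation: the unit triangle function."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def kernel_ft(kernel, f, t=None):
    """Numerical Fourier transform of an (even) interpolation kernel at frequency f."""
    if t is None:
        t = np.linspace(-1.5, 1.5, 20001)
    k = kernel(t)
    dt = t[1] - t[0]
    # The kernel is even, so only the cosine part contributes.
    return np.sum(k * np.cos(2 * np.pi * f * t)) * dt
```

Comparing such curves for piecewise-polynomial, windowed-sinc and B-spline kernels reproduces the kind of ranking the abstract reports.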
NASA Technical Reports Server (NTRS)
Dupuis, L. R.; Scoggins, J. R.
1979-01-01
Results of analyses revealed that nonlinear changes or differences formed centers or systems, that were mesosynoptic in nature. These systems correlated well in space with upper level short waves, frontal zones, and radar observed convection, and were very systematic in time and space. Many of the centers of differences were well established in the vertical, extending up to the tropopause. Statistical analysis showed that on the average nonlinear changes were larger in convective areas than nonconvective regions. Errors often exceeding 100 percent were made by assuming variables to change linearly through a 12-h period in areas of thunderstorms, indicating that these nonlinear changes are important in the development of severe weather. Linear changes, however, accounted for more and more of an observed change as the time interval (within the 12-h interpolation period) increased, implying that the accuracy of linear interpolation increased over larger time intervals.
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
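The constant-coefficient equations mentioned above are commonly written in the uniform basis-matrix form; one standard version (not taken from the article itself) is:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the uniform Catmull-Rom segment between p1 and p2, t in [0, 1].

    The curve interpolates its control points: t=0 gives p1, t=1 gives p2.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

Because the coefficients are constant, the same routine works for scalars or for 2D/3D point arrays.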
The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis
NASA Astrophysics Data System (ADS)
Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.
2011-05-01
In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to large deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required and the Radial Basis Function (RBF) used is the Multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicated that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
Design of interpolation functions for subpixel-accuracy stereo-vision systems.
Haller, Istvan; Nedevschi, Sergiu
2012-02-01
Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as closely as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolation schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolation schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
The analysis of decimation and interpolation in the linear canonical transform domain.
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The results of this study form a basis for generalizing multirate signal processing to the LCT domain and can advance filter bank theory in the LCT domain.
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
NASA Astrophysics Data System (ADS)
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, the approach still lacks facial expression based on emotional state, even though facial skin colour, which is closely related to human emotion, offers a means of improving facial expression. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are close to genuine human expressions and enhance the facial expression of the virtual human.
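The two blending schemes named above reduce to standard colour interpolation. A sketch (the blend weights standing in for an "emotion intensity" are an assumption of this example, not parameters from the paper):

```python
import numpy as np

def lerp_color(c0, c1, t):
    """Linear interpolation between two RGB colours with blend weight t in [0, 1]."""
    return (1 - t) * np.asarray(c0, dtype=float) + t * np.asarray(c1, dtype=float)

def bilerp_color(c00, c10, c01, c11, u, v):
    """Bilinear interpolation of four corner colours across a skin patch."""
    return lerp_color(lerp_color(c00, c10, u), lerp_color(c01, c11, u), v)
```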
Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
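The per-triangle step described above is linear interpolation via barycentric coordinates. A generic sketch (not the package's actual API): `tri_xy` holds the three node positions and `tri_vals` the n-tuples tabulated at them:

```python
import numpy as np

def triangle_interp(p, tri_xy, tri_vals):
    """Linear interpolation inside one triangle using barycentric coordinates.

    Returns None if `p` lies outside the triangle (i.e. a different
    triangle should have been looked up).
    """
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    if min(l1, l2, l3) < -1e-12:
        return None
    return (l1 * np.asarray(tri_vals[0]) + l2 * np.asarray(tri_vals[1])
            + l3 * np.asarray(tri_vals[2]))
```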
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
NASA Astrophysics Data System (ADS)
Peters, Andre; Nehls, Thomas; Wessolek, Gerd
2016-06-01
Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
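The step and linear schemes compared above can be sketched in a few lines; `t_sig` and `m_sig` are the times and values of the significant mass changes retained by the filter (names invented for this example; the spline scheme would follow the same pattern with a cubic spline fit):

```python
import numpy as np

def step_interp(t, t_sig, m_sig):
    """Step scheme: hold the last significant mass value (original AWAT behaviour)."""
    idx = np.searchsorted(t_sig, t, side="right") - 1
    return np.asarray(m_sig)[np.clip(idx, 0, len(m_sig) - 1)]

def linear_interp(t, t_sig, m_sig):
    """Linear scheme between significant mass changes, as proposed above."""
    return np.interp(t, t_sig, m_sig)
```

Flux rates (P and ET) then follow from differencing the interpolated mass series at the chosen output resolution, which is why the step scheme only works at coarse resolution.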
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has been recently developed. This reconstructs an image from parallel data within a special scanning geometry that does not need rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, there appear empty cells in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most recommendable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved to be advantageous only for small sinograms (below 50 fan views).
Spatiotemporal Interpolation Methods for Solar Event Trajectories
NASA Astrophysics Data System (ADS)
Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe
2018-05-01
This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
NASA Astrophysics Data System (ADS)
Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene
2018-05-01
Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over Paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications on the modeled ice viscosity, which can feedback onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since stress balance and thermal equations are generally performed on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method that can be implemented within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials, rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation on an idealized case and a realistic Greenland configuration. A transient experiment for the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy using the higher-order vertical interpolation compared to models using the linear vertical interpolation, despite having fewer degrees of freedom. 
This method is also shown to improve a model's ability to capture sharp thermal gradients in an ice sheet, particularly close to the bed, when compared to models using a linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers, while dramatically reducing model runtime for transient simulations. Results indicate that, when using a higher-order vertical interpolation, runtimes for a transient ice sheet relaxation are upwards of 5 to 7 times faster than for a model using a linear vertical interpolation, which requires a higher number of vertical layers to achieve a similar result in simulated ice volume, basal temperature, and ice divide thickness. The findings suggest that this method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost that would otherwise be needed for a model using a linear vertical interpolation.
Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin
2011-12-13
The growing string method is a powerful tool for the systematic study of chemical reactions with theoretical methods, allowing rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).
Creasy, Arch; Barker, Gregory; Carta, Giorgio
2017-03-01
A methodology is presented to predict protein elution behavior from an ion exchange column using either individual or combined pH and salt gradients based on high-throughput batch isotherm data. The buffer compositions are first optimized to generate linear pH gradients from pH 5.5 to 7 with defined concentrations of sodium chloride. Next, high-throughput batch isotherm data are collected for a monoclonal antibody on the cation exchange resin POROS XS over a range of protein concentrations, salt concentrations, and solution pH. Finally, a previously developed empirical interpolation (EI) method is extended to describe protein binding as a function of the protein and salt concentration and solution pH without using an explicit isotherm model. The interpolated isotherm data are then used with a lumped kinetic model to predict the protein elution behavior. Experimental results obtained for laboratory scale columns show excellent agreement with the predicted elution curves for both individual and combined pH and salt gradients at protein loads up to 45 mg/mL of column. Numerical studies show that the model predictions are robust as long as the isotherm data cover the range of mobile phase compositions where the protein actually elutes from the column.
Objective Interpolation of Scatterometer Winds
NASA Technical Reports Server (NTRS)
Tang, Wenquing; Liu, W. Timothy
1996-01-01
Global wind fields are produced by successive corrections that use measurements by the European Remote Sensing Satellite (ERS-1) scatterometer. The methodology is described. The wind fields at 10-meter height provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) are used to initialize the interpolation process. The interpolated wind field product ERSI is evaluated in terms of its improvement over the initial guess field (ECMWF) and the bin-averaged ERS-1 wind field (ERSB). Spatial and temporal differences between ERSI, ECMWF and ERSB are presented and discussed.
Restoring the missing features of the corrupted speech using linear interpolation methods
NASA Astrophysics Data System (ADS)
Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.
2017-10-01
One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal to Noise Ratio (SNR) elements leaves an incomplete spectrogram. One approach is for the speech recognizer itself to modify the spectrogram in order to restore the missing elements; alternatively, the missing elements can be restored before recognition is performed, using spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by previous researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction methods. The experiments are conducted under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 male and 20 female speakers, each with two different utterances. As a result, 80% recognition accuracy is achieved at 25% SNR.
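A sketch of the combined time-frequency variant described above, assuming the simplest combination rule (averaging the along-time and along-frequency estimates; the paper may combine them differently):

```python
import numpy as np

def restore_missing(spec, mask):
    """Fill masked spectrogram cells by averaging linear interpolation
    along time (rows) and along frequency (columns).

    `mask` is True where elements were deleted as low-SNR.
    """
    out = np.array(spec, dtype=float)
    t_fill = np.array(spec, dtype=float)
    f_fill = np.array(spec, dtype=float)
    for f in range(spec.shape[0]):            # along time, per frequency bin
        bad = mask[f]
        if bad.any() and (~bad).any():
            t = np.arange(spec.shape[1])
            t_fill[f, bad] = np.interp(t[bad], t[~bad], spec[f, ~bad])
    for t in range(spec.shape[1]):            # along frequency, per frame
        bad = mask[:, t]
        if bad.any() and (~bad).any():
            fidx = np.arange(spec.shape[0])
            f_fill[bad, t] = np.interp(fidx[bad], fidx[~bad], spec[~bad, t])
    out[mask] = 0.5 * (t_fill[mask] + f_fill[mask])
    return out
```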
NASA Astrophysics Data System (ADS)
Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier
2017-04-01
The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables but finally ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, an evaluation of different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data, in glacierized basins with limited data is presented here. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data are available from nearby monitoring stations since 1983. The study period was 1983 - 1999, with a validation period from 1993 to 1999. For the temperature series, the aim was to extend the observed data and interpolate them. NCEP Reanalysis data were used to extend the observed series: 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very similar results; for parsimony, simple correlation is a viable choice. For the precipitation series, the aim was to interpolate observed data.
Two methodologies were evaluated: 1) Inverse Distance Weighting (IDW), whose results underestimate the amount of precipitation in high-altitude zones, and 2) ordinary kriging (OK), whose variograms were calculated from the multi-annual monthly mean precipitation and applied to the whole study period. OK leads to better results in both low- and high-altitude zones. For ice volume, the aim was to estimate values from historical data: 1) with the GlabTop algorithm, which needs digital elevation models, but these are available at an appropriate scale only since 2009; 2) with a widely applied but controversially discussed glacier area-volume relation whose parameters were calibrated with results from the GlabTop model. Both methodologies provide reasonable results, but for historical data the area-volume scaling only requires the glacier area, which is easy to calculate from satellite images available since 1986. In conclusion, simple correlation, OK, and the calibrated area-volume relation proved to be the best ways to interpolate glacio-climatic information. However, these methods must be carefully applied and revisited for specific situations of high complexity. This is a first step toward identifying the most appropriate methods to interpolate and extend observed data in glacierized basins with limited information. Further research should evaluate other methodologies and meteorological data in order to improve hydrological models and water management policies.
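As a minimal illustration of the inverse distance weighting scheme compared above, here is a sketch with hypothetical gauge coordinates and precipitation values (not the study's data):

```python
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each prediction is a weighted mean of
    the observations, with weights proportional to 1/distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical rain gauges (x, y in km; precipitation in mm)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p = np.array([50.0, 80.0, 60.0, 90.0])

# At the domain centre all gauges are equidistant, so IDW reduces
# to the arithmetic mean of the four observations (70.0)
center = idw(xy, p, np.array([[5.0, 5.0]]))
```

The `eps` guard keeps the weights finite at observation points, where IDW reproduces the measured value almost exactly.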
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimize distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the choice among interpolation degrees of 2, 4, and 8 times makes only a nominal difference (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
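The first recommendation, low-pass filtering before interpolating, can be sketched in one dimension with synthetic data; the profile, kernel, and upsampling factor below are illustrative assumptions, not the study's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(16.0)
# Toy 1-D "pressure profile": a smooth peak plus sensor noise
profile = np.exp(-0.5 * ((x - 8.0) / 3.0) ** 2) + 0.05 * rng.standard_normal(16)

# Recommendation (1): low-pass filter BEFORE interpolating
kernel = np.ones(3) / 3.0                  # simple moving-average filter
smoothed = np.convolve(profile, kernel, mode="same")

# Interpolate to 4x the sampling density (linear, as a stand-in)
x_fine = np.linspace(0.0, 15.0, 61)
upsampled = np.interp(x_fine, x, smoothed)

# Features (e.g. peak pressure location) are extracted from a map
# whose noise was attenuated before being spread by interpolation
peak_at = x_fine[np.argmax(upsampled)]
```

Filtering first means the interpolation step upsamples an already-smoothed map, rather than faithfully densifying the noise.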
Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene
2010-01-01
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
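The finite-difference portion of that pipeline can be sketched offline as follows; this is a simplified stand-in for the FPGA implementation, and the sampling rate and trajectory are hypothetical:

```python
import numpy as np

def motion_dynamics(position, dt):
    """Estimate velocity, acceleration and jerk from sampled encoder
    positions using finite differences (np.gradient uses central
    differences in the interior, one-sided at the edges)."""
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    return velocity, acceleration, jerk

# Hypothetical encoder trace: constant acceleration of 2 rad/s^2
dt = 0.01
t = np.arange(0.0, 1.0, dt)
theta = 0.5 * 2.0 * t ** 2

v, a, j = motion_dynamics(theta, dt)
# Central differences are exact for this quadratic in the interior,
# so a[50] recovers the true acceleration of 2.0 rad/s^2
```

In the paper's setting the same differences would be computed in fixed-point hardware after oversampling and decimation filtering.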
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed, both from theoretical domains such as computational geometry and from application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
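The dense ordinary-kriging system described above, with its Lagrange-multiplier row enforcing unbiasedness, can be sketched for a single query point; the exponential covariance and the sample values are illustrative assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, xq, cov=lambda h: np.exp(-h)):
    """Solve the dense ordinary-kriging system for one query point.
    The extra row/column enforces sum(w) = 1 (the unbiasedness
    constraint that distinguishes ordinary from simple kriging)."""
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(h)                      # data-data covariances
    A[n, :n] = A[:n, n] = 1.0               # Lagrange-multiplier row/col
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = cov(np.linalg.norm(xy - xq, axis=1))  # data-query covariances
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]
    return w @ z

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
# Kriging with a valid covariance (no nugget) is exact at a data point
at_data = ordinary_kriging(xy, z, np.array([0.0, 0.0]))
```

Covariance tapering replaces `cov` with its product against a compactly supported taper, zeroing most entries of `A` so a sparse iterative solver can be used in place of the dense solve.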
Registration-based interpolation applied to cardiac MRI
NASA Astrophysics Data System (ADS)
Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael S.; Lyksborg, Mark; Hansen, Mads Fogtmann; Darkner, Sune; Larsen, Rasmus
2010-03-01
Various approaches have been proposed for segmentation of cardiac MRI. An accurate segmentation of the myocardium and ventricles is essential to determine parameters of interest for the function of the heart, such as the ejection fraction. One problem with MRI is the poor resolution in one dimension. A 3D registration algorithm will typically use a trilinear interpolation of intensities to determine the intensity of a deformed template image. Due to the poor resolution across slices, such linear approximation is highly inaccurate since the assumption of smooth underlying intensities is violated. Registration-based interpolation is based on 2D registrations between adjacent slices and is independent of segmentations. Hence, rather than assuming smoothness in intensity, the assumption is that the anatomy is consistent across slices. The basis for the proposed approach is the set of 2D registrations between each pair of slices, both ways. The intensity of a new slice is then weighted by (i) the deformation functions and (ii) the intensities in the warped images. Unlike the approach by Penney et al. 2004, this approach takes into account deformation both ways, which gives more robustness where correspondence between slices is poor. We demonstrate the approach on a toy example and on a set of cardiac CINE MRI. Qualitative inspection reveals that the proposed approach provides a more convincing transition between slices than images obtained by linear interpolation. A quantitative validation reveals significantly lower reconstruction errors than both linear and registration-based interpolation based on one-way registrations.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few studies apply these methods to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background correction simulation indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) among spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods. All of these background correction methods achieve larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still attains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
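The idea of anchoring a smooth baseline to line-free regions and interpolating through the anchors can be sketched as follows; the synthetic spectrum is an illustrative assumption, and np.interp serves as a linear stand-in for the paper's spline step:

```python
import numpy as np

# Synthetic LIBS-like spectrum: smooth continuum plus two emission lines
x = np.linspace(0.0, 1.0, 400)
continuum = 0.5 + 0.3 * x
lines = (np.exp(-0.5 * ((x - 0.3) / 0.005) ** 2)
         + 0.8 * np.exp(-0.5 * ((x - 0.7) / 0.005) ** 2))
spectrum = continuum + lines

# Anchor the background at the minimum of each window (emission lines
# are positive excursions, so windowed minima sit on the continuum),
# then interpolate through the anchors and subtract
win = 40
anchors = [i + np.argmin(spectrum[i:i + win]) for i in range(0, 400, win)]
background = np.interp(x, x[anchors], spectrum[anchors])
corrected = spectrum - background
```

After subtraction the narrow lines remain at nearly full height while the estimated background tracks the true continuum closely, which is what drives the SBR improvement reported above.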
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiency greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
Hybrid Fourier pseudospectral/discontinuous Galerkin time-domain method for wave propagation
NASA Astrophysics Data System (ADS)
Pagán Muñoz, Raúl; Hornikx, Maarten
2017-11-01
The Fourier Pseudospectral time-domain (Fourier PSTD) method was shown to be an efficient way of modelling acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly staircase-like boundary shapes. This paper presents a hybrid approach to solve the LEE, coupling Fourier PSTD with a nodal Discontinuous Galerkin (DG) method. DG exhibits almost no restrictions with respect to geometrical complexity or boundary conditions. The aim of this novel method is to allow the computation of complex geometries and to be a step towards the implementation of frequency dependent boundary conditions by using the benefits of DG at the boundaries, while keeping the efficient Fourier PSTD in the bulk of the domain. The hybridization approach is based on conformal meshes to avoid spatial interpolation of the DG solutions when transferring values from DG to Fourier PSTD, while the data transfer from Fourier PSTD to DG is done utilizing spectral interpolation of the Fourier PSTD solutions. The accuracy of the hybrid approach is presented for one- and two-dimensional acoustic problems and the main sources of error are investigated. It is concluded that the hybrid methodology does not introduce significant errors compared to the Fourier PSTD stand-alone solver. An example of a cylinder scattering problem is presented and accurate results have been obtained when using the proposed approach. Finally, no instabilities were found during long-time calculation using the current hybrid methodology on a two-dimensional domain.
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis across a broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the regions of parameter space where flight dynamics vary dramatically, enhancing interpolation accuracy without over-burdening downstream controller synthesis and onboard memory. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok
2015-12-07
Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and aids in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied to predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
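Linear interpolation smoothing mixes lower- and higher-order conditional estimates with fixed weights. A toy sketch on a hypothetical amino-acid fragment (the weights and sequence are illustrative, not the paper's trained model):

```python
from collections import Counter

def interpolated_prob(seq, history, sym, lambdas=(0.3, 0.7)):
    """Jelinek-Mercer linear interpolation of unigram and bigram
    maximum-likelihood estimates: P(sym | history) is a weighted
    mix of the context-free and context-dependent probabilities."""
    unigrams = Counter(seq)
    bigrams = Counter(zip(seq, seq[1:]))
    p_uni = unigrams[sym] / len(seq)
    p_bi = bigrams[(history, sym)] / max(unigrams[history], 1)
    return lambdas[0] * p_uni + lambdas[1] * p_bi

# Hypothetical amino-acid fragment
seq = "MKVLAAMKVL"
p = interpolated_prob(seq, "K", "V")
# P(V) = 2/10 and P(V|K) = 2/2, so p = 0.3*0.2 + 0.7*1.0 = 0.76
```

The interpolation weights let the model fall back on the robust unigram estimate when a bigram context is rare, which is how the approach copes with high-dimensional sparse features.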
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for the full understanding of their performance and safety. Missing parts of the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of each construction step should be calculated before interpolating missing data in the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The missing-data rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
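The single-point regression step can be sketched as follows; the series, correlation, and gap are synthetic illustrations, not the stadium's data:

```python
import numpy as np

def impute_missing(target, reference, missing):
    """Fill gaps in one sensor's stress series via simple linear
    regression on a highly correlated reference point (the paper's
    criterion: correlation coefficient of 0.9 or more)."""
    ok = ~missing
    slope, intercept = np.polyfit(reference[ok], target[ok], 1)
    filled = target.copy()
    filled[missing] = slope * reference[missing] + intercept
    return filled

# Hypothetical perfectly correlated stress series (MPa)
ref = np.linspace(0.0, 10.0, 50)
tgt = 2.0 * ref + 1.0
mask = np.zeros(50, dtype=bool)
mask[20:25] = True                  # a stretch of lost data
tgt_missing = tgt.copy()
tgt_missing[mask] = np.nan

filled = impute_missing(tgt_missing, ref, mask)
```

With real data the fit would be performed separately for daytime and nighttime records, as the abstract prescribes.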
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
Enhancement of panoramic image resolution based on swift interpolation of Bezier surface
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Yang, Guo-guang; Bai, Jian
2007-01-01
A panoramic annular lens projects the view of the entire 360 degrees around the optical axis onto an annular plane based on flat cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between object and image, panoramic imaging systems play important roles in robot vision, surveillance and virtual reality applications. An annular image needs to be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it is too time-consuming to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that accounts for the characteristics of panoramic imagery. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is 78% less than that of cubic interpolation.
Interpolation algorithm for asynchronous ADC-data
NASA Astrophysics Data System (ADS)
Bramburger, Stefan; Zinke, Benny; Killat, Dirk
2017-09-01
This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data segments with constant and linear sections, together with a weighted overlap of signals transformed step-by-step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data are fed into synchronous digital signal processing.
Space-time interpolation of satellite winds in the tropics
NASA Astrophysics Data System (ADS)
Patoux, Jérôme; Levy, Gad
2013-09-01
A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft
NASA Technical Reports Server (NTRS)
Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter
2016-01-01
This paper presents a holistic model order reduction (MOR) methodology and framework that integrates the key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together the ROMs at grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with flexible wings being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrate that, relative to the full-order model, the X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states is reduced by 10X (from 180 to 19), and hence it holds great promise for robust ASE controller synthesis and novel vehicle design.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence, and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that implement piecewise local polynomials, typically linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and second-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most usual. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
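An even-length, linear-phase cubic kernel that samples exactly halfway between input pixels can be sketched in one dimension; the 4-tap weights below come from evaluating the Catmull-Rom cubic at t = 0.5, an illustrative choice rather than necessarily the paper's kernel:

```python
import numpy as np

# 4-tap, even-length, symmetric (linear-phase) kernel: the Catmull-Rom
# cubic evaluated at t = 0.5, i.e. exactly between two input samples
w = np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0

x = np.arange(16.0)
signal = 0.2 * x + 1.0          # a linear ramp is reproduced exactly

half_pel = np.convolve(signal, w, mode="valid")
# Sample k of half_pel estimates the signal at x = k + 1.5: the output
# grid sits on the half-pixel positions, which is exactly what is
# needed to align MS and Pan grids offset by half a pixel
```

Because the kernel is symmetric about the half-integer position, it is linear-phase: the shift it introduces is exactly half a pixel, with no frequency-dependent phase distortion.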
Zarco-Perello, Salvador; Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods across disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and to identify hotspots of these habitat-forming organisms, using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar distribution maps for all the taxa; however, cross-validation tests showed that IDW outperformed OK in predicting their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as sampling was done at shorter distances, prediction accuracies increased, especially for IDW.
OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and in other parts of the world where remote sensing technologies are not suitable for use.
Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods across all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and to identify hotspots of these habitat-forming organisms, using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar distribution maps for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as sampling was done at shorter distances, prediction accuracies increased, especially for IDW.
OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and in other parts of the world where remote sensing technologies are not suitable for use. PMID:29204321
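The IDW method favoured by this study is simple enough to sketch. Below is a minimal Python illustration of IDW together with the leave-one-out cross-validation commonly used to score interpolators; the grid coordinates and cover values are invented for illustration and are not the paper's data.

```python
import math

def idw(points, values, query, power=2):
    """Inverse distance weighting: each sample weighted by 1/d**power."""
    num, den = 0.0, 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v  # query coincides with a sample
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(points, values, power=2):
    """Leave-one-out cross-validation error, the kind of score used to
    compare IDW against ordinary kriging."""
    errs = []
    for i in range(len(points)):
        rest_p = points[:i] + points[i + 1:]
        rest_v = values[:i] + values[i + 1:]
        pred = idw(rest_p, rest_v, points[i], power)
        errs.append((pred - values[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical 5 m sampling grid of benthic cover (%) -- illustrative only
pts = [(x, y) for x in range(0, 20, 5) for y in range(0, 20, 5)]
vals = [10 + 2 * x + y for (x, y) in pts]   # a smooth synthetic field
print(loo_rmse(pts, vals))                  # small error on smooth data
```

Denser sampling shrinks the leave-one-out error, mirroring the paper's observation that accuracy improved at the 5 m spacing.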
High-Speed Digital Scan Converter for High-Frequency Ultrasound Sector Scanners
Chang, Jin Ho; Yen, Jesse T.; Shung, K. Kirk
2008-01-01
This paper presents a high-speed digital scan converter (DSC) capable of providing more than 400 images per second, which is necessary to examine the activities of the mouse heart, whose rate is 5–10 beats per second. To achieve the desired high-speed performance in a cost-effective manner, the developed DSC adopts a linear interpolation algorithm in which the two samples nearest to each object pixel of a monitor are selected and only angular interpolation is performed. Through computer simulation with the Field II program, its accuracy was investigated by comparing it to that of bilinear interpolation, known as the best algorithm in terms of accuracy and processing speed. The simulation results show that the linear interpolation algorithm is capable of providing acceptable image quality, meaning that the difference between the root mean square error (RMSE) values of the linear and bilinear interpolation algorithms is below 1%, if the sample rate of the envelope samples is at least four times higher than the Nyquist rate for the baseband component of echo signals. The designed DSC was implemented with a single FPGA (Stratix EP1S60F1020C6, Altera Corporation, San Jose, CA) on a DSC board that is part of a newly developed high-speed ultrasound imaging system. The temporal and spatial resolutions of the implemented DSC were evaluated by examining its maximum processing time, with a time stamp indicating when an image is completely formed, and by wire phantom testing, respectively. The experimental results show that the implemented DSC is capable of providing images at a rate of 400 images per second with negligible processing error. PMID:18430449
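The angular-only interpolation idea can be sketched as follows; this is our own minimal reading of the scheme (nearest range sample, linear blend between the two bracketing scan lines), with invented sector data rather than the authors' implementation.

```python
import math

def angular_interp(scanlines, angles, px, py):
    """Map a display pixel (px, py) to polar (r, theta), take the nearest
    range sample on the two scan lines bracketing theta, and blend them
    linearly in angle only -- the cheap alternative to full bilinear."""
    r = math.hypot(px, py)
    theta = math.atan2(py, px)
    for i in range(len(angles) - 1):          # angles assumed sorted
        if angles[i] <= theta <= angles[i + 1]:
            k = round(r)                      # nearest range sample (unit spacing)
            w = (theta - angles[i]) / (angles[i + 1] - angles[i])
            return (1 - w) * scanlines[i][k] + w * scanlines[i + 1][k]
    raise ValueError("pixel outside the sector")

# Invented sector data: 3 scan lines, 5 range samples each
angles = [0.0, 0.1, 0.2]
scanlines = [[10 * i + k for k in range(5)] for i in range(3)]
mid = angular_interp(scanlines, angles, 2 * math.cos(0.05), 2 * math.sin(0.05))
print(mid)   # halfway between scan lines 0 and 1 at range sample 2
```

Skipping the radial interpolation is what makes the FPGA pipeline cheap; the paper's point is that oversampling the envelope makes this simplification nearly free in accuracy.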
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
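The kernel view of membership functions is easy to demonstrate: with triangular memberships on a uniform grid, the Takagi-Sugeno weighted average reduces exactly to piecewise-linear interpolation. A one-dimensional toy of our own, not FLHI itself:

```python
def tri_membership(x, center, width):
    """Triangular fuzzy membership: 1 at the center, 0 beyond +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_interp(x, centers, values, width):
    """Takagi-Sugeno style weighted average; with triangular memberships on
    a uniform grid this is exactly piecewise-linear interpolation."""
    w = [tri_membership(x, c, width) for c in centers]
    s = sum(w)
    return sum(wi * vi for wi, vi in zip(w, values)) / s

centers = [0.0, 1.0, 2.0, 3.0]
values  = [0.0, 1.0, 4.0, 9.0]     # samples of y = x**2
print(fuzzy_interp(1.5, centers, values, width=1.0))  # the linear interpolant
```

Swapping the triangular kernel for a Gaussian or sinc-like membership changes the interpolation character, which is precisely the flexibility FLHI exploits.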
Analysis and calibration of Safecast data relative to the 2011 Fukushima Daiichi nuclear accident
NASA Astrophysics Data System (ADS)
Cervone, G.; Hultquist, C.
2017-12-01
Citizen-led movements producing scientific hazard data during disasters are increasingly common. After the Japanese earthquake-triggered tsunami in 2011, and the resulting radioactive releases at the damaged Fukushima Daiichi nuclear power plants, citizens monitored on-ground levels of radiation with innovative mobile devices built from off-the-shelf components. To date, the citizen-led Safecast project has recorded 50 million radiation measurements worldwide, with the majority of these measurements from Japan. A robust methodology is presented to calibrate contributed Safecast radiation measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using official observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the official and contributed datasets at specific time windows and at corresponding spatial locations. The coefficients found are aggregated and interpolated using cubic and linear methods to generate a time-dependent calibration function. Normal background radiation, decay rates and missing values are taken into account during the analysis. Results show that the official Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different cesium isotopes and their changing ratio with time. The new time-dependent calibration function takes into account the presence of different cesium isotopes and minimizes the error between official and contributed data. This time-dependent Safecast calibration function is necessary until 2030, after which date the error caused by the isotope ratio will become negligible.
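Why a static conversion factor drifts can be seen directly from the decay of the two isotopes. The half-lives below are standard physical values; the initial 1:1 activity ratio is an assumption for illustration, roughly what was released at Fukushima.

```python
import math

# Half-lives in years (standard values); initial 1:1 activity ratio assumed
T_CS134, T_CS137 = 2.065, 30.17

def activity_fraction(t_years):
    """Fraction of total activity contributed by each isotope after t years,
    starting from equal activities."""
    a134 = math.exp(-math.log(2) * t_years / T_CS134)
    a137 = math.exp(-math.log(2) * t_years / T_CS137)
    total = a134 + a137
    return a134 / total, a137 / total

# The Cs-134 share decays away, which is why a fixed conversion factor
# tuned to the 2011 mixture overestimates dose in later years:
f134_2011, _ = activity_fraction(0.0)
f134_2016, _ = activity_fraction(5.0)
print(f134_2011, f134_2016)   # 0.5 at release, far less five years on
```

By roughly 2030 the Cs-134 contribution is negligible, matching the paper's statement that the time-dependent correction is only needed until then.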
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods, respectively. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation remained below that of the other interpolation methods.
Generalized Reduced Order Modeling of Aeroservoelastic Systems
NASA Astrophysics Data System (ADS)
Gariffo, James Michael
Transonic aeroelastic and aeroservoelastic (ASE) modeling presents a significant technical and computational challenge. Flow fields with a mixture of subsonic and supersonic flow, as well as moving shock waves, can only be captured through high-fidelity CFD analysis. With modern computing power, it is relatively straightforward to determine the flutter boundary for a single structural configuration at a single flight condition, but problems of larger scope remain quite costly. Some such problems include characterizing a vehicle's flutter boundary over its full flight envelope, optimizing its structural weight subject to aeroelastic constraints, and designing control laws for flutter suppression. For all of these applications, reduced-order models (ROMs) offer substantial computational savings. ROM techniques in general have existed for decades, and the methodology presented in this dissertation builds on successful previous techniques to create a powerful new scheme for modeling aeroelastic systems, and predicting and interpolating their transonic flutter boundaries. In this method, linear ASE state-space models are constructed from modal structural and actuator models coupled to state-space models of the linearized aerodynamic forces through feedback loops. Flutter predictions can be made from these models through simple eigenvalue analysis of their state-transition matrices for an appropriate set of dynamic pressures. Moreover, this analysis returns the frequency and damping trend of every aeroelastic branch. In contrast, determining the critical dynamic pressure by direct time-marching CFD requires a separate run for every dynamic pressure being analyzed simply to obtain the trend for the critical branch. The present ROM methodology also includes a new model interpolation technique that greatly enhances the benefits of these ROMs.
This enables predictions of the dynamic behavior of the system for flight conditions where CFD analysis has not been explicitly performed, thus making it possible to characterize the overall flutter boundary with far fewer CFD runs. A major challenge of this research is that transonic flutter boundaries can involve multiple unstable modes of different types. Multiple ROM-based studies on the ONERA M6 wing are shown indicating that, in addition to classic bending-torsion (BT) flutter modes, which become unstable above a threshold dynamic pressure after two natural modes become aerodynamically coupled, some natural modes are able to extract energy from the air and become unstable by themselves. These single-mode instabilities tend to be weaker than the BT instabilities, but have near-zero flutter boundaries (exactly zero in the absence of structural damping). Examples of hump modes, which behave like natural mode instabilities before stabilizing, are also shown, as are cases where multiple instabilities coexist at a single flight condition. The result of all these instabilities is a highly sensitive flutter boundary, where small changes in Mach number, structural stiffness, and structural damping can substantially alter not only the stability of individual aeroelastic branches, but also which branch is critical. Several studies are shown presenting how the flutter boundary varies with respect to all three of these parameters, as well as the number of structural modes used to construct the ROMs. Finally, an investigation of the effectiveness and limitations of the interpolation scheme is presented. It is found that in regions where the flutter boundary is relatively smooth, the interpolation method produces ROMs that predict the flutter characteristics of the corresponding directly computed models to a high degree of accuracy, even for relatively coarsely spaced data.
On the other hand, in the transonic dip region, the interpolated ROMs show significant errors at points where the boundary changes rapidly; however, they still give a good qualitative estimate of where the largest jumps occur.
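The eigenvalue-sweep idea behind ROM flutter prediction can be illustrated on a toy single-mode system in which aerodynamic feedback erodes the damping as dynamic pressure grows; all coefficients are hypothetical and the model is far simpler than the ASE ROMs in the dissertation.

```python
import cmath

def aeroelastic_eigs(q, k=4.0, c0=0.4, c_aero=0.1):
    """Eigenvalues of a toy single-mode aeroelastic oscillator
    x'' + (c0 - c_aero*q)*x' + k*x = 0, where the aerodynamic term feeds
    energy in with dynamic pressure q (all coefficients are invented)."""
    c = c0 - c_aero * q
    disc = cmath.sqrt(c * c - 4.0 * k)
    return (-c + disc) / 2.0, (-c - disc) / 2.0

def flutter_q(q_grid):
    """Scan dynamic pressures, as one does with a ROM's state-transition
    matrix, and return the first q where an eigenvalue crosses into the
    right half-plane."""
    for q in q_grid:
        if any(e.real > 0 for e in aeroelastic_eigs(q)):
            return q
    return None

qs = [0.5 * i for i in range(21)]
print(flutter_q(qs))   # instability appears once c0 - c_aero*q < 0
```

A real ROM repeats this scan over many structural modes and flight conditions, which is exactly where eigenvalue analysis beats time-marching CFD on cost.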
Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.
Prochazka, Ivan; Panek, Petr
2009-07-01
A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present experiments and the results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that the nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series and thus usually oscillates very quickly in comparison with the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.
2008-03-01
investigated, as well as the methodology used. Chapter IV presents the data collection and analysis procedures, and the resulting analysis and...interpolate the data, although a non-interpolating model is possible. For this research Design and Analysis of Computer Experiments (DACE) is used...followed by the analysis. 4.1. Testing Approach The initial SMOMADS algorithm used for this research was acquired directly from Walston [70]. The
Robotic magnetic mapping with the Kapvik planetary micro-rover
NASA Astrophysics Data System (ADS)
Hay, A.; Samson, C.
2018-07-01
Geomagnetic data gathering by micro-rovers is gaining momentum both for future planetary exploration missions and for terrestrial applications in extreme environments. This paper presents research into the integration of a planetary micro-rover with a potassium total-field magnetometer. The 40 kg Kapvik micro-rover is an ideal platform due to its aluminium construction and rocker-bogie mobility system, which provides good manoeuvrability and terrainability. A light-weight GSMP 35U (uninhabited aerial vehicle) magnetometer, comprising a 0.65 kg sensor and a 0.63 kg electronics module, was mounted to the chassis via a custom 1.21 m composite boom. The boom dimensions were optimized to be an effective compromise between noise mitigation and mechanical practicality. An analysis using the fourth-difference method estimated the magnetic noise envelope at +/-0.03 nT at a 10 Hz sampling frequency from the integrated systems during robotic operations. A robotic magnetic survey captured the total magnetic intensity along three parallel 40 m long lines and a perpendicular 15 m long tie line over the course of 3.75 h. The total magnetic intensity data were corrected for diurnal variations, levelled by linear interpolation of tie-line intersection points, corrected for a regional gradient, and then interpolated using Delaunay triangulation to yield a residual magnetic intensity map. This map exhibited an anomalous linear feature corresponding to a magnetic dipole 650 nT in amplitude. This feature coincides with a storm sewer buried approximately 2 m in the subsurface. This work provides benchmark methodologies and data to guide future integration of magnetometers on board planetary micro-rovers.
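Tie-line levelling of the kind applied here can be sketched as follows: subtract from each survey line a correction obtained by linear interpolation between the misties observed at tie-line crossings. The positions and mistie values below are invented for illustration.

```python
def level_line(line_x, line_vals, tie_x, misties):
    """Subtract from each reading a correction obtained by piecewise-linear
    interpolation of the misties observed where this survey line crosses
    the tie lines (constant extrapolation beyond the first/last crossing)."""
    corrected = []
    for x, v in zip(line_x, line_vals):
        if x <= tie_x[0]:
            m = misties[0]
        elif x >= tie_x[-1]:
            m = misties[-1]
        else:
            for i in range(len(tie_x) - 1):
                if tie_x[i] <= x <= tie_x[i + 1]:
                    t = (x - tie_x[i]) / (tie_x[i + 1] - tie_x[i])
                    m = (1 - t) * misties[i] + t * misties[i + 1]
                    break
        corrected.append(v - m)
    return corrected

# Invented readings (nT offsets) along a 40 m line with two hypothetical ties
line_x   = [0.0, 10.0, 20.0, 30.0, 40.0]
vals     = [100.0, 102.0, 105.0, 104.0, 108.0]
levelled = level_line(line_x, vals, tie_x=[10.0, 30.0], misties=[2.0, 4.0])
print(levelled)
```

The survey in the paper used a single tie line, in which case the correction per line reduces to a constant shift; the interpolated form matters once several ties are flown.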
Recovery of sparse translation-invariant signals with continuous basis pursuit
Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero
2013-01-01
We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
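The first-order Taylor variant of CBP rests on the identity f(t - tau) ≈ f(t) - tau·f'(t), so each waveform is paired with its derivative in the dictionary. A sketch with a Gaussian bump, our own toy with the L1 solver omitted:

```python
import math

def gauss(t, s=1.0):
    """Elementary feature: a unit Gaussian bump."""
    return math.exp(-0.5 * (t / s) ** 2)

def dgauss(t, s=1.0):
    """Its derivative, the auxiliary dictionary element."""
    return -(t / s ** 2) * gauss(t, s)

def taylor_shift(t, tau):
    """First-order Taylor dictionary pair: f(t - tau) ~ f(t) - tau*f'(t).
    CBP constrains and solves for these coefficients; here we just
    evaluate the approximation."""
    return gauss(t) - tau * dgauss(t)

tau = 0.1
grid = [0.1 * t for t in range(-50, 51)]
err = max(abs(gauss(x - tau) - taylor_shift(x, tau)) for x in grid)
print(err)   # second-order small, on the order of tau**2
```

The error shrinks quadratically in the shift, which is why a continuous dictionary can stay much coarser than the discrete BP dictionary it replaces.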
VizieR Online Data Catalog: New atmospheric parameters of MILES cool stars (Sharma+, 2016)
NASA Astrophysics Data System (ADS)
Sharma, K.; Prugniel, P.; Singh, H. P.
2015-11-01
MILES V2 spectral interpolator. The FITS file is an improved version of the MILES interpolator previously presented in PVK. It contains the coefficients of the interpolator, which allow one to compute an interpolated spectrum given an effective temperature, log of surface gravity and metallicity (Teff, logg, and [Fe/H]). The file consists of three extensions containing the three temperature regimes described in the paper:
Extension 0 (warm): Teff 4000-9000K
Extension 1 (hot): Teff >7000K
Extension 2 (cold): Teff <4550K
The three functions are linearly interpolated in the overlapping Teff regions. Each extension contains a 2D image-type array whose first axis is the wavelength, described by a WCS (air wavelength, starting at 3536Å, step=0.9Å). This FITS file can be used by ULySS v1.3 or higher. (5 data files).
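The linear blending between temperature regimes can be sketched as below; the two stand-in models are hypothetical scalar functions, not the actual MILES polynomial interpolator.

```python
def blended_spectrum(teff, warm_model, hot_model, lo=7000.0, hi=9000.0):
    """Linearly blend two regime interpolators across their Teff overlap,
    in the spirit of the MILES warm/hot transition."""
    if teff <= lo:
        return warm_model(teff)
    if teff >= hi:
        return hot_model(teff)
    w = (teff - lo) / (hi - lo)          # 0 at lo, 1 at hi
    return (1 - w) * warm_model(teff) + w * hot_model(teff)

# Toy stand-ins for the per-regime interpolators (hypothetical):
def warm(T): return 1.0e-3 * T
def hot(T):  return 1.2e-3 * T

print(blended_spectrum(8000.0, warm, hot))  # halfway between the regimes
```

The blend guarantees the interpolated spectrum varies continuously as Teff crosses a regime boundary, which a hard switch would not.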
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
NASA Astrophysics Data System (ADS)
Hittmeir, Sabine; Philipp, Anne; Seibert, Petra
2017-04-01
In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smooths the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, between which linear interpolation is later applied in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
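A minimal version of such a remapping can be built by adding one midpoint node per interval and choosing it so the trapezoidal integral matches the cell average; the node-placement rule below is our own simplification, not the FLEXPART algorithm, but it satisfies the same three conditions (integral preservation, continuity, non-negativity).

```python
def remap(averages):
    """Insert a midpoint node in each unit-width interval so that the
    piecewise-linear interpolant reproduces every interval's integral,
    is continuous at the boundaries, and stays non-negative."""
    n = len(averages)
    # Boundary nodes: the smaller of the two adjacent interval averages,
    # so they never exceed either cell's mean (keeps midpoints >= 0).
    bounds = ([averages[0]]
              + [min(averages[i - 1], averages[i]) for i in range(1, n)]
              + [averages[-1]])
    nodes = []
    for i, p in enumerate(averages):
        f0, f1 = bounds[i], bounds[i + 1]
        mid = (4.0 * p - f0 - f1) / 2.0    # forces the trapezoid sum to p
        nodes.append((f0, mid, f1))
    return nodes

def interval_integral(f0, mid, f1):
    """Integral of the two half-interval trapezoids (unit-width interval)."""
    return 0.25 * (f0 + 2.0 * mid + f1)

avgs = [0.0, 2.0, 5.0, 1.0]   # invented precipitation cell averages
nodes = remap(avgs)
print([interval_integral(*t) for t in nodes])   # reproduces the averages
```

Shared boundary nodes give continuity for free; the remaining freedom in choosing them is exactly the degrees-of-freedom discussion in the abstract.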
LPV Controller Interpolation for Improved Gain-Scheduling Control Performance
NASA Technical Reports Server (NTRS)
Wu, Fen; Kim, SungWan
2002-01-01
In this paper, a new gain-scheduling control design approach is proposed by combining LPV (linear parameter-varying) control theory with interpolation techniques. The improvement of gain-scheduled controllers can be achieved from local synthesis of Lyapunov functions and continuous construction of a global Lyapunov function by interpolation. It has been shown that this combined LPV control design scheme is capable of improving closed-loop performance derived from local performance improvement. The gain of the LPV controller will also change continuously across the parameter space. The advantages of the newly proposed LPV control design are demonstrated through a detailed AMB controller design example.
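Plain gain interpolation, a much simpler cousin of the Lyapunov-function interpolation proposed here, illustrates the continuity requirement on the scheduled controller; the design points and gain matrices below are invented.

```python
def schedule_gain(rho, design_pts, gains):
    """Gain scheduling by linear interpolation of locally designed
    state-feedback gains over the scheduling parameter rho (toy 2-state
    gains; real LPV synthesis interpolates Lyapunov certificates too)."""
    if rho <= design_pts[0]:
        return gains[0]
    if rho >= design_pts[-1]:
        return gains[-1]
    for i in range(len(design_pts) - 1):
        if design_pts[i] <= rho <= design_pts[i + 1]:
            t = (rho - design_pts[i]) / (design_pts[i + 1] - design_pts[i])
            return [(1 - t) * a + t * b for a, b in zip(gains[i], gains[i + 1])]

# Hypothetical gains designed at three operating points:
pts   = [0.0, 1.0, 2.0]
gains = [[1.0, 0.5], [2.0, 1.0], [4.0, 1.5]]
print(schedule_gain(0.5, pts, gains))   # varies continuously across rho
```

Interpolating only the gains gives continuity but no stability certificate in between; interpolating the local Lyapunov functions, as the paper does, is what extends the local guarantees across the parameter space.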
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
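The simplest of the five pipelines, linear-interpolation upsampling followed by cross-correlation alignment, can be sketched as below with an invented chromatographic peak.

```python
def upsample_linear(y, factor):
    """Linear-interpolation upsampling of an undersampled first-dimension
    chromatogram (the simplest of the methods compared in the study)."""
    out = []
    for i in range(len(y) - 1):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * y[i] + t * y[i + 1])
    out.append(y[-1])
    return out

def best_shift(a, b, max_lag):
    """Integer lag maximizing the cross-correlation of two traces."""
    def xcorr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

peak    = [0, 0, 1, 4, 1, 0, 0]
shifted = [0, 1, 4, 1, 0, 0, 0]      # same peak, one sample earlier
up_a = upsample_linear(peak, 4)
up_b = upsample_linear(shifted, 4)
print(best_shift(up_a, up_b, 8))     # sub-sample alignment on the fine axis
```

Upsampling first lets the alignment resolve shifts finer than the sparse first-dimension sampling interval, which is the whole point of interpolating before cross-correlating.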
$X(3872)$ and $Y(4140)$ using diquark-antidiquark operators with lattice QCD
Padmanath, M.; Lang, C. B.; Prelovsek Komelj, Sasa
2015-08-01
We perform a lattice study of charmonium-like mesons with $J^{PC}=1^{++}$ and three quark contents $\bar cc \bar du$, $\bar cc(\bar uu+\bar dd)$ and $\bar cc \bar ss$, where the latter two can mix with $\bar cc$. This simulation with $N_f=2$ and $m_\pi=266$ MeV aims at the possible signatures of four-quark exotic states. We utilize a large basis of $\bar cc$, two-meson and diquark-antidiquark interpolating fields, with diquarks in both anti-triplet and sextet color representations. A lattice candidate for X(3872) with I=0 is observed very close to the experimental state only if both $\bar cc$ and $D\bar D^*$ interpolators are included; the candidate is not found if diquark-antidiquark and $D\bar D^*$ are used in the absence of $\bar cc$. No candidate for neutral or charged X(3872), or any other exotic candidate, is found in the I=1 channel. We also do not find signatures of exotic $\bar cc\bar ss$ candidates below 4.3 GeV, such as Y(4140). Possible physics and methodology related reasons for that are discussed. Along the way, we present the diquark-antidiquark operators as linear combinations of the two-meson operators via the Fierz transformations.
NASA Astrophysics Data System (ADS)
Stankiewicz, Witold; Morzyński, Marek; Kotecki, Krzysztof; Noack, Bernd R.
2017-04-01
We present a low-dimensional Galerkin model with state-dependent modes capturing linear and nonlinear dynamics. The departure point is a direct numerical simulation of the three-dimensional incompressible flow around a sphere at Reynolds number 400. This solution starts near the unstable steady Navier-Stokes solution and converges to a periodic limit cycle. The investigated Galerkin models are based on the dynamic mode decomposition (DMD) and derive the dynamical system from first principles, the Navier-Stokes equations. A DMD model with training data from the initial linear transient fails to predict the limit cycle. Conversely, a model from limit-cycle data underpredicts the initial growth rate roughly by a factor of 5. Key enablers for uniform accuracy throughout the transient are a continuous mode interpolation between both oscillatory fluctuations and the addition of a shift mode. This interpolated model is shown to capture both the transient growth of the oscillation and the limit cycle.
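Continuous mode interpolation can be sketched in its simplest form: blend two unit-norm modes and renormalize so every intermediate state is again a valid mode. This toy omits the shift mode and all of the Galerkin machinery.

```python
import math

def normalize(v):
    """Scale a vector to unit norm."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def interp_mode(m0, m1, s):
    """Blend two unit-norm modes with weight s in [0, 1] and renormalize,
    a simplified stand-in for interpolating between the transient and
    limit-cycle DMD modes."""
    blended = [(1 - s) * a + s * b for a, b in zip(m0, m1)]
    return normalize(blended)

# Invented 3-component stand-ins for the two oscillatory mode shapes
m_transient = normalize([1.0, 0.0, 0.2])
m_limit     = normalize([0.6, 0.8, 0.0])
mid = interp_mode(m_transient, m_limit, 0.5)
print(sum(x * x for x in mid))   # stays unit norm at every s
```

Renormalizing after the blend is what keeps the interpolated basis usable for a Galerkin projection at every intermediate state.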
NASA Technical Reports Server (NTRS)
Li, Y.; Navon, I. M.; Courtier, P.; Gauthier, P.
1993-01-01
An adjoint model is developed for variational data assimilation using the 2D semi-Lagrangian semi-implicit (SLSI) shallow-water equation global model of Bates et al. with special attention being paid to the linearization of the interpolation routines. It is demonstrated that with larger time steps the limit of the validity of the tangent linear model will be curtailed due to the interpolations, especially in regions where sharp gradients in the interpolated variables coupled with strong advective wind occur, a synoptic situation common in the high latitudes. This effect is particularly evident near the pole in the Northern Hemisphere during the winter season. Variational data assimilation experiments of 'identical twin' type with observations available only at the end of the assimilation period perform well with this adjoint model. It is confirmed that the computational efficiency of the semi-Lagrangian scheme is preserved during the minimization process, related to the variational data assimilation procedure.
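The care needed when linearizing interpolation routines can be illustrated on 1-D linear interpolation: its tangent-linear model, derived by differentiating with respect to both the weight and the grid values, is checked against a finite-difference perturbation.

```python
def interp(alpha, f0, f1):
    """Linear interpolation between two grid values, weight alpha in [0, 1]."""
    return (1 - alpha) * f0 + alpha * f1

def interp_tl(alpha, f0, f1, dalpha, df0, df1):
    """Tangent-linear model: first-order response to perturbations of both
    the interpolation weight and the interpolated grid values."""
    return (1 - alpha) * df0 + alpha * df1 + (f1 - f0) * dalpha

# Check the tangent linear against a finite-difference perturbation
a, f0, f1 = 0.3, 2.0, 5.0
da, d0, d1 = 1e-6, 2e-6, -1e-6
fd = interp(a + da, f0 + d0, f1 + d1) - interp(a, f0, f1)
tl = interp_tl(a, f0, f1, da, d0, d1)
print(abs(fd - tl))   # only the second-order cross term da*(d1 - d0) remains
```

In a semi-Lagrangian model the weight itself depends on the advecting wind, so the `(f1 - f0) * dalpha` term is where sharp gradients in the interpolated field degrade the tangent-linear approximation, as the abstract notes.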
NASA Astrophysics Data System (ADS)
Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.
2018-03-01
The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with multiple linear regression drift model (RK + MLR), and regression kriging with principal component regression drift model (RK + PCR)—were examined. The results of the performed study were compiled into an algorithm of choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, the multiple linear regression should be used instead.
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. 
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
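The IDW estimator compared in the study above is simple enough to sketch directly. The following is a minimal NumPy implementation (an illustration, not the authors' code), in which `power` plays the role of the power parameter (1, 2, 3) varied in the study:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse distance weighting: each prediction is a weighted mean of
    the samples, with weights 1 / distance**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0.0):                 # query coincides with a sample
            preds.append(values[d == 0.0][0])
            continue
        w = 1.0 / d**power
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

# Samples at the corners of a unit square; the centre is equidistant from
# all four, so IDW reduces to the plain mean there.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [1.0, 2.0, 3.0, 4.0]
print(idw(pts, vals, [(0.5, 0.5)], power=2))   # [2.5]
```

A larger `power` concentrates weight on the nearest samples, which is why the choice of power is one of the parameters the study optimizes.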
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
A general method for generating bathymetric data for hydrodynamic computer models
Burau, J.R.; Cheng, R.T.
1989-01-01
To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric data base using the linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to (1) organize the input bathymetric data and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
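For a triangulated bathymetric data set, the linear finite-element shape functions mentioned in the report reduce to barycentric coordinates of the bounding triangle. A minimal sketch (illustrative, not the documented programs):

```python
def interp_depth_triangle(tri_xy, tri_depth, q):
    """Depth at point q inside a triangle, via the linear finite-element
    shape functions (barycentric coordinates) of the three vertices."""
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    d1, d2, d3 = tri_depth
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)  # 2 * signed area
    l1 = ((y2 - y3) * (q[0] - x3) + (x3 - x2) * (q[1] - y3)) / det
    l2 = ((y3 - y1) * (q[0] - x3) + (x1 - x3) * (q[1] - y3)) / det
    l3 = 1.0 - l1 - l2          # the three shape functions sum to one
    return l1 * d1 + l2 * d2 + l3 * d3

# Depths 10, 20, 30 m at the vertices; the centroid averages them.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(interp_depth_triangle(tri, (10.0, 20.0, 30.0), (1 / 3, 1 / 3)))  # ~20.0
```

The search algorithm in the report supplies the bounding triangle; the interpolation step itself is just this weighted sum.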
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fazio, A.; Henry, B.; Hood, D.
1966-01-01
Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, (a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and (b) reiterating the need for and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter spaces, and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
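The greedy sampling loop described above can be illustrated in one dimension. In this sketch a plain linear interpolant stands in for the Kriging surrogate and a scalar function `f` stands in for the benchmark ASE model; both are stand-ins, not the paper's actual database or error metric:

```python
import numpy as np

def greedy_sample(f, lo, hi, tol=1e-3, n_cand=201):
    """Greedy sampling sketch: start from the interval endpoints and
    repeatedly sample where the surrogate disagrees most with the
    benchmark model f, until the worst error drops below tol."""
    cand = np.linspace(lo, hi, n_cand)           # candidate flight conditions
    xs = [lo, hi]
    while True:
        sx = np.sort(xs)
        surrogate = np.interp(cand, sx, f(sx))   # stand-in for Kriging
        err = np.abs(surrogate - f(cand))
        worst = int(np.argmax(err))
        if err[worst] < tol:
            return np.sort(xs)
        xs.append(cand[worst])                   # add the worst-error point

samples = greedy_sample(np.sin, 0.0, 3.0)
print(len(samples))   # far fewer samples than the 201 candidates
```

As in the paper, the selected points cluster where the model varies most, which is what allows the database to be compacted without losing accuracy.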
NASA Astrophysics Data System (ADS)
El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander
2016-04-01
There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when they are run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself through irregularities in the spatial and temporal domains of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution at daily and weekly time steps, which are frequently required by process-based models [1]. The DINEOF methodology results in a degree of error being affixed to the refined data product. In order to determine the degree of error introduced through this process, the suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high-resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Artificial cloud cover scenarios are also conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. doi:10.1016/j.seares.2010.08.002
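The core of the DINEOF idea can be sketched in a few lines: reconstruct the gaps from a truncated EOF (SVD) decomposition and iterate until the gap values stop changing. This is a toy sketch; the operational method also removes the mean field and cross-validates the number of retained modes, which are omitted here:

```python
import numpy as np

def dineof_fill(X, n_modes=1, n_iter=200, tol=1e-9):
    """Toy DINEOF: fill NaN gaps by iterating a truncated-SVD (EOF)
    reconstruction; observed entries are held fixed at every step."""
    X = np.array(X, dtype=float)
    gaps = np.isnan(X)
    filled = np.where(gaps, 0.0, X)              # first guess for the gaps
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        new = np.where(gaps, recon, X)           # only gaps get updated
        if np.max(np.abs(new - filled)) < tol:
            return new
        filled = new
    return filled

# A rank-1 "time x space" field with one cloud-masked entry.
true = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
obs = true.copy()
obs[1, 1] = np.nan
print(dineof_fill(obs)[1, 1])   # converges toward the true value 4.0
```

Because the observed entries are never overwritten, the iteration only interpolates the gaps, mirroring how DINEOF fills cloud-masked pixels.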
Dynamic graphs, community detection, and Riemannian geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun
A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
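The entry-wise linear interpolation baseline that the Riemannian methods are compared against is straightforward: interpolate each edge weight independently between two adjacency-matrix snapshots. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def interp_graph(A0, A1, t):
    """Entry-wise linear interpolation between two weighted adjacency
    matrices: A0 at t = 0 and A1 at t = 1."""
    A0 = np.asarray(A0, dtype=float)
    A1 = np.asarray(A1, dtype=float)
    return (1.0 - t) * A0 + t * A1

# An edge whose weight grows from 1 to 3 between snapshots.
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])
A1 = np.array([[0.0, 3.0], [3.0, 0.0]])
print(interp_graph(A0, A1, 0.5))   # edge weight 2.0 at the halfway point
```

The paper's point is that this baseline treats edges independently and ignores the geometry of the space of graphs, which is what the Riemannian interpolation improves on.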
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently, a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support.
Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, resulting in lower estimation errors than traditional local approaches and significant memory savings over the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems.
This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
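A compact sketch of the tapered ordinary kriging system follows. SciPy does not ship SYMMLQ, so MINRES, a closely related Krylov method for symmetric indefinite systems, is used as a stand-in; the exponential covariance, spherical taper, taper range, nugget, and synthetic field are all illustrative assumptions rather than the paper's choices:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                         # scattered sample sites
z = np.sin(3 * pts[:, 0]) + np.cos(3 * pts[:, 1])  # synthetic observations

def exp_cov(h, length=0.3):
    return np.exp(-h / length)                     # exponential covariance

def spherical_taper(h, theta=0.2):
    t = np.clip(h / theta, 0.0, 1.0)               # zero beyond range theta
    return 1.0 - 1.5 * t + 0.5 * t**3

n = len(pts)
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = exp_cov(h) * spherical_taper(h)                # tapered: mostly zeros
C[np.diag_indices(n)] += 1e-6                      # tiny nugget for stability

# The ordinary kriging system [[C, 1], [1^T, 0]] is symmetric but
# indefinite, hence a Krylov solver for symmetric indefinite systems.
A = np.zeros((n + 1, n + 1))
A[:n, :n] = C
A[:n, n] = A[n, :n] = 1.0
A = sparse.csr_matrix(A)

q = np.array([0.5, 0.5])                           # query location
hq = np.linalg.norm(pts - q, axis=1)
b = np.append(exp_cov(hq) * spherical_taper(hq), 1.0)

w, info = minres(A, b)                             # info == 0 on convergence
pred = w[:n] @ z                                   # kriging estimate at q
```

The taper makes the covariance matrix sparse, so only matrix-vector products over the retained entries are needed per iteration, which is the source of the memory savings described above.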
Ellis, Robert J; Zhu, Bilei; Koenig, Julian; Thayer, Julian F; Wang, Ye
2015-09-01
As the literature on heart rate variability (HRV) continues to burgeon, so too do the challenges faced with comparing results across studies conducted under different recording conditions and analysis options. Two important methodological considerations are (1) what sampling frequency (SF) to use when digitizing the electrocardiogram (ECG), and (2) whether to interpolate an ECG to enhance the accuracy of R-peak detection. Although specific recommendations have been offered on both points, the evidence used to support them can be seen to possess a number of methodological limitations. The present study takes a new and careful look at how SF influences 24 widely used time- and frequency-domain measures of HRV through the use of a Monte Carlo-based analysis of false positive rates (FPRs) associated with two-sample tests on independent sets of healthy subjects. HRV values from the first sample were calculated at 1000 Hz, and HRV values from the second sample were calculated at progressively lower SFs (and either with or without R-peak interpolation). When R-peak interpolation was applied prior to HRV calculation, FPRs for all HRV measures remained very close to 0.05 (i.e. the theoretically expected value), even when the second sample had an SF well below 100 Hz. Without R-peak interpolation, all HRV measures held their expected FPR down to 125 Hz (and far lower, in the case of some measures). These results provide concrete insights into the statistical validity of comparing datasets obtained at (potentially) very different SFs; comparisons which are particularly relevant for the domains of meta-analysis and mobile health.
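One common form of R-peak interpolation is a three-point parabolic fit around the discrete maximum, which recovers sub-sample peak timing from a coarsely sampled ECG. The sketch below is an illustration of that idea (a Gaussian-shaped synthetic R wave, not the study's exact procedure or data):

```python
import numpy as np

def refine_peak(sig, i, fs):
    """Refine a discrete R-peak index i with a three-point parabolic fit;
    returns the estimated peak time in seconds."""
    y0, y1, y2 = sig[i - 1], sig[i], sig[i + 1]
    denom = y0 - 2.0 * y1 + y2
    delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom  # vertex offset
    return (i + delta) / fs

# A Gaussian-shaped "R wave" whose true apex falls between 125 Hz samples.
fs = 125.0
t = np.arange(0, 1, 1 / fs)
true_t = 0.5035
sig = np.exp(-(((t - true_t) / 0.01) ** 2))
i = int(np.argmax(sig))
print(abs(refine_peak(sig, i, fs) - true_t))   # well under one sample (8 ms)
```

This kind of refinement is why the study finds that low sampling frequencies remain statistically usable once R-peak interpolation is applied.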
Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b
NASA Astrophysics Data System (ADS)
Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake
2017-10-01
Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm and twice with the 4.5 µm channels of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
Application of cokriging techniques for the estimation of hail size
NASA Astrophysics Data System (ADS)
Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier
2018-01-01
There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.
Using 3D dynamic cartography and hydrological modelling for linear streamflow mapping
NASA Astrophysics Data System (ADS)
Drogue, G.; Pfister, L.; Leviandier, T.; Humbert, J.; Hoffmann, L.; El Idrissi, A.; Iffly, J.-F.
2002-10-01
This paper presents a regionalization methodology and an original representation of the downstream variation of daily streamflow using a conceptual rainfall-runoff model (HRM) and the 3D visualization tools of the GIS ArcView. The regionalization of the parameters of the HRM model was obtained by simultaneously fitting the runoff series from five sub-basins of the Alzette river basin (Grand Duchy of Luxembourg) according to the permeability of geological formations. After validating the transposability of the regional parameter values on five test basins, streamflow series were simulated with the model at ungauged sites in one medium-sized, geologically contrasted test basin and interpolated assuming a linear increase of streamflow between modelling points. 3D spatio-temporal cartography of mean annual and high raw and specific discharges is illustrated. During a severe flood, the propagation of the flood waves in the different parts of the stream network shows an important contribution from sub-basins lying on impervious geological formations (direct runoff) compared with those including permeable geological formations, which have a more contrasted hydrological response. The effect of the spatial variability of rainfall is clearly perceptible.
Umehara, Kensuke; Ota, Junko; Ishida, Takayuki
2017-10-18
In this study, the super-resolution convolutional neural network (SRCNN) scheme, an emerging deep-learning-based super-resolution method for enhancing image resolution in chest CT images, was applied and evaluated using a post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive. The 89 CT cases were divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image, which was down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of the conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular for a ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
Hindrikson, Maris; Remm, Jaanus; Männil, Peep; Ozolins, Janis; Tammeleht, Egle; Saarma, Urmas
2013-01-01
Spatial genetics is a relatively new field in wildlife and conservation biology that is becoming an essential tool for unravelling the complexities of animal population processes, and for designing effective strategies for conservation and management. Conceptual and methodological developments in this field are therefore critical. Here we present two novel methodological approaches that further the analytical possibilities of STRUCTURE and DResD. Using these approaches we analyse structure and migrations in a grey wolf (Canis lupus) population in north-eastern Europe. We genotyped 16 microsatellite loci in 166 individuals sampled from the wolf population in Estonia and Latvia that has been under strong and continuous hunting pressure for decades. Our analysis demonstrated that this relatively small wolf population is represented by four genetic groups. We also used a novel methodological approach that uses linear interpolation to statistically test the spatial separation of genetic groups. The new method, which is capable of using program STRUCTURE output, can be applied widely in population genetics to reveal both core areas and areas of low significance for genetic groups. We also used a recently developed spatially explicit individual-based method, DResD, and applied it for the first time to microsatellite data, revealing a migration corridor and barriers, and several contact zones.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low root-mean-square errors for regular, slightly irregular and very irregular time series.
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
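The kernel-based estimator described above can be sketched directly: standardize both series, form all pairwise products, and weight each product by a Gaussian in the time separation of the two observations. This is a minimal zero-lag version (the bandwidth `h` and the test signal are illustrative assumptions):

```python
import numpy as np

def gauss_kernel_corr(tx, x, ty, y, h):
    """Kernel-based Pearson correlation for irregularly sampled series:
    a weighted mean over all products of standardized observations, with
    Gaussian weights that shrink as the observation times separate."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    w = np.exp(-0.5 * ((tx[:, None] - ty[None, :]) / h) ** 2)
    return float(np.sum(w * np.outer(x, y)) / np.sum(w))

# Two irregularly sampled copies of the same signal correlate near +1,
# without any interpolation onto a regular grid.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))
x = np.sin(t)
print(gauss_kernel_corr(t, x, t, x, h=0.1))
```

For the lagged correlation function mentioned in the abstract, the weight would instead depend on the difference between each pair's time separation and the estimation lag.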
Ball-morph: definition, implementation, and comparative evaluation.
Whited, Brian; Rossignac, Jaroslaw Jarek
2011-06-01
We define b-compatibility for planar curves and propose three ball morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves at the same angle, which is a right angle for the circular and parabolic morphs. We provide simple constructions for these ball-morphs and compare them to each other and other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph has consistently the shortest travel-distance and the circular ball-morph has the least amount of distortion.
The Interpolation Theory of Radial Basis Functions
NASA Astrophysics Data System (ADS)
Baxter, Brad
2010-06-01
In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
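The basic multiquadric interpolation problem studied in the dissertation can be written down directly as one symmetric linear solve against the kernel matrix. This small dense sketch is only the naive formulation; the dissertation's Toeplitz and FFT machinery is what makes it scale to many points:

```python
import numpy as np

def rbf_interp(centers, values, query, c=1.0):
    """Multiquadric RBF interpolation: solve the symmetric interpolation
    system A w = f with A_ij = sqrt(|x_i - x_j|^2 + c^2), then evaluate
    the same kernel at the query points. c is the shape parameter."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    w = np.linalg.solve(np.sqrt(d**2 + c**2), values)
    dq = np.linalg.norm(np.atleast_2d(query)[:, None, :]
                        - centers[None, :, :], axis=2)
    return np.sqrt(dq**2 + c**2) @ w

rng = np.random.default_rng(2)
centers = rng.random((20, 2))                    # 20 scattered points in 2D
vals = np.sin(centers[:, 0]) + centers[:, 1]
print(np.allclose(rbf_interp(centers, vals, centers), vals))  # True
```

Nonsingularity of this system for distinct points is exactly the kind of question the first part of the dissertation addresses, and the shape parameter `c` is the quantity whose large-`c` limit is studied at the end.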
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between accuracy and computational effort. The distance-driven method offers the possibility to reduce bias, but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Lastly, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
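All three projectors compared above reduce, at their core, to sampling the image along each ray with an interpolation kernel. A minimal sketch of a generic ray-driven line integral using bilinear interpolation (illustrative only; the sampling density, boundary handling, and geometry are assumptions, and this is not the distance-driven or Joseph implementation):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at continuous (x, y) by bilinear interpolation; zero outside."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    if x0 < 0 or y0 < 0 or x1 >= img.shape[1] or y1 >= img.shape[0]:
        return 0.0
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1] +
            (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

def ray_integral(img, p0, p1, n_samples=200):
    """Approximate the line integral of img along the segment p0 -> p1 by
    equally spaced bilinear samples (sampling density is an assumption)."""
    step = float(np.linalg.norm(np.subtract(p1, p0))) / (n_samples - 1)
    total = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        total += bilinear_sample(img, p0[0] + t * (p1[0] - p0[0]),
                                 p0[1] + t * (p1[1] - p0[1]))
    return total * step
```

For a uniform image, the integral approaches the geometric length of the ray segment, which is a quick sanity check on any forward projector.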
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance.
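A minimal sketch of the spectral linear interpolation step described above, assuming the artifact is removed by linearly interpolating the amplitude spectrum across a narrow band around each harmonic of the vibration frequency (the notch half-width and harmonic count below are illustrative parameters, not the authors' settings):

```python
import numpy as np

def interpolate_spectrum(signal, fs, f_vib, n_harmonics=5, half_width=1.0):
    """Remove vibration artifacts by linearly interpolating the amplitude
    spectrum across each harmonic of the vibration frequency f_vib (Hz)."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag, phase = np.abs(spec), np.angle(spec)
    for k in range(1, n_harmonics + 1):
        f = k * f_vib
        band = (freqs >= f - half_width) & (freqs <= f + half_width)
        idx = np.where(band)[0]
        if idx.size == 0 or idx[0] == 0 or idx[-1] == len(freqs) - 1:
            continue  # skip bands touching the spectrum edges
        lo, hi = idx[0] - 1, idx[-1] + 1
        # linear interpolation of magnitude between the band edges
        mag[idx] = np.interp(freqs[idx], [freqs[lo], freqs[hi]],
                             [mag[lo], mag[hi]])
    return np.fft.irfft(mag * np.exp(1j * phase), n=n)
```

A band-stop filter would instead zero or attenuate the band; interpolation preserves an estimate of the underlying sEMG power at those frequencies.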
Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe
2013-01-01
Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by the DW-MR datasets themselves. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381
Time-stable overset grid method for hyperbolic problems using summation-by-parts operators
NASA Astrophysics Data System (ADS)
Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.
2018-05-01
A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
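The Lagrangian interpolation used above to transfer incoming characteristics between overlapping grids is standard polynomial interpolation; for reference, a minimal evaluation routine (illustrative, not the paper's implementation):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # i-th Lagrange basis polynomial evaluated at x
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total
```

With p + 1 donor nodes the interpolant reproduces polynomials of degree p exactly, which is the usual accuracy argument for choosing the donor stencil width.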
NASA Astrophysics Data System (ADS)
Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.
2017-12-01
In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate top-of-the-atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high-spatial-resolution satellite missions.
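The look-up-table trade-off described above (non-linear radiances force a dense node structure) can be illustrated with a toy 1-D table; the "radiance" curve below is a stand-in function chosen for illustration, not a VLIDORT output:

```python
import numpy as np

def lut_interp(nodes, values, x):
    """Piecewise-linear look-up-table interpolation (wrapper over numpy.interp)."""
    return np.interp(x, nodes, values)

# a stand-in for a non-linear RTM radiance curve (illustrative only)
radiance = lambda a: np.exp(-3 * a) + 0.1 * np.sin(8 * a)

x = np.linspace(0.0, 1.0, 1001)
errors = {n: np.max(np.abs(lut_interp(np.linspace(0, 1, n),
                                      radiance(np.linspace(0, 1, n)), x)
                           - radiance(x)))
          for n in (5, 17, 65)}
```

The maximum interpolation error shrinks roughly quadratically with node spacing, which is why a curved radiance response demands many nodes per dimension; a trained regression (such as a neural network) avoids that per-dimension cost.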
Horn’s Curve Estimation Through Multi-Dimensional Interpolation
2013-03-01
complex nature of human behavior has not yet been broached. This is not to say analysts play favorites in reaching conclusions, only that varied...Chapter III, Section 3.7. For now, it is sufficient to say underdetermined data presents technical challenges and all such datasets will be excluded from...database lookup table and then use the method of linear interpolation to instantaneously estimate the unknown points on an as-needed basis ( say from a user
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method achieves high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) method obtains similar or better results than NARM while requiring only 0.3% of its computational time.
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea-floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric-field estimates are combined in a weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.
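A sketch of the tri-linear weighting mentioned above; the conductivity-weighted variant is an assumption about the form of the stated modification, not the MOD3DEM code:

```python
import numpy as np

def trilinear_weights(fx, fy, fz):
    """Weights of the eight surrounding nodes for fractional offsets in [0, 1]."""
    return np.array([
        (1 - fx) * (1 - fy) * (1 - fz), fx * (1 - fy) * (1 - fz),
        (1 - fx) * fy * (1 - fz),       fx * fy * (1 - fz),
        (1 - fx) * (1 - fy) * fz,       fx * (1 - fy) * fz,
        (1 - fx) * fy * fz,             fx * fy * fz,
    ])

def conductivity_weighted(w, sigma):
    """Illustrative sketch of the stated idea: scale each nodal weight by a
    local conductivity factor and renormalize so the weights still sum to 1.
    (The exact form of the modification is an assumption.)"""
    ws = w * sigma
    return ws / ws.sum()
```

The plain weights already sum to one; the renormalization keeps that property after the conductivity scaling.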
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne, and B. Wollenberg, which gives the unique advantage of accurately allocating transmission network usage, is then discussed. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. 
The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
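The interpolation-based scheduling of trim vectors and state-space matrices described above can be sketched in one dimension; the scalar operating-point parameterization below is an assumption made purely for illustration:

```python
import numpy as np

def interp_matrices(op_points, matrices, op):
    """Linearly interpolate a scheduled family of trim vectors or state-space
    matrices at an engine operating point (1-D schedule assumed here for
    illustration; a real schedule may be multidimensional)."""
    op_points = np.asarray(op_points, dtype=float)
    # locate the bracketing schedule points, clamping at the schedule ends
    i = int(np.clip(np.searchsorted(op_points, op) - 1, 0, len(op_points) - 2))
    t = (op - op_points[i]) / (op_points[i + 1] - op_points[i])
    return (1 - t) * np.asarray(matrices[i]) + t * np.asarray(matrices[i + 1])
```

Because multiple trim vectors and system matrices share the same schedule, the bracketing indices and weight t can be computed once and reused, which is the computational-efficiency point made above.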
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.
Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D
2011-08-01
The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
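For reference, the two traditional HVL estimators named above, semilogarithmic (exponential) and linear interpolation between two transmission measurements, can be sketched as follows (the Lambert W interpolant itself is not reproduced here):

```python
import numpy as np

def hvl_semilog(x1, t1, x2, t2, target=0.5):
    """Thickness giving `target` transmission, assuming exponential decay
    t = t1 * exp(-mu * (x - x1)) between the two measured points."""
    mu = np.log(t1 / t2) / (x2 - x1)
    return x1 + np.log(t1 / target) / mu

def hvl_linear(x1, t1, x2, t2, target=0.5):
    """Same estimate by straight-line interpolation of transmission."""
    return x1 + (target - t1) * (x2 - x1) / (t2 - t1)
```

For a monoenergetic (exactly exponential) beam the semilogarithmic form is exact, while linear interpolation is not; for polyenergetic beams both are approximations, which is the gap the Lambert W interpolation addresses.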
Daily air temperature interpolated at high spatial resolution over a large mountainous region
Dodson, R.; Marks, D.
1997-01-01
Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830,000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
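A minimal sketch of the LLRA method described above: station temperatures are reduced to sea level with a constant lapse rate, interpolated by inverse squared distance, and mapped back to the DEM elevation. The array shapes and the metre-based lapse rate value are assumptions for illustration:

```python
import numpy as np

def llra_interpolate(st_xy, st_elev, st_temp, grid_xy, grid_elev, lapse=6.5e-3):
    """Linear lapse rate adjustment (LLRA).
    st_xy: (n, 2) station coordinates; st_elev, st_temp: (n,) metres, deg C.
    grid_xy: (m, 2) grid-cell coordinates; grid_elev: (m,) DEM elevations.
    lapse: constant lapse rate in deg C per metre (illustrative value)."""
    # reduce station temperatures to sea level
    sea_level = st_temp + lapse * st_elev
    # inverse-squared-distance interpolation to the grid cells
    d2 = ((grid_xy[:, None, :] - st_xy[None, :, :]) ** 2).sum(-1)
    w = 1.0 / np.maximum(d2, 1e-12)
    t0 = (w * sea_level).sum(1) / w.sum(1)
    # map back to the DEM elevation surface
    return t0 - lapse * grid_elev
```

The NSA variant would replace the constant-lapse-rate reduction with the hydrostatic and potential temperature equations; the interpolation step is the same.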
Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua
2015-01-01
A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison with other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180
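The RBF interpolation ingredient of the scheme above (used to approximate the particular solution) can be sketched with a Gaussian kernel; the kernel choice and shape parameter are assumptions, and the MFS and operator-splitting steps are not reproduced here:

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=1.0):
    """Gaussian RBF interpolation: solve for weights at the centers,
    then evaluate the interpolant at the query points."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    # interpolation matrix over the centers and weight solve
    d_cc = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    w = np.linalg.solve(phi(d_cc), values)
    # evaluation at the query points
    d_qc = np.linalg.norm(query[:, None] - centers[None, :], axis=-1)
    return phi(d_qc) @ w
```

By construction the interpolant reproduces the prescribed values at the centers, which is the property the scheme relies on when fitting the NHGE at the interpolation points.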
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, M; Baek, J
2016-06-15
Purpose: To investigate the slice-direction dependent detectability in cone beam CT images with anatomical background. Methods: We generated 3D anatomical background images using a breast anatomy model. To generate the 3D breast anatomy, we filtered 3D Gaussian noise with a square root of 1/f³ and then assigned the attenuation coefficients of glandular (0.8 cm⁻¹) and adipose (0.46 cm⁻¹) tissues based on voxel values. Projections were acquired by forward projection, and quantum noise was added to the projection data. The projection data were reconstructed by the FDK algorithm. We compared the detectability of a 3 mm spherical signal in the images reconstructed using four different backprojection methods: a Hanning-weighted ramp filter with linear interpolation (RECON1), a Hanning-weighted ramp filter with Fourier interpolation (RECON2), a ramp filter with linear interpolation (RECON3), and a ramp filter with Fourier interpolation (RECON4). We computed the task SNR of the spherical signal in the transverse and longitudinal planes using a channelized Hotelling observer with Laguerre-Gauss channels. Results: The transverse plane has similar task SNR values for the different backprojection methods, while the longitudinal plane has a maximum task SNR value in RECON1. For all backprojection methods, the longitudinal plane has a higher task SNR than the transverse plane. Conclusion: In this work, we investigated detectability for different slice directions in cone beam CT images with anatomical background. The longitudinal plane has a higher task SNR than the transverse plane, and backprojection with a Hanning-weighted ramp filter and linear interpolation (i.e., RECON1) produced the highest task SNR among the four backprojection methods.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Programs (IITP-2015-R0346-15-1008) supervised by the IITP (Institute for Information & Communications Technology Promotion), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the MSIP (2015R1C1A1A01052268), and the framework of the international cooperation program managed by the NRF (NRF-2015K2A1A2067635).
Compact cell-centered discretization stencils at fine-coarse block structured grid interfaces
NASA Astrophysics Data System (ADS)
Pletzer, Alexander; Jamroz, Ben; Crockett, Robert; Sides, Scott
2014-03-01
Different strategies for coupling fine-coarse grid patches are explored in the context of the adaptive mesh refinement (AMR) method. We show that applying linear interpolation to fill in the fine grid ghost values can produce a finite volume stencil of comparable accuracy to quadratic interpolation, provided the cell volumes are adjusted. The volume of fine cells expands, whereas the volume of neighboring coarse cells contracts. The amount by which the cells contract or expand depends on whether the interface is a face, an edge, or a corner. It is shown that quadratic or better interpolation is required when the conductivity is spatially varying or anisotropic, when the refinement ratio is other than two, or when the fine-coarse interface is concave.
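The ghost-fill step described in the abstract above can be sketched in one dimension: fine-grid ghost values are linearly interpolated between two coarse cell-centre values at the fine-coarse interface. This is a hypothetical minimal setting; the cell-volume adjustment discussed in the paper is not reproduced here.

```python
def fine_ghost_values(c0, c1, ratio=2):
    # Linearly interpolate fine-grid ghost values between two coarse
    # cell-centre values c0 and c1 at a 1-D fine-coarse interface.
    # For refinement ratio r, fine ghost cell centres sit at fractional
    # offsets (2k + 1) / (2r) between the coarse cell centres.
    ghosts = []
    for k in range(ratio):
        t = (2 * k + 1) / (2 * ratio)
        ghosts.append((1 - t) * c0 + t * c1)
    return ghosts
```

Because the fill is linear, it is exact for linearly varying data, which is the property the volume-adjusted stencil exploits.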
Chirp Scaling Algorithms for SAR Processing
NASA Technical Reports Server (NTRS)
Jin, M.; Cheng, T.; Chen, M.
1993-01-01
The chirp scaling SAR processing algorithm is both accurate and efficient. Successful implementation requires proper selection of the interval of output samples, which is a function of the chirp interval, signal sampling rate, and signal bandwidth. Analysis indicates that for both airborne and spaceborne SAR applications in the slant range domain, a linear chirp scaling is sufficient. To perform a nonlinear interpolation, such as producing ground range SAR images, one can use the nonlinear chirp scaling interpolator presented in this paper.
A new multigrid formulation for high order finite difference methods on summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Weinerfelt, Per; Nordström, Jan
2018-04-01
Multigrid schemes for high order finite difference methods on summation-by-parts form are studied by comparing the effect of different interpolation operators. By using the standard linear prolongation and restriction operators, the Galerkin condition leads to inaccurate coarse grid discretizations. In this paper, an alternative class of interpolation operators that bypass this issue and preserve the summation-by-parts property on each grid level is considered. Clear improvements of the convergence rate for relevant model problems are achieved.
ERIC Educational Resources Information Center
Smith, Michael
1990-01-01
Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)
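The three solution techniques named in the abstract above (the Newton-Raphson formula, linear interpolation, and interval bisection) can be sketched outside a spreadsheet as plain iterations. The test function and tolerances below are illustrative choices, not from the article.

```python
def bisection(f, a, b, tol=1e-10):
    # Interval bisection: repeatedly halve a bracketing interval [a, b].
    fa, fb = f(a), f(b)
    assert fa * fb < 0  # the root must be bracketed
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    # Newton-Raphson: x <- x - f(x)/f'(x).
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def false_position(f, a, b, tol=1e-10, max_iter=200):
    # Linear interpolation (false position): replace the bisection
    # midpoint by the root of the chord through the bracket endpoints.
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

All three find a root of f(x) = x² − 2 on [1, 2]; Newton-Raphson converges fastest, bisection most robustly.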
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Masakazu, E-mail: m.nakanishi@aist.go.jp
Responses of a superconducting quantum interference device (SQUID) are periodically dependent on the magnetic flux coupling to its superconducting ring, and the period is a flux quantum (Φ{sub o} = h/2e, where h and e, respectively, denote Planck's constant and the elementary charge). Using this periodicity, we previously proposed a first-generation digital to analog converter using a SQUID (SQUID DAC) with linear current output, the interval of which corresponded to Φ{sub o}. A modification that increases the dynamic range by interpolating within each interval is reported. The linearity of the interpolation is also based on the quantum periodicity. A SQUID DAC with a dynamic range of about 1.4 × 10{sup 7} was created as a demonstration.
Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary
Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen
2017-01-01
Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
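As a minimal sketch of the basic estimator (not the accelerated C++ implementation described above), simple kriging in one dimension reduces to solving one small linear system for the weights. The Gaussian covariance model and its sill and length-scale parameters here are illustrative assumptions.

```python
import numpy as np

def simple_kriging(xs, ys, xq, sill=1.0, length=1.0):
    # Simple kriging (zero-mean assumption) with an assumed Gaussian
    # covariance model. Solves K w = k for the kriging weights, then
    # returns the weighted combination of the observed values.
    cov = lambda d: sill * np.exp(-(d / length) ** 2)
    K = cov(np.abs(xs[:, None] - xs[None, :]))  # data-data covariances
    k = cov(np.abs(xs - xq))                    # data-query covariances
    w = np.linalg.solve(K, k)                   # kriging weights
    return w @ ys
```

The dense solve here is the O(n³) bottleneck that the abstract's sparse SYMMLQ solver, covariance tapering, and FMM techniques are designed to avoid. Note that at a data location the weights reduce to a unit vector, so kriging honors the observations exactly.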
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook
1988-01-01
A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables were interpolated using complete quadratic shape functions and the pressure was interpolated using linear shape functions. For the two-dimensional case, the pressure is defined on a triangular element which is contained inside the complete biquadratic element for the velocity variables; for the three-dimensional case, the pressure is defined on a tetrahedral element which is again contained inside the complete tri-quadratic element. Thus the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow for Reynolds numbers from 400 through 10,000; a laminar backward-facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with those of finite difference methods as well as available experimental data. A finite element computer program for incompressible, laminar flows is presented.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
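The Monte Carlo propagation step described above can be sketched as follows. The model, input means, and SD values below are placeholders, not the PET model or the Columbia River Basin data; error correlations among inputs are also omitted for brevity.

```python
import random
import statistics

def propagate(model, means, sds, n=2000, seed=0):
    # Monte Carlo propagation of interpolation (kriging) uncertainty:
    # perturb each interpolated input by a Gaussian draw with its kriging
    # SD, run the model, and summarize the output spread as mean and CV.
    rng = random.Random(seed)
    outs = [model(*(m + rng.gauss(0.0, s) for m, s in zip(means, sds)))
            for _ in range(n)]
    mu = statistics.mean(outs)
    cv = statistics.stdev(outs) / mu  # coefficient of variation
    return mu, cv
```

With zero SDs the CV collapses to zero; with a unit SD on the first input of a toy linear model, the CV reflects the propagated spread.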
Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.
2011-01-01
Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation’s effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data, and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolation all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
Zhu, Mengchen; Salcudean, Septimiu E
2011-07-01
In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
Cai, Wenyu; Zhang, Meiyan; Zheng, Yahong Rosa
2017-01-01
This paper investigates the task assignment and path planning problem for multiple AUVs in three dimensional (3D) underwater wireless sensor networks where nonholonomic motion constraints of underwater AUVs in 3D space are considered. The multi-target task assignment and path planning problem is modeled by the Multiple Traveling Sales Person (MTSP) problem and the Genetic Algorithm (GA) is used to solve the MTSP problem with Euclidean distance as the cost function and the Tour Hop Balance (THB) or Tour Length Balance (TLB) constraints as the stop criterion. The resulting tour sequences are mapped to 2D Dubins curves in the X−Y plane, and then interpolated linearly to obtain the Z coordinates. We demonstrate that the linear interpolation fails to achieve G1 continuity in the 3D Dubins path for multiple targets. Therefore, the interpolated 3D Dubins curves are checked against the AUV dynamics constraint and the ones satisfying the constraint are accepted to finalize the 3D Dubins curve selection. Simulation results demonstrate that the integration of the 3D Dubins curve with the MTSP model is successful and effective for solving the 3D target assignment and path planning problem. PMID:28696377
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
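A minimal sketch of the best-performing approach above, intensity linear interpolation of the fiducial edges, is shown below for a 1-D intensity profile. The half-maximum threshold and the triangular test profile are illustrative assumptions, not parameters from the paper.

```python
def subpixel_centre(profile, frac=0.5):
    # Locate a fiducial centre along a 1-D intensity profile with
    # sub-pixel accuracy: linearly interpolate the positions where the
    # intensity crosses frac * peak on the rising and falling edges,
    # then take the midpoint of those two crossings.
    peak = max(profile)
    thresh = frac * peak
    # rising edge: first sample at or above the threshold
    i = next(i for i in range(1, len(profile)) if profile[i] >= thresh)
    left = (i - 1) + (thresh - profile[i - 1]) / (profile[i] - profile[i - 1])
    # falling edge: last sample at or above the threshold
    j = next(j for j in range(len(profile) - 2, -1, -1) if profile[j] >= thresh)
    right = j + (profile[j] - thresh) / (profile[j] - profile[j + 1])
    return 0.5 * (left + right)
```

Because the edge positions are interpolated rather than snapped to pixel centres, the estimated centre can land between samples, which is the essence of sub-pixel localisation.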
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of data accuracy and CPU time for CGP-based versus non-CGP computations is presented.
Sensor placement in nuclear reactors based on the generalized empirical interpolation method
NASA Astrophysics Data System (ADS)
Argaud, J.-P.; Bouriquet, B.; de Caso, F.; Gong, H.; Maday, Y.; Mula, O.
2018-06-01
In this paper, we apply the so-called generalized empirical interpolation method (GEIM) to address the problem of sensor placement in nuclear reactors. This task is challenging due to the accumulation of a number of difficulties like the complexity of the underlying physics and the constraints in the admissible sensor locations and their number. As a result, the placement, still today, strongly relies on the know-how and experience of engineers from different areas of expertise. The present methodology contributes to making this process become more systematic and, in turn, simplify and accelerate the procedure.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
Chang, Nai-Fu; Chiang, Cheng-Yi; Chen, Tung-Chien; Chen, Liang-Gee
2011-01-01
On-chip implementation of the Hilbert-Huang transform (HHT) has great potential for analyzing non-linear and non-stationary biomedical signals on wearable or implantable sensors for real-time applications. Cubic spline interpolation (CSI) consumes the most computation in HHT and is the key component of an HHT processor. Traditionally, CSI in HHT is performed after the collection of a large window of signals, and the long latency violates the real-time requirement of the applications. In this work, we propose to keep processing the incoming signals on-line with small, overlapped data windows without sacrificing the interpolation accuracy. 58% of the multiplications and 73% of the divisions in CSI are saved through data reuse between the windows.
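The CSI core that dominates the computation above is a tridiagonal solve for the spline's second derivatives. A minimal natural-cubic-spline sketch follows; the paper's overlapped-window scheduling and data reuse are not reproduced here.

```python
def natural_cubic_spline(xs, ys):
    # Natural cubic spline through (xs, ys): solve the tridiagonal system
    # for the second derivatives m_i (Thomas algorithm), then evaluate
    # the piecewise cubic. This is the numerical core of CSI.
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = [0.0] * (n + 1); b = [1.0] * (n + 1)
    c = [0.0] * (n + 1); d = [0.0] * (n + 1)
    for i in range(1, n):  # interior second-derivative equations
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):  # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    m[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):  # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]
    def eval_at(x):
        i = next((k for k in range(n) if x <= xs[k + 1]), n - 1)
        t = x - xs[i]
        lin = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0
        return (ys[i] + t * lin + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))
    return eval_at
```

In an on-line setting, the coefficients computed for one small window can be partially reused for the next overlapped window, which is where the reported multiplication and division savings come from.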
Effects of empty bins on image upscaling in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2017-07-01
This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme has been developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods have been used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods have been quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods mentioned.
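For reference, plain bilinear upscaling, the kind of linear interpolation used in the upscaling experiments above, can be sketched as follows. Integer scale factors and border clamping are simplifying assumptions, not details from the paper.

```python
def bilinear_upscale(img, sx, sy):
    # Upscale a 2-D grayscale image (list of rows) by integer factors
    # sx, sy using bilinear interpolation; samples past the last row or
    # column clamp to the border pixel.
    h, w = len(img), len(img[0])
    H, W = h * sy, w * sx
    out = [[0.0] * W for _ in range(H)]
    for Y in range(H):
        for X in range(W):
            y = min(Y / sy, h - 1)
            x = min(X / sx, w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[Y][X] = ((1 - dy) * (1 - dx) * img[y0][x0]
                         + (1 - dy) * dx * img[y0][x1]
                         + dy * (1 - dx) * img[y1][x0]
                         + dy * dx * img[y1][x1])
    return out
```

New pixel values are weighted averages of the four nearest source pixels, which is why histogram gaps (empty bins) in the input directly shape the interpolated output.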
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard
2014-06-01
The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
Li, Longxiang; Gong, Jianhua; Zhou, Jieping
2014-01-01
Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
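The final substitution step described above can be sketched as IDW with a pluggable distance function, so that a precomputed shortest wind-field path distance can stand in for the Euclidean one. The Gaussian-dispersion cost surface and path computation themselves are not reproduced; `path_dist` in the comment is a hypothetical lookup.

```python
import math

def idw(points, values, query, dist, power=2.0):
    # Inverse Distance Weighting with a pluggable distance function.
    # Passing a shortest wind-field path distance instead of the
    # Euclidean distance yields the wind-aware interpolation.
    num = den = 0.0
    for p, v in zip(points, values):
        d = dist(p, query)
        if d == 0.0:
            return v  # exact hit at a monitoring site
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Euclidean baseline; a shortest wind-field path distance table could be
# substituted via dist=lambda p, q: path_dist[p][q] (hypothetical).
euclid = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
```

Only the distance function changes between the standard and the wind-field variant, which is what makes the substitution straightforward.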
An integral conservative gridding--algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
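The integral-conserving idea above can be sketched by interpolating the cumulative integral of the binned data and differencing it at the new bin edges. Linear interpolation of the cumulative curve is used below for brevity; the algorithm in the paper uses a parametrized Hermitian curve instead.

```python
def rebin_conserving(edges_in, counts, edges_out):
    # Integral-conserving re-binning: build the cumulative integral of
    # the histogram, interpolate it at the new bin edges, and difference.
    # The total integral is conserved by construction.
    cum = [0.0]
    for c in counts:
        cum.append(cum[-1] + c)
    def cum_at(x):
        if x <= edges_in[0]:
            return 0.0
        if x >= edges_in[-1]:
            return cum[-1]
        for i in range(len(counts)):
            if x <= edges_in[i + 1]:
                t = (x - edges_in[i]) / (edges_in[i + 1] - edges_in[i])
                return cum[i] + t * counts[i]
    C = [cum_at(x) for x in edges_out]
    return [C[i + 1] - C[i] for i in range(len(C) - 1)]
```

Replacing the linear interpolant of the cumulative curve with a tunable Hermitian one is what lets the paper's algorithm control overshoot and undershoot while keeping the conservation property.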
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method.
It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
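The kernel-weighting idea described above can be illustrated with a toy sketch (a hypothetical simplification for a lagged correlation, not the authors' estimator): every sample pair is weighted by a Gaussian function of how far its time offset is from the target lag, so no resampling onto a regular grid is required. The bandwidth `h` and the standardisation are assumptions of this sketch.

```python
import math

def gk_correlation(t1, x1, t2, x2, lag=0.0, h=1.0):
    """Gaussian-kernel estimate of the lagged correlation between two
    irregularly sampled series (toy sketch). Each pair (i, j) gets the
    weight exp(-d^2 / (2 h^2)) with d = (t2[j] - t1[i]) - lag."""
    def stats(x):
        m = sum(x) / len(x)
        v = sum((xi - m) ** 2 for xi in x) / len(x)
        return m, math.sqrt(v)

    m1, s1 = stats(x1)
    m2, s2 = stats(x2)
    num = den = 0.0
    for ti, xi in zip(t1, x1):
        for tj, xj in zip(t2, x2):
            d = (tj - ti) - lag
            w = math.exp(-d * d / (2.0 * h * h))
            num += w * (xi - m1) * (xj - m2)
            den += w
    return num / (den * s1 * s2)
```

With a narrow bandwidth, a series correlated against itself at zero lag returns a coefficient of one, since only near-simultaneous pairs carry weight.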
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in a broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within the parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model, with a more than 12× reduction in the number of states relative to the original model, is able to accurately predict the system response across all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced-order models exhibit smooth pole transitions and continuously varying gains along a set of prescribed flight conditions, which verifies the consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that is not defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging algorithm produces an unbiased prediction, as well as the ability to calculate the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files. This is due to the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor.
Comparisons between kriging and other standard interpolation methods demonstrated more accurate estimates in sparser data files.
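The O(1) neighbour lookup that a regular grid affords, one of the simplifications exploited above, can be sketched as follows. Bilinear interpolation stands in for the paper's directional kriging, which is not reproduced here; the row-major grid layout, origin `(x0, y0)` and spacings `dx, dy` are assumptions of this sketch.

```python
def bilinear(grid, x0, y0, dx, dy, x, y):
    """Bilinear interpolation in a regularly spaced grid file.
    grid[i][j] holds the value at (x0 + j*dx, y0 + i*dy).
    Regular spacing makes locating the four surrounding nodes
    pure arithmetic -- no neighbour search is needed.
    Interior queries only; edge clamping is omitted for brevity."""
    j = int((x - x0) // dx)
    i = int((y - y0) // dy)
    tx = (x - (x0 + j * dx)) / dx   # fractional position in the cell
    ty = (y - (y0 + i * dy)) / dy
    v00 = grid[i][j]
    v01 = grid[i][j + 1]
    v10 = grid[i + 1][j]
    v11 = grid[i + 1][j + 1]
    return ((1 - tx) * (1 - ty) * v00 + tx * (1 - ty) * v01
            + (1 - tx) * ty * v10 + tx * ty * v11)
```

The same O(1) cell-location arithmetic is what lets the paper's kriging variant find its neighbourhood points and their pairwise distances cheaply.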
NASA Astrophysics Data System (ADS)
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies or cities in Central Java Province are very vulnerable to landslide disaster. This is also supported by data from the 2013 Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia), which suggests that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and map rainfall distribution. We propose our GIS architectural application of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” (TES) and “Spatial Interpolation” as our main scientific contribution. This research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conduct our research in 19 subdistricts of the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed into a GIS application framework for rainfall distribution in various applied fields. The comparison between the TES and IDW methods shows that, relative to time series prediction, spatial interpolation yields values closer to the actual data. Spatial interpolation is closer to the actual data because computed values are based on the rainfall data of the nearest locations, i.e. the neighbours of the sample values.
However, IDW’s main weakness is that some areas might exhibit a rainfall value of 0. These zeros in the spatial interpolation are mainly caused by the absence of rainfall data at the nearest sample point, or by distances so large that the resulting weights become negligible.
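The IDW scheme and the zero-value weakness noted above can be sketched as follows (a minimal illustration with an assumed cutoff radius, not the study's implementation):

```python
def idw(points, values, x, y, power=2.0, radius=None):
    """Inverse Distance Weighting (sketch). Returns 0.0 when no sample
    lies within `radius` -- the weakness noted in the abstract."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        if radius is not None and d2 > radius * radius:
            continue                      # sample too far away: no weight
        w = 1.0 / d2 ** (power / 2.0)     # weight falls off as 1/d^power
        num += w * v
        den += w
    return num / den if den > 0.0 else 0.0
```

A query point equidistant from two samples returns their mean; a query point with no sample inside the cutoff returns 0.0, exactly the artifact the abstract describes.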
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging were used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
Interpolation by new B-splines on a four directional mesh of the plane
NASA Astrophysics Data System (ADS)
Nouisser, O.; Sbibih, D.
2004-01-01
In this paper we construct new simple and composed B-splines on the uniform four directional mesh of the plane, in order to improve the approximation order of B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in , and we prove some results concerning the linear independence of the family . Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = N_p N_n / (N_p + N_n)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
International Geomagnetic Reference Field: the third generation.
Peddie, N.W.
1982-01-01
In August 1981 the International Association of Geomagnetism and Aeronomy revised the International Geomagnetic Reference Field (IGRF). It is the second revision since the inception of the IGRF in 1968. The revision extends the earlier series of IGRF models from 1980 to 1985, introduces a new series of definitive models for 1965-1976, and defines a provisional reference field for 1975-1980. The revision consists of: 1) a model of the main geomagnetic field at 1980.0 that is not continuous with the earlier series of IGRF models, together with a forecast model of the secular variation of the main field during 1980-1985; 2) definitive models of the main field at 1965.0, 1970.0, and 1975.0, with linear interpolation of the model coefficients specified for intervening dates; and 3) a provisional reference field for 1975-1980, defined as the linear interpolation of the 1975 and 1980 main-field models.
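The convention of linearly interpolating model coefficients between definitive epochs amounts to the following sketch (the coefficient values shown are illustrative placeholders, not actual IGRF Gauss coefficients):

```python
def interp_coeffs(epoch0, coeffs0, epoch1, coeffs1, t):
    """Linear interpolation of model coefficients between two
    definitive epochs, as the IGRF specifies for intervening dates."""
    f = (t - epoch0) / (epoch1 - epoch0)   # fraction of the way to epoch1
    return [c0 + f * (c1 - c0) for c0, c1 in zip(coeffs0, coeffs1)]
```

Midway between two epochs, each coefficient is simply the average of its two definitive values.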
Modeling of particle interactions in magnetorheological elastomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biller, A. M., E-mail: kam@icmm.ru; Stolbov, O. V., E-mail: oleg100@gmail.com; Raikher, Yu. L., E-mail: raikher@icmm.ru
2014-09-21
The interaction between two particles made of an isotropic linearly polarizable magnetic material and embedded in an elastomer matrix is studied. In this case, when an external field is imposed, the magnetic attraction of the particles, contrary to point dipoles, is almost wraparound. The exact solution of the magnetic problem in the linear polarization case, although existing, is not practical; to circumvent its use, an interpolation formula is proposed. One more interpolation expression is developed for the resistance of the elastic matrix to the field-induced particle displacements. Minimization of the total energy of the pair reveals its configurational bistability in a certain field range. One of the possible equilibrium states corresponds to the particles dwelling at a distance, the other to their collapse into a tight dimer. This mesoscopic bistability causes magnetomechanical hysteresis, which has important implications for the macroscopic behavior of magnetorheological elastomers.
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a wide range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCT are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
GEOSTATISTICAL INTERPOLATION OF CHEMICAL CONCENTRATION. (R825689C037)
Measurements of contaminant concentration at a hazardous waste site typically vary over many orders of magnitude and have highly skewed distributions. This work presents a practical methodology for the estimation of solute concentration contour maps and volume...
NASA Astrophysics Data System (ADS)
Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu
2009-03-01
A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. LCAF has been implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of phase error calculation by using an adaptively equalized partial response (PR) signal. Coefficient update of an asynchronous sampled adaptive FIR filter with a least-mean-square (LMS) algorithm has been constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. Then, we have designed the read channel system of the ITR PLL with an LCAF model on the FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabytes (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with a sufficient LMS adaptation stability.
Computation of three-dimensional nozzle-exhaust flow fields with the GIM code
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Anderson, P. G.
1978-01-01
A methodology is introduced for constructing numerical analogs of the partial differential equations of continuum mechanics. A general formulation is provided which permits classical finite element and many of the finite difference methods to be derived directly. The approach, termed the General Interpolants Method (GIM), combines the best features of finite element and finite difference methods. A quasi-variational procedure is used to formulate the element equations, to introduce boundary conditions into the method, and to provide a natural assembly sequence. A derivation is given in terms of general interpolation functions from this procedure. Example computations for transonic and supersonic flows in two and three dimensions are given to illustrate the utility of GIM. A three-dimensional nozzle-exhaust flow field is solved, including interaction with the freestream and a coupled treatment of the shear layer. Potential applications of the GIM code to a variety of computational fluid dynamics problems are then discussed in terms of existing capability or by extension of the methodology.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable to general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
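One common SFC choice for such reorderings is the Z-order (Morton) curve, sketched below. This is an illustration of the bit-interleaving idea only; the paper's specific curve and implementation are not reproduced here.

```python
def morton2d(i, j, bits=16):
    """Interleave the bits of cell indices (i, j) into a single
    Z-order (Morton) key. Sorting cells by this key gives a
    space-filling-curve ordering with good spatial locality."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b)        # bits of i -> even slots
        key |= ((j >> b) & 1) << (2 * b + 1)    # bits of j -> odd slots
    return key
```

Sorting the mesh cells once by their Morton keys is the O(N log N) step; partitioning then reduces to cutting the sorted sequence into contiguous chunks in a single O(N) pass.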
2004-02-11
the general circulation of the middle atmosphere, Philos. Trans. R. Soc. London, Ser. A, 323, 693–705. Anton, H. (2000), Elementary Linear Algebra ... Because the saturated radiances may depend slightly on tangent height as the limb path length decreases, a linear trend (described by parameters a and b ... track days and interpolated onto the same limb-track orbits. The color bar scale for radiance variance is linear. (b) Digital elevations of northern
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were the determination of the extrapolation performance of an ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and prediction error in the presence of values that were out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of the most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. The available dataset, which contained 1912 monitoring records for 17 water quality parameters, was split into a "regular" subset that contains normally distributed and low-variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82), but it was not robust in extrapolation (R² = 0.63). The analysis of the extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, out-of-training-range values of the inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content.
It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD.
An operator calculus for surface and volume modeling
NASA Technical Reports Server (NTRS)
Gordon, W. J.
1984-01-01
The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
Tensor scale: An analytic approach with efficient computation and applications
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. Also, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Also, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
Goovaerts, P; Albuquerque, Teresa; Antunes, Margarida
2016-11-01
This paper describes a multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration, with an application to an abandoned sedimentary gold mining region in Portugal. The main challenge was the existence of only a dozen gold measurements confined to the grounds of the old gold mines, which precluded the application of traditional interpolation techniques, such as cokriging. The analysis could, however, capitalize on 376 stream sediment samples that were analyzed for twenty-two elements. Gold (Au) was first predicted at all 376 locations using linear regression (R² = 0.798) and four metals (Fe, As, Sn and W), which are known to be mostly associated with the local gold's paragenesis. One hundred realizations of the spatial distribution of gold content were generated using sequential indicator simulation and a soft indicator coding of regression estimates, to supplement the hard indicator coding of gold measurements. Each simulated map then underwent a local cluster analysis to identify significant aggregates of low or high values. The one hundred classified maps were processed to derive the most likely classification of each simulated node and the associated probability of occurrence. Examining the distribution of the hot-spots and cold-spots reveals a clear enrichment in Au along the Erges River downstream from the old sedimentary mineralization.
NASA Technical Reports Server (NTRS)
Schanzer, Dena; Staenz, Karl
1992-01-01
An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set acquired over Canal Flats, B.C., on 14 Aug. 1990, was used for the purpose of developing methodologies for surface reflectance retrieval using the 5S atmospheric code. A scene of Rogers Dry Lake, California (23 Jul. 1990), acquired within three weeks of the Canal Flats scene, was used as a potential reference for radiometric calibration purposes and for comparison with other studies using primarily LOWTRAN7. Previous attempts at surface reflectance retrieval indicated that reflectance values in the gaseous absorption bands had the poorest accuracy. Modifications to 5S to use 1 nm step size, in order to make fuller use of the 20 cm(sup -1) resolution of the gaseous absorption data, resulted in some improvement in the accuracy of the retrieved surface reflectance. Estimates of precipitable water vapor using non-linear least squares regression and simple ratioing techniques such as the CIBR (Continuum Interpolated Band Ratio) technique or the narrow/wide technique, which relate ratios of combinations of bands to precipitable water vapor through calibration curves, were found to vary widely. The estimates depended on the bands used for the estimation; none provided entirely satisfactory surface reflectance curves.
Giansanti, Daniele
2008-07-01
A wearable device for skin-contact thermography [Giansanti D, Maccioni G. Development and testing of a wearable integrated thermometer sensor for skin contact thermography. Med Eng Phys 2006 [ahead of print
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer.
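As a baseline for what determining the Fermi level involves, here is a simple serial bisection sketch: the chemical potential is adjusted until the summed Fermi-Dirac occupations match the electron count. The paper's interpolation-based parallel algorithm is not reproduced; `beta` is an assumed inverse temperature and the orbital energies are placeholders.

```python
import math

def fermi_level(energies, n_electrons, beta=100.0, tol=1e-10):
    """Find mu such that sum_i f(e_i - mu) = n_electrons, where
    f is the Fermi-Dirac occupation. Simple serial bisection."""
    def occupation(mu):
        # Clamp the exponent to avoid overflow for energies far above mu.
        return sum(1.0 / (1.0 + math.exp(min(beta * (e - mu), 700.0)))
                   for e in energies)

    lo, hi = min(energies) - 10.0, max(energies) + 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if occupation(mid) < n_electrons:
            lo = mid        # too few electrons: raise the level
        else:
            hi = mid        # too many: lower it
    return 0.5 * (lo + hi)
```

For a gapped spectrum the returned level lands inside the gap between the highest occupied and lowest unoccupied energies.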
Development of a North American paleoclimate pollen-based reconstruction database application
NASA Astrophysics Data System (ADS)
Ladd, Matthew; Mosher, Steven; Viau, Andre
2013-04-01
Recent efforts in synthesizing paleoclimate records across the globe have warranted an effort to standardize the different paleoclimate archives currently available, in order to facilitate data-model comparisons and hence improve our estimates of future climate change. It is often the case that the methodology and programs make it challenging for other researchers to reproduce the results of a reconstruction; there is therefore a need to standardize paleoclimate reconstruction databases in an application specific to proxy data. Here we present a methodology using the open-source R language and North American pollen databases (e.g. NAPD, NEOTOMA), in an application that can easily be used to perform new reconstructions and quickly analyze and output/plot the data. The application was developed to easily test methodological and spatial/temporal issues that might affect the reconstruction results. It allows users to spend more time analyzing and interpreting results instead of on data management and processing. Unique features of this R program include two menu-driven modules that make the user feel at ease with the program, the ability to use different pollen sums, a choice among 70 available climate variables, substitution of an appropriate modern climate dataset, a user-friendly regional target domain, temporal resolution criteria, linear interpolation, and many other features for a thorough exploratory data analysis. The application program will be available for North American pollen-based reconstructions and will eventually be made available as a package through the CRAN repository by late 2013.
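The linear interpolation feature mentioned above, resampling an irregularly dated proxy record onto a regular age grid before reconstruction, can be sketched as follows (a generic pre-processing sketch, not the application's R code; the step size and the ascending-age assumption are illustrative):

```python
def to_regular_grid(ages, values, step):
    """Linearly interpolate a record sampled at ascending, irregular
    ages onto a regular age grid spanning the record."""
    grid = []
    a = ages[0]
    while a <= ages[-1]:
        grid.append(a)
        a += step
    out = []
    k = 0                                   # index of the bracketing interval
    for g in grid:
        while ages[k + 1] < g:
            k += 1
        f = (g - ages[k]) / (ages[k + 1] - ages[k])
        out.append(values[k] + f * (values[k + 1] - values[k]))
    return grid, out
```

A record that is linear in age is reproduced exactly at every grid node, which makes the sketch easy to check.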
Evaluation of gridding procedures for air temperature over Southern Africa
NASA Astrophysics Data System (ADS)
Eiselt, Kai-Uwe; Kaspar, Frank; Mölg, Thomas; Krähenmann, Stefan; Posada, Rafael; Riede, Jens O.
2017-06-01
Africa is considered to be highly vulnerable to climate change, yet the availability of observational data and derived products is limited. As one element of the SASSCAL initiative (Southern African Science Service Centre for Climate Change and Adaptive Land Management), a cooperation of Angola, Botswana, Namibia, Zambia, South Africa and Germany, networks of automatic weather stations have been installed or improved (http://www.sasscalweathernet.org). The increased availability of meteorological observations improves the quality of gridded products for the region. Here we compare interpolation methods for monthly minimum and maximum temperatures which were calculated from hourly measurements. Due to a lack of long-term records we focused on data ranging from September 2014 to August 2016. The best interpolation results were achieved by combining multiple linear regression (with elevation, a continentality index and latitude as predictors) with three-dimensional inverse-distance-weighted interpolation.
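As a sketch of the inverse-distance-weighting building block used in the gridding approach above: the station coordinates and residual values below are hypothetical, and the multiple-regression step is omitted, so this illustrates only the 3D IDW component, not the authors' full procedure.

```python
import math

def idw_3d(stations, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y, z, value)
    stations. Distances are computed in three dimensions so that elevation (z)
    contributes to the weighting, as in a 3D IDW residual interpolation."""
    num = 0.0
    den = 0.0
    for (x, y, z, v) in stations:
        d = math.dist((x, y, z), target)
        if d == 0.0:
            return v  # exact hit: return the observed value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical stations: (x, y, elevation, regression residual in K)
stations = [(0, 0, 100, 0.5), (10, 0, 200, -0.3), (0, 10, 150, 0.1)]
est = idw_3d(stations, (5, 5, 150))
```

A nearby station at a similar elevation dominates the weighted average, which is the point of including the third (vertical) dimension in the distance.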
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
Tomography for two-dimensional gas temperature distribution based on TDLAS
NASA Astrophysics Data System (ADS)
Luo, Can; Wang, Yunchu; Xing, Fei
2018-03-01
Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality is proportional to the ray number, and its improvement levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested and found to improve the accuracy of the reconstruction results compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual ray absorption values; according to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
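The virtual-ray idea, estimating the absorbance of an unmeasured ray from its measured neighbours, can be sketched with plain linear interpolation; the ray positions and absorbances below are made up, and the authors' cubic-spline alternative would replace this weighting step.

```python
def lerp_virtual_ray(positions, absorbances, x):
    """Linearly interpolate the absorbance of a virtual ray at position x
    from real rays measured at `positions` (sorted ascending)."""
    for (x0, a0), (x1, a1) in zip(zip(positions, absorbances),
                                  zip(positions[1:], absorbances[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * a0 + t * a1
    raise ValueError("x lies outside the measured ray fan")

# Hypothetical parallel rays at positions 0, 1, 2 (arbitrary units)
virtual = lerp_virtual_ray([0.0, 1.0, 2.0], [0.2, 0.4, 0.3], 0.5)
```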
NASA Astrophysics Data System (ADS)
Kim, Juhye; Nam, Haewon; Lee, Rena
2015-07-01
In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of the edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
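The interpolation step of a sinogram-based MAR pipeline, bridging the metal trace in each projection row, can be sketched as below. This is the generic linear-interpolation baseline the abstract compares against, with toy values, not the authors' edge-preserving algorithm.

```python
def interpolate_metal_trace(projection, metal_mask):
    """Replace metal-affected detector bins in one projection row with
    values linearly interpolated from the nearest clean bins.
    Assumes at least one clean bin exists in the row."""
    out = list(projection)
    n = len(out)
    i = 0
    while i < n:
        if not metal_mask[i]:
            i += 1
            continue
        j = i
        while j < n and metal_mask[j]:
            j += 1
        left = out[i - 1] if i > 0 else out[j]    # trace touches left edge
        right = out[j] if j < n else out[i - 1]   # trace touches right edge
        gap = j - i + 1
        for k in range(i, j):
            t = (k - i + 1) / gap
            out[k] = (1 - t) * left + t * right
        i = j
    return out

# Toy row: bins 2-3 are shadowed by metal and get bridged from bins 1 and 4
row = interpolate_metal_trace([1.0, 2.0, 9.0, 9.0, 5.0],
                              [False, False, True, True, False])
```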
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
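The numerical difficulty described above, divided differences of the exponential near a removable singularity, can be illustrated with the first divided difference. The switch-over tolerance and series length here are illustrative choices, not the paper's optimal approximation.

```python
import math

def dd_exp(x0, x1, tol=1e-5):
    """First divided difference of exp: (e^x1 - e^x0)/(x1 - x0).

    Near the removable singularity x1 ≈ x0 the direct formula suffers
    catastrophic cancellation, so a truncated Taylor series of
    (e^h - 1)/h with h = x1 - x0 is used instead."""
    h = x1 - x0
    if abs(h) > tol:
        return (math.exp(x1) - math.exp(x0)) / h
    # series: (e^h - 1)/h = 1 + h/2 + h^2/6 + h^3/24 + ...
    s, term = 1.0, 1.0
    for k in range(2, 8):
        term *= h / k
        s += term
    return math.exp(x0) * s
```

For well-separated arguments the direct quotient is used; for nearly coincident arguments the series keeps full accuracy where subtraction would lose most significant digits.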
Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat
2013-07-01
The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Šiljeg, A.; Lozić, S.; Šiljeg, S.
2014-12-01
The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; OGrady, Gregory; Cheng, Leo K; Angeli, Timothy R
2018-02-01
High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error of manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process reproduced the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulations. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart A.
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data, and considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale stream habitat dataset used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 m cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased, but was maintained at up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
NASA Astrophysics Data System (ADS)
Guglielmino, F.; Nunnari, G.; Puglisi, G.; Spata, A.
2009-04-01
We propose a new technique, based on elastic theory, to efficiently produce an estimate of three-dimensional surface displacement maps by integrating sparse Global Positioning System (GPS) measurements of deformations and Differential Interferometric Synthetic Aperture Radar (DInSAR) maps of movements of the Earth's surface. The methodologies previously reported in the literature for combining data from GPS and DInSAR surveys require two steps: first, sparse GPS measurements are interpolated in order to fill in GPS displacements on the DInSAR grid; second, the three-dimensional surface displacement maps are estimated by using a suitable optimization technique. One of the advantages of the proposed approach is that these two steps are unified. We propose a linear matrix equation which accounts for both GPS and DInSAR data and whose solution simultaneously provides the strain tensor, the displacement field and the rigid body rotation tensor throughout the entire investigated area. This linear matrix equation is solved by using Weighted Least Squares (WLS), assuring both numerical robustness and high computational efficiency. The proposed methodology was tested on both synthetic and experimental data, the latter from GPS and DInSAR measurements carried out on Mt. Etna. The goodness of the results was evaluated by using standard errors. These tests also allowed the choice of specific parameters of the algorithm to be optimised. The "open" structure of the method will allow other available data sets, such as additional interferograms or other geodetic data (e.g. levelling, tilt, etc.), to be taken into account in the near future, in order to achieve even higher accuracy.
A comparison of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Precup, Ovidiu V.; Iori, Giulia
2004-12-01
On a high-frequency scale the time series are not homogeneous; therefore standard correlation measures cannot be applied directly to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001), either linear or previous-tick, and the Pearson correlation statistic then computed. Recently, methods that can handle raw non-synchronous time series have been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.
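The two homogenisation schemes mentioned, previous-tick and linear interpolation, can be sketched on a toy tick series; the times and prices below are hypothetical.

```python
import bisect

def homogenise(times, prices, grid, method="previous"):
    """Resample an inhomogeneous tick series onto a regular grid using
    previous-tick or linear interpolation. Grid points must not precede
    the first tick."""
    out = []
    for t in grid:
        i = bisect.bisect_right(times, t) - 1  # last tick at or before t
        if method == "previous" or i == len(times) - 1:
            out.append(prices[i])
        else:
            t0, t1 = times[i], times[i + 1]
            w = (t - t0) / (t1 - t0)
            out.append((1 - w) * prices[i] + w * prices[i + 1])
    return out

# Hypothetical ticks at t = 0, 3, 10 s, resampled to a 2 s grid
prev = homogenise([0, 3, 10], [100.0, 102.0, 101.0], [0, 2, 4, 6, 8, 10])
lin = homogenise([0, 3, 10], [100.0, 102.0, 101.0], [0, 2, 4, 6, 8, 10], "linear")
```

Previous-tick holds the last observed price constant, while linear interpolation uses information from the *next* tick, which is why the two schemes can give different correlation estimates on the same data.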
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values of the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, having a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data of two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques are: w0 (left-constant) interpolation, w6 (right-constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd- and 3rd-degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), then the 6-hourly data is temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data.
A penalty point system based on coefficient-of-variation root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under a Turkish-Russian joint research grant with RFBR, and by the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
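The validation loop described above, downsample hourly data to 6-hourly, rebuild the intermediate hours, and compare with the original, can be sketched for plain linear interpolation; the signals below are synthetic stand-ins for wind speed series, not the study's data.

```python
import math

def downscale_error(hourly):
    """Keep every 6th value of an hourly series, rebuild the hours in
    between by linear interpolation, and return the RMSE against the
    original series."""
    n = len(hourly)
    rebuilt = list(hourly)
    for i in range(0, n - 6, 6):
        for h in range(1, 6):
            t = h / 6
            rebuilt[i + h] = (1 - t) * hourly[i] + t * hourly[i + 6]
    sq = [(a - b) ** 2 for a, b in zip(hourly, rebuilt)]
    return math.sqrt(sum(sq) / n)

# A smooth (linear-trend) signal is recovered almost exactly;
# a rougher oscillating signal is not.
smooth = [5 + 0.1 * t for t in range(25)]
rough = [5 + math.sin(t) for t in range(25)]
```

Running both cases shows why the ranking depends on the signal: the smoother the underlying variability relative to the 6 h gap, the less the choice of interpolant matters.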
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques are demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry and inter-grid transfer on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns in Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
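A minimal sketch of SFC-based reordering, here with a Morton (Z-order) curve as a representative example rather than the specific curve the paper uses, shows how sorting cells by curve index makes contiguous runs of the list spatially compact, which is what the single-pass partitioner exploits.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) to get the
    cell's index along a Morton (Z-order) space-filling curve."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (2 * b)
        code |= ((y >> b) & 1) << (2 * b + 1)
    return code

# Sorting cells by Morton index: nearby cells receive nearby indices,
# so equal-length slices of the sorted list form compact partitions.
cells = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
ordered = sorted(cells, key=lambda c: morton2d(*c))
```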
An Equation of State for Foamed Divinylbenzene (DVB) Based on Multi-Shock Response
NASA Astrophysics Data System (ADS)
Aslam, Tariq; Schroen, Diana; Gustavsen, Richard; Bartram, Brian
2013-06-01
The methodology for making foamed Divinylbenzene (DVB) is described. For a variety of initial densities, foamed DVB is examined through multi-shock compression and release experiments. Results from multi-shock experiments on LANL's 2-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme, utilizing total-variation-diminishing interpolation and an approximate Riemann solver, will be presented as well as the methodology of calibration. It has been previously demonstrated that a single Mie-Gruneisen fitting form can replicate foam multi-shock compression response at a variety of initial densities; such a methodology will be presented for foamed DVB.
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near-daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
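Log-linear interpolation, the baseline the kriging models are compared against, amounts to geometric interpolation between bracketing samples; the times and concentrations below are hypothetical.

```python
import math

def log_linear(t0, c0, t1, c1, t):
    """Interpolate a concentration at time t linearly on the log scale,
    i.e. geometrically between the bracketing samples (c0, c1 > 0)."""
    w = (t - t0) / (t1 - t0)
    return math.exp((1 - w) * math.log(c0) + w * math.log(c1))

# Halfway (in time) between samples taken 7 days apart: the estimate is
# the geometric, not arithmetic, mean of the two concentrations.
mid = log_linear(0, 1.0, 7, 100.0, 3.5)
```

This is why log-linear interpolation handles right-skewed concentration series gracefully: a two-orders-of-magnitude jump interpolates through intermediate orders of magnitude rather than through their arithmetic average.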
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1988-07-01
A new semiempirical method that significantly simplifies atomic mass systematics and provides a method for making mass predictions by linear interpolation is discussed in the context of the nuclear valence space. In certain regions, complicated patterns of mass systematics in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences.
Conical intersection seams in polyenes derived from their chemical composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nenov, Artur; Vivie-Riedle, Regina de
2012-08-21
The knowledge of conical intersection seams is important to predict and explain the outcome of ultrafast reactions in photochemistry and photobiology. They define the energetically low-lying reachable regions that allow for ultrafast non-radiative transitions. In complex molecules it is not straightforward to locate them. We present a systematic approach to predict conical intersection seams in multifunctionalized polyenes and their sensitivity to substituent effects. Included are seams that facilitate the photoreaction of interest as well as seams that open competing loss channels. The method is based on the extended two-electron two-orbital method [A. Nenov and R. de Vivie-Riedle, J. Chem. Phys. 135, 034304 (2011)]. It allows the low-lying regions for non-radiative transitions to be extracted; these are then divided into small linear segments. Rules of thumb are introduced to find the support points for these segments, which are then used in a linear interpolation scheme for a first estimate of the intersection seams. Quantum chemical optimization of the linearly interpolated structures yields the final energetic position. We demonstrate our method on the example of the electrocyclic isomerization of trifluoromethyl-pyrrolylfulgide.
Strategic and Tactical Technology and Methodologies for Electro-Optical Sensor Testing
1997-02-01
Performance of Ground Quartz and Flashed Opal 21... structural rigidity, inertial stability, optical fidelity, etc., should consider the later requirements for modeling and simulations which will... individual pixels. Figure 6 is the magnified section, and Fig. 7 illustrates the results of using fractal interpolation to artificially increase the
ERIC Educational Resources Information Center
Palka, Sean
2015-01-01
This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify signal-to-noise ratio (SNR) value of the scanning electron microscope (SEM) images is proposed. This technique is known as autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation function of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with the three existing techniques, which are nearest neighbourhood, first-order linear interpolation and nearest neighbourhood combined with first-order linear interpolation. It is shown that ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
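The two baseline techniques named above can be illustrated on an autocorrelation function: the noise-free zero-lag value is estimated either by reusing the lag-1 value (nearest neighbourhood) or by linearly extrapolating lags 1 and 2 back to lag 0. This sketch, with made-up ACF values and an assumed SNR definition of signal power over noise power, shows the linear variant; it is not the ACLDR model itself.

```python
def estimate_snr(acf):
    """Estimate SNR from an image autocorrelation function `acf`
    (acf[k] = value at lag k), where acf[0] carries the noise spike.
    The noise-free zero-lag value is found by first-order linear
    extrapolation from lags 1 and 2; nearest-neighbourhood estimation
    would simply reuse acf[1] instead."""
    noisefree_peak = 2 * acf[1] - acf[2]   # linear extrapolation to lag 0
    noise_power = acf[0] - noisefree_peak  # spike height above the trend
    return noisefree_peak / noise_power

# Hypothetical normalised ACF of a noisy image: the jump from lag 1
# back up to lag 0 is attributed to uncorrelated noise.
snr = estimate_snr([1.0, 0.8, 0.7])
```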
Multiple-degree-of-freedom vehicle
Borenstein, Johann
1995-01-01
A multi-degree-of-freedom vehicle employs a compliant linkage to accommodate the need for variation in the distance between drive wheels or drive systems which are independently steerable and drivable. The vehicle is provided with rotary encoders that supply signals representative of the orientation of the steering pivot associated with each such drive wheel or system, and a linear encoder which issues a signal representative of the fluctuations in the distance between the drive elements. The wheels of the vehicle are steered and driven in response to the linear encoder signal, there being provided a controller system for minimizing the fluctuations in the distance. The controller system is a software implementation of a plurality of controllers operating at the chassis level and at the vehicle level. A trajectory interpolator receives x-displacement, y-displacement, and θ-displacement signals and supplies to the vehicle-level controller trajectory signals corresponding to interpolated control signals. The x-displacement, y-displacement, and θ-displacement signals are received from a human operator via a manipulable joystick.
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
Tran, V H Huynh; Gilbert, H; David, I
2017-01-01
With the development of automatic self-feeders, repeated measurements of feed intake are becoming easier in an increasing number of species. However, the corresponding BW are not always recorded, and these missing values complicate the longitudinal analysis of the feed conversion ratio (FCR). Our aim was to evaluate the impact of missing BW data on estimations of the genetic parameters of FCR and ways to improve the estimations. On the basis of the missing BW profile in French Large White pigs (male pigs weighed weekly, females and castrated males weighed monthly), we compared two different ways of predicting missing BW, one using a Gompertz model and one using linear interpolation. For the first part of the study, we used 17,398 weekly records of BW and feed intake recorded over 16 consecutive weeks in 1,222 growing male pigs. We performed a simulation study on this data set to mimic missing BW values according to the pattern of weekly proportions of incomplete BW data in females and castrated males. The FCR was then computed for each week using observed data (obser_FCR), data with missing BW (miss_FCR), data with BW predicted using a Gompertz model (Gomp_FCR), and data with BW predicted by linear interpolation (interp_FCR). Heritability (h) was estimated, and EBVs were predicted for each repeated FCR using a random regression model. In the second part of the study, the full data set (males with their complete BW records, castrated males and females with missing BW) was analyzed using the same methods (miss_FCR, Gomp_FCR, and interp_FCR). Results of the simulation study showed that h was overestimated in the case of missing BW and that predicting BW by linear interpolation provided a more accurate estimation of h and of EBV than a Gompertz model. Over 100 simulations, the correlations between obser_EBV and interp_EBV, Gomp_EBV, and miss_EBV were 0.93 ± 0.02, 0.91 ± 0.01, and 0.79 ± 0.04, respectively. 
The heritabilities obtained with the full data set were quite similar for miss_FCR, Gomp_FCR, and interp_FCR. In conclusion, when the proportion of missing BW is high, genetic parameters of FCR are not well estimated. In French Large White pigs, in the growing period extending from d 65 to 168, prediction of missing BW using a Gompertz growth model slightly improved the estimations, but the linear interpolation improved the estimation to a greater extent. This result is due to the linear rather than sigmoidal increase in BW over the study period.
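The interp_FCR pre-processing described above amounts to filling each animal's missing weekly BW by linear interpolation over time and then recomputing the weekly ratio of feed eaten to weight gained. A minimal sketch; the function names and toy numbers are hypothetical, not taken from the study:

```python
import numpy as np

def fill_bw_linear(weeks, bw):
    """Fill missing body weights (NaN) by linear interpolation over week
    numbers, mirroring the interp_FCR pre-processing step."""
    weeks = np.asarray(weeks, dtype=float)
    bw = np.asarray(bw, dtype=float)
    known = ~np.isnan(bw)
    return np.interp(weeks, weeks[known], bw[known])

def weekly_fcr(feed_intake, bw_filled):
    """Weekly feed conversion ratio: feed eaten divided by weight gain."""
    gain = np.diff(bw_filled)
    return np.asarray(feed_intake, dtype=float)[1:] / gain
```

As the abstract notes, this works well here precisely because BW grows nearly linearly over the d 65 to 168 window; a sigmoidal trajectory would favor the Gompertz fill instead.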
NASA Astrophysics Data System (ADS)
Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.
2012-03-01
For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal (TEE) US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69° / στ = 0.045-0.048, σθ = 2.79° / στ = 0.031-0.038, σθ = 2.34° / στ = 0.023-0.026, and σθ = 1.89° / στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.
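Normalized convolution as used here can be illustrated in one dimension: both the certainty-masked data and the certainty mask itself are convolved with the applicability kernel (a Gaussian, as in the paper), and their pointwise ratio is the interpolated signal. A sketch under the simplifying assumption of a binary 0/1 certainty mask:

```python
import numpy as np

def normalized_convolution(samples, certainty, sigma):
    """1-D normalized convolution: reconstruct a signal from irregularly
    available samples (certainty 1 where measured, 0 where missing)
    using a Gaussian applicability kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    num = np.convolve(samples * certainty, g, mode="same")  # data term
    den = np.convolve(certainty, g, mode="same")            # certainty term
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / den  # NaN only where no sample lies within the kernel
```

Dividing by the convolved certainty is what distinguishes NC from plain Gaussian blurring: missing samples contribute nothing to either numerator or denominator, so they do not bias the estimate.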
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
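The baseline being accelerated is the standard ordinary Kriging predictor, which solves an (n+1)×(n+1) linear system per target point; this dense solve is the O(n³) step the incremental and recursive strategies avoid redoing from scratch. A minimal sketch with an exponential covariance model (the covariance choice and its parameters are assumptions, not from the paper):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, range_=1.0, sill=1.0):
    """Ordinary Kriging prediction at one target point xy0 using an
    exponential covariance C(d) = sill * exp(-d / range_). The Lagrange
    multiplier row enforces that the weights sum to one."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    xy0 = np.asarray(xy0, float)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-d / range_)   # source-source covariances
    A[n, n] = 0.0                            # unbiasedness constraint block
    d0 = np.linalg.norm(xy - xy0, axis=1)
    b = np.append(sill * np.exp(-d0 / range_), 1.0)
    w = np.linalg.solve(A, b)[:n]            # the O(n^3) step
    return float(w @ z)
```

Because the weights sum to one and the covariance is positive definite, the predictor reproduces constants and interpolates the source data exactly, two properties the test below checks.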
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
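The contrast drawn here between interpolation (honoring every count, outliers included) and smoothing (damping noise) can be made concrete with a Nadaraya-Watson-style Gaussian kernel smoother, used below as a simple stand-in for the Gaussian kernel smoothing method; it is a sketch, not the authors' R code:

```python
import numpy as np

def gaussian_kernel_smooth(xy, counts, grid_xy, sigma):
    """Gaussian kernel smoothing of cell-count samples onto map points.
    Unlike interpolation, the result does not honor each datum exactly,
    which damps outliers and sampling noise; sigma sets the trade-off."""
    xy = np.asarray(xy, float)
    counts = np.asarray(counts, float)
    grid_xy = np.asarray(grid_xy, float)
    d2 = ((grid_xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma**2)          # kernel weight of each sample
    return (w @ counts) / w.sum(axis=1)       # weighted average per map point
```

With a large sigma an isolated spike in the counts is averaged down rather than reproduced, which is exactly the behavior the abstract recommends for shallow density gradients.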
NASA Astrophysics Data System (ADS)
Gorji, Taha; Sertel, Elif; Tanik, Aysegul
2017-12-01
Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin has led to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties, including organic matter, phosphorus, lime and boron. Both the LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data were used for interpolation modelling, whereas the remainder were used for validation of the predicted results. The relationship between validation points and their corresponding estimated values at the same locations was examined by conducting linear regression analysis. 
Eight prediction maps generated from the two interpolation methods for the soil organic matter, phosphorus, lime and boron parameters were examined based on R² and RMSE values. The outcomes indicate that RBF performed better than LPI in predicting lime, organic matter and boron, whereas LPI showed better results for predicting phosphorus.
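The RBF-versus-LPI comparison rests on fitting each interpolator on roughly 80% of the samples and scoring RMSE on the held-out remainder. A sketch of a Gaussian RBF interpolator with such a hold-out metric; the kernel and its shape parameter `eps` are assumptions, and the study's RBF variant may differ:

```python
import numpy as np

def rbf_fit_predict(x_tr, y_tr, x_te, eps=4.0):
    """Gaussian radial basis function interpolation: solve for one weight
    per training site, then evaluate the basis expansion at test sites."""
    x_tr = np.asarray(x_tr, float)
    x_te = np.asarray(x_te, float)

    def phi(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    w = np.linalg.solve(phi(x_tr, x_tr), np.asarray(y_tr, float))
    return phi(x_te, x_tr) @ w

def rmse(y_pred, y_true):
    """Root mean square error, the map-quality criterion used in the study."""
    y_pred = np.asarray(y_pred, float)
    y_true = np.asarray(y_true, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

In practice one would fit on the 80% split and report `rmse` (and R²) on the 20% validation set, per soil property.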
On the design of decoupling controllers for advanced rotorcraft in the hover case
NASA Technical Reports Server (NTRS)
Fan, M. K. H.; Tits, A.; Barlow, J.; Tsing, N. K.; Tischler, M.; Takahashi, M.
1991-01-01
A methodology for design of helicopter control systems is proposed that can account for various types of concurrent specifications: stability, decoupling between longitudinal and lateral motions, handling qualities, and physical limitations of the swashplate motions. This is achieved by synergistic use of analytical techniques (Q-parameterization of all stabilizing controllers, transfer function interpolation) and advanced numerical optimization techniques. The methodology is used to design a controller for the UH-60 helicopter in hover. Good results are achieved for decoupling and handling quality specifications.
Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Excep...
Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V
2012-07-02
We experimentally investigate the performance of a low-complexity, non-iterative phase-noise-induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by linear interpolation between the common phase error (CPE) estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM system employing a low-cost distributed feedback (DFB) laser. Experimental results using a DFB laser with a linewidth of 2.6 MHz demonstrate 24% and 13% improvements in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced phase noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme in terms of the number of required complex multiplications is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
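The compensator's core step, linear interpolation between per-symbol CPE estimates to obtain a per-sample phase trajectory, can be sketched as follows. The symbol framing, the unwrap handling, and all names here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def interpolate_phase(cpe, samples_per_symbol):
    """Linearly interpolate per-OFDM-symbol common phase error (CPE)
    estimates, anchored at symbol centers, to per-sample phase values."""
    cpe = np.unwrap(np.asarray(cpe, dtype=float))  # avoid 2*pi jumps
    centers = np.arange(len(cpe)) * samples_per_symbol + samples_per_symbol / 2
    t = np.arange(len(cpe) * samples_per_symbol)
    return np.interp(t, centers, cpe)  # endpoints held constant outside centers

def compensate(rx, phase):
    """Derotate received time-domain samples by the interpolated phase."""
    return rx * np.exp(-1j * phase)
```

Since the interpolation is a single `np.interp` pass and the derotation one complex multiply per sample, the per-sample cost stays low, consistent with the low-complexity claim.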
NASA Astrophysics Data System (ADS)
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2017-12-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and the bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods resulted in good rainfall interpolation, while NN led to the worst results. In terms of the impact on hydrological prediction, IDW led to the most consistent streamflow predictions with the observations, according to the validation at five streamflow-gauged locations. 
The OK method performed second best, according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods need to be further explored.
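Of the four interpolators compared, IDW is the simplest to sketch: each target location receives a distance-weighted average of the gauge values. The power parameter and the exact-hit handling below are conventional choices, not taken from the paper:

```python
import numpy as np

def idw(xy, z, xy0, power=2.0):
    """Inverse distance weighting of gauge values z at locations xy onto
    target points xy0; a target coinciding with a gauge returns that
    gauge's value directly (its weight would otherwise be infinite)."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    out = np.empty(len(xy0))
    for i, p in enumerate(np.asarray(xy0, float)):
        d = np.linalg.norm(xy - p, axis=1)
        hit = d < 1e-12
        if hit.any():
            out[i] = z[hit.argmax()]          # exact match with a gauge
        else:
            w = 1.0 / d**power                # closer gauges dominate
            out[i] = w @ z / w.sum()
    return out
```

Like ordinary Kriging, IDW weights sum to one, so it never extrapolates beyond the range of the gauge values; unlike Kriging it needs no covariance model, which partly explains its robustness in this study.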
NASA Astrophysics Data System (ADS)
Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey
2015-09-01
The report presents the development of the time-boundary-element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on a matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.
Goovaerts, P.; Albuquerque, Teresa; Antunes, Margarida
2015-01-01
This paper describes a multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration, with an application to an abandoned sedimentary gold mining region in Portugal. The main challenge was the existence of only a dozen gold measurements confined to the grounds of the old gold mines, which precluded the application of traditional interpolation techniques, such as cokriging. The analysis could, however, capitalize on 376 stream sediment samples that were analyzed for twenty-two elements. Gold (Au) was first predicted at all 376 locations using linear regression (R²=0.798) on four metals (Fe, As, Sn and W), which are known to be mostly associated with the local gold paragenesis. One hundred realizations of the spatial distribution of gold content were generated using sequential indicator simulation and a soft indicator coding of regression estimates, to supplement the hard indicator coding of gold measurements. Each simulated map then underwent a local cluster analysis to identify significant aggregates of low or high values. The one hundred classified maps were processed to derive the most likely classification of each simulated node and the associated probability of occurrence. Examining the distribution of the hot spots and cold spots reveals a clear enrichment in Au along the Erges River downstream from the old sedimentary mineralization. PMID:27777638
NASA Astrophysics Data System (ADS)
Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay
2014-05-01
In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information and education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. Each sensor sends its data to a central server either via the GSM network or via satellite data transfer for redundancy. The web portal displays the sensors as a placemark layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data are harvested in batches determined by a one-hour time frame. The program uses linear interpolation to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems. As more sensors are installed, precision improves. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013, when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. 
Coupled with observations from Doppler imagery and water level sensors along the Marikina River, local officials used this information to determine that the river would overflow in a few hours. This gave them critical lead time to evacuate residents along the floodplain, and no casualties were reported after the event.
High Maneuverability Airframe: Investigation of Fin and Canard Sizing for Optimum Maneuverability
2014-09-01
overset grids (unified grid); 5) total variation diminishing discretization based on a new multidimensional interpolation framework; 6) Riemann solvers to... describes the methodology used for the simulations. 3.1.1 Solver: The double-precision solver of a commercially available code, CFD++ v12.1.1, ...
40 CFR 86.094-16 - Prohibition of defeat devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Provisions for Emission Regulations for 1977 and Later Model Year New Light-Duty Vehicles, Light-Duty Trucks and Heavy-Duty Engines, and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled... congruity across the intermediate temperature range is the linear interpolation between the CO standard...
Advanced Machine Learning Emulators of Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.
2017-12-01
Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, providing an estimation of uncertainty as well as estimations of the gradient or finite integral forms. We review the field and recent advances in emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment
Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.
2012-01-01
Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map for O3 with respect to its phytotoxic potential. For evaluation we used real-time ambient O3 concentrations measured by UV absorbance at 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven spatial interpolation approaches for the development of maps of mean vegetation-season O3 concentrations and of the AOT40F exposure index for forests. The uncertainty of the maps was assessed by cross-validation analysis, with the root mean square error (RMSE) of the map used as the criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 data on altitude with subsequent interpolation of its residuals by ordinary kriging. The relative uncertainty of the map of mean vegetation-season O3 is less than 10% using the optimal method, for both years explored, and this is a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both examined years. PMID:22566757
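The recommended approach, linear regression of O3 on altitude followed by ordinary kriging of the regression residuals, can be sketched end to end. The exponential covariance and its range below are placeholders rather than the fitted variogram from the study, and the function name is hypothetical:

```python
import numpy as np

def regression_kriging(xy, alt, obs, xy0, alt0, range_=50.0):
    """Predict obs (e.g. mean O3) at target xy0 with altitude alt0:
    altitude trend by least squares, plus ordinary kriging of residuals."""
    xy = np.asarray(xy, float)
    alt = np.asarray(alt, float)
    obs = np.asarray(obs, float)
    # 1) trend: least-squares fit obs ~ a + b * altitude
    X = np.column_stack([np.ones_like(alt), alt])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    resid = obs - X @ coef
    # 2) ordinary kriging of the residuals at the target point
    n = len(obs)
    D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = np.exp(-D / range_)
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy - np.asarray(xy0, float), axis=1)
    b = np.append(np.exp(-d0 / range_), 1.0)
    w = np.linalg.solve(A, b)[:n]
    # 3) trend at the target plus the kriged residual
    return float(coef[0] + coef[1] * alt0 + w @ resid)
```

When the observations are fully explained by altitude, the residual field vanishes and the prediction reduces to the regression line, which is what the test exercises.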
NASA Astrophysics Data System (ADS)
Feng, Wenqiang; Guo, Zhenlin; Lowengrub, John S.; Wise, Steven M.
2018-01-01
We present a mass-conservative full approximation storage (FAS) multigrid solver for cell-centered finite difference methods on block-structured, locally Cartesian grids. The algorithm is essentially a standard adaptive FAS (AFAS) scheme, but with a simple modification that comes in the form of a mass-conservative correction to the coarse-level force. This correction is facilitated by the creation of a zombie variable, analogous to a ghost variable but defined on the coarse grid and lying under the fine-grid refinement patch. We show that a number of different types of fine-level ghost-cell interpolation strategies can be used in our framework, including low-order linear interpolation. In our approach, the smoother, prolongation, and restriction operations need never be aware of the mass conservation conditions at the coarse-fine interface. To maintain global mass conservation, we need only modify the usual FAS algorithm by correcting the coarse-level force function at points adjacent to the coarse-fine interface. We demonstrate through simulations that the solver converges geometrically, at a rate that is h-independent, and we show the generality of the solver by applying it to several nonlinear, time-dependent, and multi-dimensional problems. In several tests, we show that second-order asymptotic (h → 0) convergence is observed for the discretizations, provided that (1) at least linear interpolation of the ghost variables is employed, and (2) the mass conservation corrections are applied to the coarse-level force term.
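The role of at-least-linear ghost interpolation can be seen in a 1-D sketch: ghost cells of a fine patch take values linearly interpolated from the surrounding coarse cell centers, which is exact for linear fields and is what preserves second-order accuracy at the coarse-fine interface. This simplified sketch ignores refinement-ratio bookkeeping and the conservation correction itself:

```python
import numpy as np

def fine_ghost_values(coarse, hc, x_ghost):
    """Fill fine-patch ghost cells by linear interpolation of a coarse
    cell-centered field with spacing hc; coarse cell i is centered at
    (i + 0.5) * hc. Linear fields are reproduced exactly."""
    xc = (np.arange(len(coarse)) + 0.5) * hc  # coarse cell centers
    return np.interp(x_ghost, xc, coarse)
```

Constant (piecewise-flat) ghost filling would instead inject an O(h) error at the interface, degrading the observed convergence order, which matches condition (1) in the abstract.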
Higher-dimensional Wannier functions of multiparameter Hamiltonians
NASA Astrophysics Data System (ADS)
Hanke, Jan-Philipp; Freimuth, Frank; Blügel, Stefan; Mokrousov, Yuriy
2015-05-01
When using Wannier functions to study the electronic structure of multiparameter Hamiltonians H(k, λ) carrying a dependence on crystal momentum k and an additional periodic parameter λ, one usually constructs several sets of Wannier functions for a set of values of λ. We present the concept of higher-dimensional Wannier functions (HDWFs), which provide a minimal and accurate description of the electronic structure of multiparameter Hamiltonians based on a single set of HDWFs. The obstacle of nonorthogonality of Bloch functions at different λ is overcome by introducing an auxiliary real space, which is reciprocal to the parameter λ. We derive a generalized interpolation scheme and emphasize the essential conceptual and computational simplifications in using the formalism, for instance, in the evaluation of linear response coefficients. We further implement the necessary machinery to construct HDWFs from ab initio calculations within the full-potential linearized augmented plane-wave method (FLAPW). We apply our implementation to accurately interpolate the Hamiltonian of a one-dimensional magnetic chain of Mn atoms in two important cases of λ: (i) the spin-spiral vector q and (ii) the direction of the ferromagnetic magnetization m̂. Using the generalized interpolation of the energy, we extract the corresponding values of the magnetocrystalline anisotropy energy, Heisenberg exchange constants, and spin stiffness, which compare very well with the values obtained from direct first-principles calculations. For toy models, we demonstrate that the method of HDWFs can also be used in applications such as the virtual crystal approximation, ferroelectric polarization, and spin torques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, Stuart R.
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and the accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features, and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership-class computing facility using a set of basic scaling studies. These scaling studies show that, for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
Enhanced Night Vision Via a Combination of Poisson Interpolation and Machine Learning
2006-02-01
[Fragmentary excerpt] ...of 0-255, they are mostly similar. The right plot shows a family of m(x, ψ) curves from ψ = 2 (the most linear) through ψ = 1024 (the most curved)... complicating low-light imaging. Nayar and Branzoi [04] later suggested a second variant using a DLP micromirror array to modulate the exposure, via time...
Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation
NASA Astrophysics Data System (ADS)
Skrzypacz, Piotr
2017-09-01
The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation at high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. A detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal-order interpolation for the velocity and pressure. The optimal error bounds for the velocity and pressure are verified numerically.
NASA Astrophysics Data System (ADS)
Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.
2017-12-01
The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses that translate into discharge simulation uncertainties. Distributed hydrological models are based on evolving rainfall fields at the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well-maintained rain gauge network provide a very good estimation of the total volume of each rainfall event; however, the spatial structure relies on interpolation strategies, introducing considerable uncertainty into the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better representation of the spatial structure, but with higher uncertainty in the surface precipitation intensity and volume, depending on the vertical rainfall characteristics and the radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of the rainfall spatio-temporal variability, a numerical modeling experiment is proposed, including the use of a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms; the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the simulated peak flows are assessed. Results show significant rainfall differences between radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores and to artificially generate rainfall in some areas of the catchment.
Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on the density of the rain gauge network and its distribution relative to the precipitation events. The results for the 84 storms show a better model skill using radar QPE than the interpolated fields. Results using interpolated fields are highly affected by the dominant rainfall type and the basin scale.
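The gauge-based interpolation step can be illustrated with inverse distance weighting (IDW); the sketch below is a minimal version with illustrative names and an assumed distance power of 2 (a TIN approach would instead triangulate the gauges and interpolate linearly within each triangle):

```python
import numpy as np

def idw(gauge_xy, gauge_vals, grid_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting: each grid node gets a distance-weighted
    mean of all gauge values; eps avoids division by zero at gauge sites."""
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power
    return (w @ gauge_vals) / w.sum(axis=1)
```

Note that IDW estimates always stay within the range of the gauge values, which is one reason interpolated fields smear out intense precipitation cores.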
Modeling of earthquake ground motion in the frequency domain
NASA Astrophysics Data System (ADS)
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation model is assessed using data from the SMART-1 array in Taiwan. The interpolation model provides an effective method to estimate ground motion at a site using recordings from stations located up to several kilometers away. Reliable estimates of differential ground motion are restricted to relatively limited ranges of frequencies and inter-station spacings.
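The core of the simulation in topic (i), assembling a time history from a Fourier amplitude spectrum and simulated phase differences via the inverse discrete Fourier transform, can be sketched as below. This is a simplified illustration with assumed names: the actual models simulate the phase differences conditional on the Fourier amplitude, which is omitted here.

```python
import numpy as np

def simulate_accelerogram(amplitudes, phase_diffs):
    """Build a real time history from a one-sided Fourier amplitude spectrum
    and phase differences (phase angles are the running sum of the differences)."""
    phases = np.concatenate(([0.0], np.cumsum(phase_diffs)))
    spectrum = amplitudes * np.exp(1j * phases)
    spectrum[0] = amplitudes[0]    # DC bin must be real
    spectrum[-1] = amplitudes[-1]  # Nyquist bin must be real
    return np.fft.irfft(spectrum)
```

By construction the synthesized record is real-valued and its Fourier amplitude spectrum reproduces the target amplitudes, so all of the temporal character (e.g. the build-up and decay of shaking) is encoded in the phase differences.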
NASA Astrophysics Data System (ADS)
Walawender, Ewelina; Walawender, Jakub P.; Ustrnul, Zbigniew
2017-02-01
The main purpose of the study is to introduce methods for mapping the spatial distribution of the occurrence of selected atmospheric phenomena (thunderstorms, fog, glaze and rime) over Poland from 1966 to 2010 (45 years). Limited in situ observations as well as the discontinuous and location-dependent nature of these phenomena make traditional interpolation inappropriate. Spatially continuous maps were created with the use of geospatial predictive modelling techniques. For each given phenomenon, an algorithm identifying its favourable meteorological and environmental conditions was created on the basis of observations recorded at 61 weather stations in Poland. Annual frequency maps presenting the probability of a day with a thunderstorm, fog, glaze or rime were created from a modelled, gridded dataset by implementing the predefined algorithms. Relevant explanatory variables were derived from NCEP/NCAR reanalysis and downscaled with the use of a Regional Climate Model. The resulting maps of favourable meteorological conditions were found to be valuable and representative on the country scale, but with correlation strengths against in situ data varying from r = 0.84 for thunderstorms to r = 0.15 for fog. The weak correlation between gridded estimates of fog occurrence and observational data indicated the very local nature of this phenomenon. For this reason, additional environmental predictors of fog occurrence were also examined. Topographic parameters derived from the SRTM elevation model and reclassified CORINE Land Cover data were used as the external explanatory variables for the multiple linear regression kriging used to obtain the final map. The regression model explained 89% of the variability in annual fog frequency in the study area. Regression residuals were interpolated via simple kriging.
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent bins of the detector do not work. The ring artifacts severely degrade CT image quality. We present a useful CT ring artifact reduction method based on projection data correction, aiming at estimating the missing projection data accurately and thus removing the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead bins of the CT detector on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the dead-bin rate is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the rate of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
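Step 2 of the scheme, linear interpolation across the dead detector bins in each view of the sinogram, could be sketched as follows (a minimal illustration with assumed names; the full method iterates this together with FBP reconstruction and reprojection):

```python
import numpy as np

def interpolate_dead_bins(sinogram, dead_bins):
    """Replace dead detector columns of a sinogram by linear interpolation
    along the detector axis, view by view."""
    bins = np.arange(sinogram.shape[1])
    good = np.setdiff1d(bins, dead_bins)
    fixed = sinogram.copy()
    for view in fixed:  # each row is one angular view
        view[dead_bins] = np.interp(dead_bins, good, view[good])
    return fixed
```

For data that vary linearly across the interpolated gap the restoration is exact, which is why a few correction iterations suffice at low dead-bin rates.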
NASA Astrophysics Data System (ADS)
Šiljeg, A.; Lozić, S.; Šiljeg, S.
2015-08-01
The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar HydroStar 4300 with GPS devices, an Ashtech ProMark 500 base, and a Thales Z-Max® rover. A total of 12,851 points were gathered. In order to obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana; and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2011 CFR
2011-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee (PC) Study, Project PC 338-1, June 1990. Data above 29,000 feet are based on linearly extrapolated data.
Stochastic Quantitative Reasoning for Autonomous Mission Planning
2014-04-09
[Fragmentary excerpt] Figure 4: Linear interpolation. Table 1: Wind speed prediction information (ID: 0-2 for Albany, ID: 3-5 for Pittston, and ID: 6-8 for JFK Airport). ...Given the wind speed predictions for Albany, Pittston, and JFK Airport in Table 1, how can we estimate a reasonable wind speed for the current location at the current time? Figure 5: Example
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbour, first-order interpolation, and the combination of both nearest neighbour and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison. It is shown that the LSR technique attains the highest accuracy of the four techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
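The underlying idea, extrapolating the autocorrelation function to zero lag with a least squares fit (white noise contributes to the ACF only at lag zero, so the extrapolated value approximates the noise-free signal power), might be sketched as below. This is a simplified 1D version with illustrative names and parameter choices, not the authors' exact formulation:

```python
import numpy as np

def estimate_snr_db(image, fit_lags=5):
    """Estimate SNR (dB) by least-squares extrapolation of the sample ACF to
    lag zero; the gap between ACF(0) and the extrapolation is the noise power."""
    x = image.astype(float).ravel()
    x -= x.mean()
    # sample autocorrelation at lags 0..fit_lags
    acf = np.array([np.dot(x[:x.size - k], x[k:]) for k in range(fit_lags + 1)]) / x.size
    slope, intercept = np.polyfit(np.arange(1, fit_lags + 1), acf[1:], 1)
    signal_power = max(intercept, 1e-12)          # extrapolated noise-free ACF(0)
    noise_power = max(acf[0] - signal_power, 1e-12)
    return 10.0 * np.log10(signal_power / noise_power)
```

On a strongly correlated signal the near-zero lags of the ACF are nearly linear, so the extrapolation is close; heavier noise enlarges the zero-lag spike and lowers the estimate accordingly.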
General-Purpose Software For Computer Graphics
NASA Technical Reports Server (NTRS)
Rogers, Joseph E.
1992-01-01
NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.
Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping
2018-07-01
Failure caused by seepage is the most common failure mode in dike engineering. Seepage in dikes, which are longitudinally extended structures, is characterized by randomness, strong concealment and a small initial magnitude. By means of a distributed fiber temperature sensing (DTS) system and an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltration surface of the full dike cross-section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied to an engineering case, demonstrating a real-time, full-section seepage monitoring technique for dikes.
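The barycentric form of Lagrange interpolation at the heart of BLICM evaluates the interpolating polynomial stably in O(n) per point once the weights are precomputed; a minimal sketch (function names are illustrative, and the collocation machinery of the method is not shown):

```python
import numpy as np

def barycentric_weights(nodes):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def barycentric_interp(nodes, values, x):
    """Evaluate the Lagrange interpolant in barycentric form at points x."""
    w = barycentric_weights(nodes)
    x = np.atleast_1d(x).astype(float)
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        d = xi - nodes
        hit = np.isclose(d, 0.0)
        if hit.any():                 # query coincides with a node
            out[i] = values[hit.argmax()]
        else:
            t = w / d
            out[i] = (t * values).sum() / t.sum()
    return out
```

With n + 1 nodes the interpolant reproduces any polynomial of degree n exactly, e.g. a cubic through four nodes.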
NASA Astrophysics Data System (ADS)
Wen, D. S.; Wen, H.; Shi, Y. G.; Su, B.; Li, Z. C.; Fan, G. Z.
2018-01-01
A B-spline interpolation fitting baseline for electrochemical analysis by differential pulse voltammetry was established for determining 2,6-di-tert-butyl-p-cresol (BHT) in jet fuel at concentrations below 5.0 mg/L, in the presence of 6-tert-butyl-2,4-xylenol. The experimental results have shown that the relative errors are less than 2.22%, the sum of standard deviations is less than 0.134 mg/L, and the correlation coefficient is more than 0.9851. If the 2,6-di-tert-butyl-p-cresol concentration is higher than 5.0 mg/L, a linear fitting baseline method would be more applicable and simpler.
Gap-filling meteorological variables with Empirical Orthogonal Functions
NASA Astrophysics Data System (ADS)
Graf, Alexander
2017-04-01
Gap-filling or modelling surface-atmosphere fluxes critically depends on an (ideally continuous) availability of their meteorological driver variables, such as air temperature, humidity, radiation, wind speed and precipitation. Unlike for eddy-covariance-based fluxes, data gaps in these measurements are not unavoidable. Nevertheless, missing or erroneous data can occur in practice due to instrument or power failures, disturbance, and temporary sensor or station dismounting for e.g. agricultural management or maintenance. If stations with similar measurements are available nearby, using their data for imputation (i.e. estimating missing data) either directly, after an elevation correction, or via linear regression is usually preferred over linear interpolation or monthly mean diurnal cycles. The increasingly popular deployment of regional networks of (partly low-cost) stations increases both the need and the potential for such neighbour-based imputation methods. For repeated satellite imagery, Beckers and Rixen (2003) suggested an imputation method based on empirical orthogonal functions (EOFs). While exploiting the same linear relations between time series at different observation points as regression, it is able to use information from all observation points to simultaneously estimate missing data at all of them, provided that the observations are never all missing at the same time. Briefly, the method uses the ability of the first few EOFs of a data matrix to reconstruct a noise-reduced version of this matrix, iterating missing data points from an initial guess (the column-wise averages) to an optimal version determined by cross-validation. The poster presents and discusses lessons learned from adapting and applying this methodology to station data.
Several years of 10-minute averages of air temperature, pressure and humidity, incoming shortwave, longwave and photosynthetically active radiation, wind speed and precipitation, measured by a regional (70 km by 20 km by 650 m elevation difference) network of 18 sites, were treated by various modifications of the method. The performance per variable and as a function of methodology (e.g. the number of EOFs used and the method to determine its optimum, period length, and data transformation) is assessed by cross-validation. Beckers, J.-M., Rixen, M. (2003): EOF calculations and data filling from incomplete oceanographic datasets. J. Atmos. Ocean. Tech. 20, 1839-1856.
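The Beckers and Rixen iteration can be sketched as below: a minimal hard-imputation version using a truncated SVD, with illustrative names and a fixed number of EOFs in place of the cross-validated truncation choice the method actually prescribes:

```python
import numpy as np

def eof_gapfill(data, mask, n_eof=2, n_iter=200):
    """Iteratively impute missing entries (mask == True) by reconstructing the
    matrix from its leading EOFs (truncated SVD), starting from column means."""
    filled = data.astype(float).copy()
    col_means = np.nanmean(np.where(mask, np.nan, filled), axis=0)
    filled[mask] = np.take(col_means, np.where(mask)[1])  # initial guess
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :n_eof] * s[:n_eof]) @ vt[:n_eof]
        filled[mask] = recon[mask]  # only the gaps are updated
    return filled
```

When the underlying field really is low-rank (here, a rank-2 combination of two temporal modes across stations), the iteration recovers the missing entries to high accuracy.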
Ziemann, Christian; Stille, Maik; Cremers, Florian; Buzug, Thorsten M; Rades, Dirk
2018-04-17
Metal artifacts caused by high-density implants lead to incorrectly reconstructed Hounsfield units in computed tomography images. This can result in a loss of accuracy in dose calculation in radiation therapy. This study investigates the potential of the metal artifact reduction algorithms, Augmented Likelihood Image Reconstruction and linear interpolation, in improving dose calculation in the presence of metal artifacts. In order to simulate a pelvis with a double-sided total endoprosthesis, a polymethylmethacrylate phantom was equipped with two steel bars. Artifacts were reduced by applying the Augmented Likelihood Image Reconstruction, a linear interpolation, and a manual correction approach. Using the treatment planning system Eclipse™, identical planning target volumes for an idealized prostate as well as structures for bladder and rectum were defined in corrected and noncorrected images. Volumetric modulated arc therapy plans have been created with double arc rotations with and without avoidance sectors that mask out the prosthesis. The irradiation plans were analyzed for variations in the dose distribution and their homogeneity. Dosimetric measurements were performed using isocentric positioned ionization chambers. Irradiation plans based on images containing artifacts lead to a dose error in the isocenter of up to 8.4%. Corrections with the Augmented Likelihood Image Reconstruction reduce this dose error to 2.7%, corrections with linear interpolation to 3.2%, and manual artifact correction to 4.1%. When applying artifact correction, the dose homogeneity was slightly improved for all investigated methods. Furthermore, the calculated mean doses are higher for rectum and bladder if avoidance sectors are applied. Streaking artifacts cause an imprecise dose calculation within irradiation plans. Using a metal artifact correction algorithm, the planning accuracy can be significantly improved. 
Best results were accomplished using the Augmented Likelihood Image Reconstruction algorithm. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Morsbach, Fabian; Bickelhaupt, Sebastian; Wanner, Guido A; Krauss, Andreas; Schmidt, Bernhard; Alkadhi, Hatem
2013-07-01
To assess the value of iterative frequency split-normalized (IFS) metal artifact reduction (MAR) for computed tomography (CT) of hip prostheses. This study had institutional review board and local ethics committee approval. First, a hip phantom with steel and titanium prostheses and inlays of water, fat, and contrast media in the pelvis was used to optimize the IFS algorithm. Second, 41 consecutive patients with hip prostheses who were undergoing CT were included. Data sets were reconstructed with filtered back projection (FBP), the IFS algorithm, and a linear interpolation MAR algorithm. Two blinded, independent readers evaluated axial, coronal, and sagittal CT reformations for overall image quality, image quality of pelvic organs, and assessment of pelvic abnormalities. CT attenuation and image noise were measured. Statistical analysis included the Friedman test, Wilcoxon signed-rank test, and Levene test. Ex vivo experiments yielded an optimized IFS algorithm using a threshold of 2200 HU with four iterations for both steel and titanium prostheses. Measurements of CT attenuation of the inlays were significantly (P < .001) more accurate for IFS than for FBP. In patients, the best overall and pelvic organ image quality was found in all reformations with IFS (P < .001). Pelvic abnormalities in 11 of 41 patients (27%) were diagnosed with significantly (P = .002) higher confidence on the basis of IFS images. CT attenuation of bladder (P < .001) and muscle (P = .043) was significantly less variable with IFS than with FBP and linear interpolation MAR. In comparison with FBP and linear interpolation MAR, noise with IFS was similar both close to and far from the prosthesis (P = .295). The IFS algorithm for CT image reconstruction significantly reduces metal artifacts from hip prostheses, improves the reliability of CT number measurements, and improves the confidence for depicting pelvic abnormalities.
Eulerian-Lagrangian solution of the convection-dispersion equation in natural coordinates
Cheng, Ralph T.; Casulli, Vincenzo; Milford, S. Nevil
1984-01-01
The vast majority of numerical investigations of transport phenomena use an Eulerian formulation for the convenience of having computational grids fixed in space. An Eulerian-Lagrangian method (ELM) of solution for the convection-dispersion equation is discussed and analyzed. The ELM uses the Lagrangian concept in an Eulerian computational grid system. The values of the dependent variable off the grid are calculated by interpolation. When linear interpolation is used, the method is a slight improvement over the upwind difference method. At this level of approximation both the ELM and the upwind difference method suffer from large numerical dispersion. However, if second-order Lagrangian polynomials are used in the interpolation, the ELM is proven to be free of artificial numerical dispersion for the convection-dispersion equation. The concept of the ELM is extended to the treatment of anisotropic dispersion in natural coordinates. In this approach the anisotropic properties of dispersion can be conveniently related to the properties of the flow field. Several numerical examples are given to further substantiate the results of the present analysis.
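A one-dimensional sketch of the ELM idea for the pure-convection limit: each grid node is traced back along the characteristic to its departure point, and the old field is interpolated there. The sketch below uses the linear (two-point) interpolation variant discussed above; the second-order Lagrange variant would replace it with a three-point formula. Names and the periodic-domain assumption are illustrative:

```python
import numpy as np

def elm_advect(c, u, dt, dx, steps):
    """Semi-Lagrangian (ELM) advection of a periodic 1D profile: trace each
    node back to its departure point and linearly interpolate the old field."""
    n = c.size
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)          # departure points, periodic domain
    for _ in range(steps):
        idx = np.floor(xd / dx).astype(int)
        frac = xd / dx - idx
        c = (1 - frac) * c[idx] + frac * c[(idx + 1) % n]
    return c
```

When the displacement per step is a whole number of cells, the interpolation weights degenerate to a pure shift and the scheme is exact, which isolates the interpolation step as the sole source of numerical dispersion.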
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and the use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform the interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two-boundary grid generation), is also described.
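The two-boundary Hermite interpolation can be sketched as follows: each grid line runs from a point on the bottom boundary to the corresponding point on the top boundary, blended by the standard cubic Hermite basis functions with prescribed end tangents. This is a minimal version (no side-boundary blending or control functions; array shapes and names are assumptions):

```python
import numpy as np

def hermite_two_boundary(bottom, top, t_bottom, t_top, n_eta):
    """Hermite cubic blending between two boundary curves of shape (npts, 2),
    with end tangents t_bottom, t_top; returns a grid of shape (n_eta, npts, 2)."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2 * eta**3 - 3 * eta**2 + 1   # cubic Hermite basis functions
    h10 = eta**3 - 2 * eta**2 + eta
    h01 = -2 * eta**3 + 3 * eta**2
    h11 = eta**3 - eta**2
    return h00 * bottom + h10 * t_bottom + h01 * top + h11 * t_top
```

The basis satisfies h00(0) = h01(1) = 1 with all other end values and slopes zero, so the grid interpolates the two boundaries exactly, and with zero tangents the mid-surface is simply the average of bottom and top.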
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
NASA Astrophysics Data System (ADS)
Green, Sophie M.; Baird, Andy J.
2016-04-01
There is growing interest in estimating annual budgets of peatland-atmosphere carbon dioxide (CO2) and methane (CH4) exchanges. Such budgeting is required for calculating the peatland carbon balance and the radiative forcing impact of peatlands on climate. There have been multiple approaches used to estimate CO2 budgets; however, there is a limited literature regarding the modelling of annual CH4 budgets. Using data collected from flux-chamber tests in an area of blanket peatland in North Wales, we compared annual estimates of peatland-atmosphere CH4 emissions using an interpolation approach and an additive and multiplicative modelling approach. Flux-chamber measurements represent a snapshot of the conditions on a particular site. In contrast to CO2, most studies that have estimated the time-integrated flux of CH4 have not used models. Typically, linear interpolation is used to estimate CH4 fluxes during the time periods between flux-chamber measurements. It is unclear how much error is involved with such a simple integration method. CH4 fluxes generally show a rise followed by a fall through the growing season that may be captured reasonably well by interpolation, provided there are sufficiently frequent measurements. However, day-to-day and week-to-week variability is also often evident in CH4 flux data, and will not necessarily be properly represented by interpolation. Our fits of the CH4 flux models yielded r² > 0.5 in 38 of the 48 models constructed, with 55% of these having a weighted r² (rw²) > 0.4. Comparison of annualised CH4 fluxes estimated by interpolation and modelling reveals no correlation between the two data sets; indeed, in some cases even the sign of the flux differs. The difference between the methods also seems to be related to the size of the flux: for modest annual fluxes there is a fairly even scatter of points around the 1:1 line, whereas when the modelled fluxes are high, the corresponding interpolated fluxes tend to be low.
We consider the implications of these results for the calculation of the radiative forcing effect of peatlands.
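The interpolation-based annual budgeting described above amounts to trapezoidal integration between chamber visits. A minimal sketch, where the visit dates and flux values are hypothetical and chosen only to illustrate the method:

```python
import numpy as np

# Hypothetical chamber visits: day of year and measured CH4 flux
# (e.g. mg CH4 m-2 d-1). Real campaigns would have site replicates.
days = np.array([30.0, 90.0, 150.0, 210.0, 270.0, 330.0])
flux = np.array([0.5, 1.2, 3.8, 5.1, 2.0, 0.6])

# Linear interpolation between visits, integrated over the measured
# window with the trapezoidal rule (mg CH4 m-2 over the period).
annual_flux = float(np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(days)))
```

Day-to-day variability between visits is invisible to this estimate, which is the source of error the study quantifies.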
A fully 3D approach for metal artifact reduction in computed tomography.
Kratz, Barbel; Weyers, Imke; Buzug, Thorsten M
2012-11-01
In computed tomography imaging, metal objects in the region of interest introduce inconsistencies during data acquisition. Reconstructing these data leads to an image in the spatial domain containing star-shaped or stripe-like artifacts. In order to enhance the quality of the resulting image, the influence of the metal objects can be reduced. Here, a metal artifact reduction (MAR) approach is proposed that is based on a recomputation of the inconsistent projection data using a fully three-dimensional Fourier-based interpolation. The success of the projection space restoration depends critically on a plausible continuation of neighboring structures into the recomputed area. Fortunately, structural information of the entire data set is inherently included in the Fourier space of the data. This can be used for a reasonable recomputation of the inconsistent projection data. The key step of the proposed MAR strategy is the recomputation of the inconsistent projection data based on an interpolation using nonequispaced fast Fourier transforms (NFFT). The NFFT interpolation can be applied in arbitrary dimensions. The approach overcomes the problem of adequate neighborhood definitions on irregular grids, since these are inherently given through the use of higher dimensional Fourier transforms. Here, applications up to the third interpolation dimension are presented and validated. Furthermore, prior knowledge may be included by an appropriate damping of the transform during the interpolation step. This MAR method is applicable to each angular view of a detector row, to two-dimensional projection data, as well as to three-dimensional projection data, e.g., a set of sequential acquisitions at different spatial positions, projection data of a spiral acquisition, or cone-beam projection data. Results of the novel MAR scheme based on one-, two-, and three-dimensional NFFT interpolations are presented.
All results are compared in projection data space and spatial domain with the well-known one-dimensional linear interpolation strategy. In conclusion, it is recommended to include as much spatial information into the recomputation step as possible. This is realized by increasing the dimension of the NFFT. The resulting image quality can be enhanced considerably.
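The one-dimensional linear interpolation strategy used here as the comparison baseline can be sketched as follows; the toy detector row and metal mask are invented for illustration:

```python
import numpy as np

def lerp_metal_trace(projection, metal_mask):
    """Replace sinogram samples flagged as metal with linear interpolation
    from the nearest uncorrupted neighbours along the detector row."""
    x = np.arange(projection.size)
    good = ~metal_mask
    return np.interp(x, x[good], projection[good])

# Toy detector row: samples 2 and 3 are corrupted by a metal object.
row = np.array([1.0, 2.0, 9.0, 9.0, 5.0, 6.0])
mask = np.array([False, False, True, True, False, False])
filled = lerp_metal_trace(row, mask)  # corrupted samples become 3.0 and 4.0
```

Unlike the NFFT approach, this baseline uses no information from neighbouring views or detector rows, which is why the paper recommends higher-dimensional recomputation.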
Convectively Coupled Equatorial Waves in Reanalysis and CMIP5 Simulations
NASA Astrophysics Data System (ADS)
Castanheira, J. M.; Marques, C. A. F.
2014-12-01
Convectively coupled equatorial waves (CCEWs) are a result of the interplay between the physics and dynamics of the tropical atmosphere. As a result of such interplay, tropical convection often appears organized into synoptic- to planetary-scale disturbances with time scales matching those of equatorial shallow water waves. CCEWs have broad impacts within the tropics, and their simulation in general circulation models is still problematic. Several studies have shown that the dispersion characteristics of those waves fit the dispersion curves derived from Matsuno's (1966) solutions of the shallow water equations on the equatorial beta plane, namely, Kelvin, equatorial Rossby, mixed Rossby-gravity, and inertio-gravity waves. However, the most common methodology used to identify those waves is still controversial. In this communication a new methodology for the diagnosis of CCEWs will be presented. It is based on a pre-filtering of the geopotential and horizontal wind, using 3-D normal-mode functions of the adiabatic linearized equations of a resting atmosphere, followed by a space-time spectral analysis to identify the spectral regions of coherence. The methodology permits a direct detection of various types of equatorial waves, compares the dispersion characteristics of the coupled waves with the theoretical dispersion curves, and allows an identification of which vertical modes are most involved in the convection. Moreover, the proposed methodology is able to show the existence of free dry waves and moist coupled waves with a common vertical structure, which is in conformity with the effect of convective heating/cooling on the effective static stability, as expressed in the gross moist stability concept. The methodology is also sensitive to Doppler shifting effects.
The methodology has been applied to the ERA-Interim horizontal wind and geopotential height fields and to the interpolated Outgoing Longwave Radiation (OLR) data produced by the National Oceanic and Atmospheric Administration. The same type of data (i.e. u, v, Φ and OLR) from the CMIP5 historical experiments (1976-2005) was analyzed. The obtained results provide examples of the aforementioned effects and point to deficiencies in the models.
NASA Astrophysics Data System (ADS)
Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf
2018-06-01
For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Taking into account symmetry arguments, we first reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component using ABAQUS®.
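The linear interpolation step on the orientation triangle is barycentric blending of the nodal surrogate outputs. A minimal sketch, with made-up node coordinates and scalar stress values standing in for the surrogate-model outputs:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p within triangle (a, b, c)."""
    T = np.column_stack((b - a, c - a))
    l2, l3 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l2 - l3, l2, l3])

# Hypothetical triangle nodes in the reduced orientation space, each
# carrying an effective stress from its surrogate model (scalars here;
# the real method interpolates full stress responses).
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
node_stress = np.array([100.0, 200.0, 400.0])

w = barycentric_weights(np.array([0.25, 0.25]), a, b, c)
stress = float(w @ node_stress)  # blended effective stress at the query state
```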
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neurons, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
PMID:24747568
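The interpolation-versus-smoothing contrast above can be illustrated with a Nadaraya-Watson Gaussian kernel smoother; the sampling sites and counts below are invented, with one deliberate outlier:

```python
import numpy as np

def gaussian_smooth(xy, counts, grid_xy, sigma):
    """Nadaraya-Watson Gaussian kernel smoother for scattered count data:
    a weighted average of all observations, weights decaying with distance."""
    d2 = ((grid_xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma**2)
    return (w @ counts) / w.sum(axis=1)

# Hypothetical retinal sampling sites with one artefactual count (40).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
counts = np.array([10.0, 12.0, 11.0, 40.0])
grid = np.array([[0.5, 0.5]])  # one map location, for brevity

smoothed = gaussian_smooth(xy, counts, grid, sigma=1.0)
```

An interpolant would reproduce the artefactual 40 exactly at its site; the smoother instead blends it with its neighbours, trading fidelity to the raw counts for robustness, as the paper discusses.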
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
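A spline-style interpolation with compactly supported radial basis functions can be sketched as below. The Wendland C2 kernel and the 1-D sample data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def wendland_c2(r, radius):
    """Wendland C2 compactly supported kernel; identically zero for r >= radius,
    which keeps the interpolation matrix sparse in large problems."""
    q = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)

def rbf_interpolate(src, values, dst, radius):
    """Solve for RBF weights at source points, then evaluate at destinations."""
    d_ss = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_ds = np.linalg.norm(dst[:, None] - src[None, :], axis=-1)
    w = np.linalg.solve(wendland_c2(d_ss, radius), values)
    return wendland_c2(d_ds, radius) @ w

src = np.array([[0.0], [1.0], [2.0]])
vals = np.array([0.0, 1.0, 4.0])
# Interpolation reproduces the data at the source points themselves.
recon = rbf_interpolate(src, vals, src, radius=3.0)
```

The dense `solve` here is what the paper's sparse linear algebra restructuring replaces at scale: with compact support, most kernel entries vanish and the system can be stored and solved in distributed sparse form.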
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varying registration parameters are cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), interpolation method (linear, windowed-sinc and B-spline) and sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using 100% sampling percentage yielded the highest DSC and smallest HD (0.828 +/- 0.021 and 0.25 +/- 0.09 mm, respectively). Therefore, B-spline registration with cost function NC, interpolation B-spline and sampling percentage 100% can serve as the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
Boolean Operations with Prism Algebraic Patches
Bajaj, Chandrajit; Paoluzzi, Alberto; Portuesi, Simone; Lei, Na; Zhao, Wenqi
2009-01-01
In this paper we discuss a symbolic-numeric algorithm for Boolean operations, closed in the algebra of curved polyhedra whose boundary is triangulated with algebraic patches (A-patches). This approach uses a linear polyhedron as a first approximation of both the arguments and the result. On each triangle of a boundary representation of such a linear approximation, a piecewise cubic algebraic interpolant is built, using a C1-continuous prism algebraic patch (prism A-patch) that interpolates the three triangle vertices, with given normal vectors. The boundary representation only stores the vertices of the initial triangulation and their external vertex normals. In order to represent flat and/or sharp local features, the corresponding normal-per-face and/or normal-per-edge may also be given, respectively. The topology is described by storing, for each curved triangle, the two triples of pointers to incident vertices and to adjacent triangles. For each triangle, a scaffolding prism is built, produced by its extreme vertices and normals, which provides a containment volume for the curved interpolating A-patch. When computing the result of a regularized Boolean operation, the 0-set of a tri-variate polynomial within each such prism is generated, and intersected with the analogous 0-sets of the other curved polyhedron, when two prisms have non-empty intersection. The intersection curves of the boundaries are traced and used to decompose each boundary into the 3 standard classes of subpatches, denoted in, out and on. While tracing the intersection curves, the locally refined triangulation of intersecting patches is produced, and added to the boundary representation. PMID:21516262
Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, D.E.; Weisbin, C.R.
1976-07-01
The SIGMA1 kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cross section between tabulated values. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening with particular emphasis on the effect of each approximation introduced.
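A cross section described by a table of values with linear-linear interpolation in energy and cross section, as SIGMA1 assumes, can be represented directly; the energy grid and values below are invented for illustration:

```python
import numpy as np

# Hypothetical tabulated cross section: energy (eV) vs sigma (barns).
# SIGMA1's input representation: linear-linear interpolation between
# tabulated (energy, cross section) pairs.
energy = np.array([1.0, 10.0, 100.0, 1000.0])
sigma = np.array([20.0, 15.0, 8.0, 3.0])

def cross_section(e):
    """Evaluate the tabulated cross section at energy e (linear-linear)."""
    return np.interp(e, energy, sigma)

val = cross_section(55.0)  # halfway between the 10 eV and 100 eV entries
```

Doppler broadening then integrates this piecewise-linear function against the SIGMA1 kernel, which is what allows the method to reach any required accuracy without temperature or energy limitations.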
High-Order Moving Overlapping Grid Methodology in a Spectral Element Method
NASA Astrophysics Data System (ADS)
Merrill, Brandon E.
A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircrafts, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. The methodology employs semi-implicit spectral element discretization of equations in each subdomain and explicit treatment of subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. Stationary overlapping mesh methodology was validated to assess the influence of long integration times and inflow-outflow global boundary conditions on the performance. In a turbulent benchmark of fully-developed turbulent pipe flow, the turbulent statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. 
Scaling tests, with both methodologies, show near linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds Number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angle of attack 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.
A Thermo-Optic Propagation Modeling Capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrader, Karl; Akau, Ron
2014-10-01
A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data was collected to validate the structural, thermal, and optical modeling methods.
A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks
NASA Astrophysics Data System (ADS)
McNally, Colin P.
2012-08-01
This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems, including one part in a million amplitude linear MHD waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities are presented. Finally we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on two dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPHMHD, and Phurbas. A strict comparison, judgment, or ranking between codes is beyond the scope of this work, although this work provides the mathematical framework needed for such a study. Nonetheless, how the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence free constraint on the magnetic field has been difficult. We observe that for point-based discretization, as used in finite-difference type and pseudo-spectral methods, the divergence free constraint can be satisfied entirely by a choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference type derivative operators on a regular grid which has the divergence free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated in this chapter are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one dimensional models including non-ideal MHD, and radiative transfer. For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 Kelvin to 2000 Kelvin in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
Reconstruction of reflectance data using an interpolation technique.
Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh
2009-03-01
A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space is preferable to the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. Then, an additional correction in the metal trace region is performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
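The grid interpolation step (Delaunay tessellation followed by linear barycentric interpolation within each simplex) can be sketched with SciPy; the two-parameter model grid and frequency-separation values below are hypothetical, not from AIMS itself:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical 2-D stellar model grid: (mass, age) -> large frequency
# separation (arbitrary units). Real grids span many more dimensions.
params = np.array([[1.0, 1.0], [1.2, 1.0], [1.0, 5.0], [1.2, 5.0], [1.1, 3.0]])
delta_nu = np.array([135.0, 120.0, 100.0, 90.0, 110.0])

# LinearNDInterpolator tessellates the scattered grid points with a
# Delaunay triangulation and interpolates barycentrically per simplex.
interp = LinearNDInterpolator(params, delta_nu)
dn = float(interp(1.1, 3.0))  # reproduces the grid value at a grid node
```

Inside an MCMC loop, each proposed parameter set is evaluated this way against a pre-computed grid instead of running a fresh stellar model.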
An equation of state for polyurea aerogel based on multi-shock response
NASA Astrophysics Data System (ADS)
Aslam, T. D.; Gustavsen, R. L.; Bartram, B. D.
2014-05-01
The equation of state (EOS) of polyurea aerogel (PUA) is examined through both single shock Hugoniot data as well as more recent multi-shock compression experiments performed on the LANL 2-stage gas gun. A simple conservative Lagrangian numerical scheme, utilizing total variation diminishing (TVD) interpolation and an approximate Riemann solver, will be presented, as well as the methodology of calibration. It has been demonstrated that a p-α model based on a Mie-Grüneisen fitting form for the solid material can reasonably replicate the multi-shock compression response at a variety of initial densities; such a methodology will be presented for a commercially available polyurea aerogel.
Elliptic surface grid generation in three-dimensional space
NASA Technical Reports Server (NTRS)
Kania, Lee
1992-01-01
A methodology for surface grid generation in three dimensional space is described. The method solves a Poisson equation for each coordinate on arbitrary surfaces using successive line over-relaxation. The complete surface curvature terms were discretized and retained within the nonhomogeneous term in order to preserve surface definition; there is no need for conventional surface splines. Control functions were formulated to permit control of grid orthogonality and spacing. A method for interpolation of control functions into the domain was devised which permits their specification not only at the surface boundaries but within the interior as well. An interactive surface generation code which makes use of this methodology is currently under development.
Protein Aggregation Measurement through Electrical Impedance Spectroscopy
NASA Astrophysics Data System (ADS)
Affanni, A.; Corazza, A.; Esposito, G.; Fogolari, F.; Polano, M.
2013-09-01
The paper presents a novel methodology to measure fibril formation in protein solutions. We designed a bench consisting of a sensor having interdigitated electrodes, a PDMS hermetic reservoir and an impedance meter automatically driven by a computer. The impedance data are interpolated with a lumped-element model, and their change over time can provide information on the aggregation process. Encouraging results have been obtained by testing the methodology on κ-casein, a milk protein, with and without the addition of a drug that inhibits aggregation. The amount of sample needed to perform this measurement is far lower than the amount needed for fluorescence analysis.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
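The specialized Gaussian elimination for tridiagonal systems mentioned above is often called the Thomas algorithm: one forward sweep and one back substitution, O(n) instead of O(n^3). A minimal sketch, applied to a small system of the kind that arises in cubic spline construction:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Tridiagonal system typical of spline / second-derivative discretizations.
a = np.array([0.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, 0.0])
x = thomas(a, b, c, np.array([1.0, 1.0, 1.0]))
```

The same system solved by general Gaussian elimination would fill in zeros during factorization; the Thomas recurrence exploits the known sparsity pattern instead.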
Suboptimal schemes for atmospheric data assimilation based on the Kalman filter
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Cohn, Stephen E.
1994-01-01
This work is directed toward approximating the evolution of forecast error covariances for data assimilation. The performance of different algorithms based on simplification of the standard Kalman filter (KF) is studied. These are suboptimal schemes (SOSs) when compared to the KF, which is optimal for linear problems with known statistics. The SOSs considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. To employ a methodology for exact comparison among these schemes, a linear environment is maintained, in which a beta-plane shallow-water model linearized about a constant zonal flow is chosen for the test-bed dynamics. The results show that constructing dynamically balanced forecast error covariances rather than using conventional geostrophically balanced ones is essential for successful performance of any SOS. A posteriori initialization of SOSs to compensate for model-data imbalance sometimes results in poor performance. Instead, properly constructed dynamically balanced forecast error covariances eliminate the need for initialization. When the SOSs studied here make use of dynamically balanced forecast error covariances, the difference among their performances progresses naturally from conventional OI to the KF. In fact, the results suggest that even modest enhancements of OI, such as including an approximate dynamical equation for height error variances while leaving the height error correlation structure homogeneous, go a long way toward achieving the performance of the KF, provided that dynamically balanced cross-covariances are constructed and that model errors are accounted for properly. The results indicate that such enhancements are necessary if unconventional data are to have a positive impact.
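For reference, one full forecast-analysis cycle of the linear Kalman filter that the suboptimal schemes approximate can be sketched as follows (toy dimensions and synthetic operators, not the shallow-water test bed itself):

```python
import numpy as np

def kf_step(x, P, M, Q, H, R, y):
    """One forecast + analysis cycle of the linear Kalman filter."""
    xf = M @ x                       # state forecast
    Pf = M @ P @ M.T + Q             # error covariance forecast (the costly step SOSs simplify)
    S = H @ Pf @ H.T + R             # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)  # Kalman gain
    xa = xf + K @ (y - H @ xf)       # analysis state
    Pa = (np.eye(len(x)) - K @ H) @ Pf
    return xa, Pa
```

Optimal interpolation replaces the covariance forecast `Pf = M P Mᵀ + Q` with a fixed, prescribed covariance; that single substitution is what makes it a suboptimal scheme.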
End State: The Fallacy of Modern Military Planning
2017-04-06
operational planning for non-linear, complex scenarios requires application of non-linear, advanced planning techniques such as design methodology...cannot be approached in a linear, mechanistic manner by a universal planning methodology. Theater/global campaign plans and theater strategies offer no...strategic environments, and instead prescribes a universal linear methodology that pays no mind to strategic complexity. This universal application
Rapid Generation of Conceptual and Preliminary Design Aerodynamic Data by a Computer Aided Process
2000-06-01
methodologies often have peculiar requirements such as flexibility and robustness...blended with sensible 'guess-estimated' values. Due to peculiar requirements...from the 'raw' aerodynamic data is a process which certainly requires...appropriate blending interpolation between the given data generally yields...component patches are described by defining the evolution of a conic curve between two opposite boundary curves by means of blending functions
A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - II. Verifications
Choi, Sooyoung; Kong, Chidong; Lee, Deokjung; ...
2015-03-09
A new methodology has been developed recently to treat resonance self-shielding in systems for which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. The theoretical development adopts equivalence theory in both micro- and macro-level heterogeneities to provide approximate analytical expressions for the shielded cross sections, which may be interpolated from a table of resonance integrals or Bondarenko factors using a modified background cross section as the interpolation parameter. This paper describes the first implementation of the theoretical equations in a reactor analysis code. In order to reduce discrepancies caused by use of the rational approximation for collision probabilities in the original derivation, a new formulation for a doubly heterogeneous Bell factor is developed in this paper to improve the accuracy of doubly heterogeneous expressions. This methodology is applied to a wide range of pin cell and assembly test problems with varying geometry parameters, material compositions, and temperatures, and the results are compared with continuous-energy Monte Carlo simulations to establish the accuracy and range of applicability of the new approach. It is shown that the new doubly heterogeneous self-shielding method including the Bell factor correction gives good agreement with reference Monte Carlo results.
Calibration of Safecast dose rate measurements.
Cervone, Guido; Hultquist, Carolynne
2018-10-01
A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time-dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different cesium isotopes and their changing magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time-dependent calibration takes this cesium isotope ratio into account and is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which date the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
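The aggregation step, fitting per-window calibration coefficients with a quadratic regression to obtain a time-dependent calibration function, can be sketched as follows (function names and data are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fit_time_dependent_calibration(t, coeffs):
    """Quadratic regression through per-window calibration coefficients."""
    return np.polyfit(t, coeffs, deg=2)

def calibrate(raw, t, poly):
    """Apply the time-dependent calibration factor at time t."""
    return raw * np.polyval(poly, t)

# Synthetic per-window coefficients following a known quadratic in time
t_win = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # years since accident
coeffs = 0.1 * t_win**2 - 0.5 * t_win + 2.0      # reference/contributed ratio per window
poly = fit_time_dependent_calibration(t_win, coeffs)
```

A static calibration corresponds to replacing the quadratic with a single constant factor, which is exactly what fails to track the changing isotope ratio.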
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target using four different backprojection methods: a Hanning-weighted ramp filter with linear interpolation (RECON1), a Hanning-weighted ramp filter with Fourier interpolation (RECON2), a ramp filter with linear interpolation (RECON3), and a ramp filter with Fourier interpolation (RECON4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of coronal images varies for different backprojection methods, while axial images have a similar detection-SNR. Detection-SNR² ratios of coronal to axial images in RECON1 and RECON2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON1 and RECON2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON1 and RECON2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare no conflict of interest in relation to the work in this abstract.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
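The linear-interpolation continuization used by the classical percentile-rank method can be sketched for integer scores; this is the standard textbook formulation, not code from the cited work:

```python
import numpy as np

def percentile_rank(p, x):
    """Continuized CDF of a discrete score distribution by linear interpolation.

    p[k] is the probability of integer score k; x is a real-valued score.
    Each unit of mass p[k] is spread uniformly over [k - 0.5, k + 0.5]."""
    k = int(np.floor(x + 0.5))          # nearest integer score
    k = min(max(k, 0), len(p) - 1)
    below = float(np.sum(p[:k]))        # mass of scores strictly below k
    return below + (x - (k - 0.5)) * p[k]
```

The kernel method instead convolves the discrete distribution with a Gaussian, which yields a smooth (differentiable) continuized CDF rather than this piecewise-linear one.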
F. Pan; M. Stieglitz; R.B. McKane
2012-01-01
Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Except for slope, none of the other topographic characteristics can be calculated until the flow...
U.S. Competitiveness in Science and Technology
2008-01-01
Relative to GDP per Capita for Elementary and Secondary and Postsecondary Education in Selected OECD Countries, 2002...expenditures per student on elementary and secondary education are comparable with those of other industrialized nations and commensurate with the high...every country for every year; we used linear interpolation for missing years and defined the rest of the world as consisting of the following countries
2008-06-01
or just habitat area. They used linear interpolation to derive maps for each time step in the population model and population dynamics were...Stephen's kangaroo rat (SKR). In some areas of coastal sage scrub habitat, short fire return intervals make the habitat suitable for the SKR while
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26,000 km² mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R² values are negative for spatial correspondence and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one station per 433 km², higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
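The cross-validation assessment of a baseline interpolator such as inverse distance weighting can be sketched as follows (synthetic coordinates; the study's undercatch corrections and drift covariates are omitted):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_tgt, power=2.0):
    """Inverse distance weighted estimate at one target location."""
    d = np.linalg.norm(xy_obs - xy_tgt, axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power     # avoid division by zero
    return np.sum(w * z_obs) / np.sum(w)

def loo_errors(xy, z):
    """Leave-one-out cross-validation errors of IDW over all stations."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i           # withhold station i
        errs.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    return np.array(errs)
```

The 'no interpolation' benchmark from the abstract corresponds to replacing `idw(...)` with `z[mask].mean()` in the same leave-one-out loop.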
RF-Based Location Using Interpolation Functions to Reduce Fingerprint Mapping
Ezpeleta, Santiago; Claver, José M.; Pérez-Solano, Juan J.; Martí, José V.
2015-01-01
Indoor RF-based localization using fingerprint mapping requires an initial training step, which represents a time-consuming process. This location methodology needs a database populated with RSSI (Radio Signal Strength Indicator) measurements from the communication transceivers, taken at specific locations within the localization area. However, the real-world localization environment is dynamic, and it is necessary to rebuild the fingerprint database when environmental changes occur. This paper explores the use of different interpolation functions to complete the fingerprint mapping needed to achieve the sought accuracy, thereby reducing the effort in the training step. Also, different distributions of test maps and reference points have been evaluated, showing the validity of this proposal and the necessary trade-offs. Results reported show that the same or similar localization accuracy can be achieved even when only 50% of the initial fingerprint reference points are taken. PMID:26516862
Quantifying Groundwater Fluctuations in the Southern High Plains with GIS and Geostatistics
NASA Astrophysics Data System (ADS)
Whitehead, B.
2008-12-01
Groundwater, as a dwindling non-renewable natural resource, has been an important research theme in agricultural studies coupled with human-environment interaction. This research incorporated contemporary Geographic Information System (GIS) methodologies and a universal kriging interpolator (geostatistics) to develop depth-to-groundwater surfaces for the southern portion of the High Plains, or Ogallala, aquifer. The variations in the interpolated surfaces were used to calculate the volume of water mined from the aquifer from 1980 to 2005. The findings suggest a nearly inverse relationship to the water withdrawal scenarios derived by the United States Geological Survey (USGS) during the Regional Aquifer System Analysis (RASA) performed in the early 1980s. These results advocate further research into regional climate change, groundwater-surface water interaction, and recharge mechanisms in the region, and provide a substantial contribution to the continuing and contentious issue concerning the environmental sustainability of the High Plains.
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute F-PSDM of order μ ∈ (0, 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
A meshless approach to thermomechanics of DC casting of aluminium billets
NASA Astrophysics Data System (ADS)
Mavrič, B.; Šarler, B.
2016-03-01
The ability to model thermomechanics in DC casting is important due to the technological challenges caused by physical phenomena such as different ingot distortions, cracking, hot tearing and residual stress. Many thermomechanical models already exist and usually take into account three contributions: elastic, thermal expansion, and viscoplastic to model the mushy zone. These models are, in the vast majority of cases, solved by the finite element method. In the present work the elastic model that accounts for linear thermal expansion is considered. The method used for solving the model is of a novel meshless type and extends our previous meshless attempts at solving fluid mechanics problems. The solution to the problem is constructed using collocation on overlapping subdomains, which are composed of computational nodes. Multiquadric radial basis functions, augmented by monomials, are used for the displacement interpolation. The interpolation is constructed in such a manner that it readily satisfies the boundary conditions. The discretization results in the construction of a global square sparse matrix representing the system of linear equations for the displacement field. The developed method has many advantages. The system of equations can be easily constructed and efficiently solved. There is no need to perform expensive meshing of the domain, and the formulation of the method is similar in two and three dimensions. Since no meshing is required, nodes can easily be added or removed, which allows for efficient adaptation of the node arrangement density. The order of convergence, estimated through an analytically solvable test, can be adjusted through the number of interpolation nodes in the subdomain, with 6 nodes being enough for second-order convergence. Simulations of axisymmetric mechanical problems, associated with low-frequency electromagnetic DC casting, are presented.
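The multiquadric RBF interpolation underlying the collocation scheme can be sketched in its simplest global, 1-D form; the shape parameter and node set below are illustrative assumptions, and the paper's actual method works on overlapping subdomains with monomial augmentation:

```python
import numpy as np

def mq_interpolant(x_nodes, f_nodes, c=0.5):
    """Global 1-D multiquadric RBF interpolant through (x_nodes, f_nodes)."""
    r = x_nodes[:, None] - x_nodes[None, :]
    A = np.sqrt(r**2 + c**2)            # multiquadric collocation matrix
    coef = np.linalg.solve(A, f_nodes)  # nonsingular for distinct nodes
    def interp(x):
        phi = np.sqrt((x - x_nodes)**2 + c**2)
        return float(phi @ coef)
    return interp
```

Collocating the governing PDE instead of function values follows the same pattern: derivatives of the basis functions replace the basis functions in the rows of the matrix.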
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
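The greedy index selection at the core of (M)DEIM can be sketched as follows; this is the standard DEIM point-selection loop applied to a generic basis matrix, not the paper's matrix-valued extension:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for an n x m basis matrix U."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate column j at the points chosen so far, then pick the
        # row where the interpolation residual is largest.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

Once the indices are fixed, a nonaffine operator only needs to be evaluated at those few rows online, which is what restores the offline/online split for the ROM.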
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry will impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and Jason-2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids to compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
Adaptive Multilinear Tensor Product Wavelets
Weiss, Kenneth; Lindstrom, Peter
2015-08-12
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. In conclusion, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
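The piecewise multilinear interpolation these techniques rely on reduces, per 2-D cell, to bilinear blending of the four corner values:

```python
def bilinear(f00, f10, f01, f11, u, v):
    """Blend four corner values at local cell coordinates (u, v) in [0, 1]^2.

    f00 = value at (0,0), f10 at (1,0), f01 at (0,1), f11 at (1,1)."""
    return ((1.0 - u) * (1.0 - v) * f00 + u * (1.0 - v) * f10
            + (1.0 - u) * v * f01 + u * v * f11)
```

The trilinear (3-D) case nests one more linear blend in the third coordinate; continuity across a nonconforming cell boundary is exactly what the crack-free mesh construction has to guarantee.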
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP also is used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
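The level-by-level linear interpolation in time can be sketched as follows (synthetic times and profiles; the real VAP also handles missing data and the MWR humidity scaling):

```python
import numpy as np

def interpolate_soundings(t_sonde, profiles, t_grid):
    """Linearly interpolate soundings in time, independently per height level.

    t_sonde: launch times (minutes); profiles: array (n_sondes, n_levels);
    returns an array of shape (len(t_grid), n_levels)."""
    out = np.empty((len(t_grid), profiles.shape[1]))
    for lev in range(profiles.shape[1]):
        out[:, lev] = np.interp(t_grid, t_sonde, profiles[:, lev])
    return out
```

With a 1-minute `t_grid` spanning the day, this yields exactly the kind of fixed time-height grid the VAP produces.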
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venezian, G.; Bretschneider, C.L.
1980-08-01
This volume details a new methodology to analyze statistically the forces experienced by a structure at sea. Conventionally, a wave climate is defined using a spectral function. The wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through some measured or estimated parameters such as the significant wave height, maximum spectral density, etc. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents, the linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m, was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program CUFOR was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure, for each location, are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified in the case of an alternative joint distribution or for inclusion of effects like non-linearity of waves, transverse forces and diffraction.
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2016-11-01
A methodology to eliminate the manual work required for correcting the surface imperfections of computer-aided-design (CAD) data will be proposed. Such a technique is indispensable for CFD analysis of industrial applications involving complex geometries. The CAD geometry is degenerated into cell-oriented values based on a Cartesian grid. This enables parallel pre-processing as well as the ability to handle 'dirty' CAD data that has gaps, overlaps, or sharp edges without necessitating any fixes. An arbitrary boundary representation is used with a dummy-cell technique based on the immersed boundary (IB) method. To model the IB, a forcing term is directly imposed at arbitrary ghost cells by linear interpolation of the momentum. Mass conservation is satisfied in the approximate domain that covers the fluid region except the wall-including cells. Attempts to satisfy mass conservation in the wall-containing cells lead to pressure oscillations near the IB. The consequence of this approximation will be discussed through a fundamental study of an LES-based channel flow simulation and high Reynolds number flow around a sphere. An analysis comparing our results with wind tunnel experiments of flow around a full-vehicle geometry will also be presented.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
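The described pipeline, local minima as a baseline vector, iterative rejection of points belonging to peaks, then linear interpolation back onto the full time axis, can be sketched as follows; the chord-based outlier rule used here is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def correct_drift(t, signal, tol=0.0, n_iter=50):
    """Subtract background drift: local minima -> iterative outlier
    rejection -> linear interpolation of the surviving baseline points."""
    # 1. Local minima (plus endpoints) form the initial baseline vector.
    keep = [0] + [i for i in range(1, len(signal) - 1)
                  if signal[i] <= signal[i - 1] and signal[i] <= signal[i + 1]] \
                + [len(signal) - 1]
    # 2. Iteratively drop points that sit above the chord of their
    #    neighbours, i.e., points that still belong to peaks.
    for _ in range(n_iter):
        new, changed = [keep[0]], False
        for j in range(1, len(keep) - 1):
            l, m, r = keep[j - 1], keep[j], keep[j + 1]
            chord = signal[l] + (signal[r] - signal[l]) * (t[m] - t[l]) / (t[r] - t[l])
            if signal[m] > chord + tol:
                changed = True
            else:
                new.append(m)
        new.append(keep[-1])
        keep = new
        if not changed:
            break
    # 3. Expand the baseline onto the full time axis and subtract it.
    keep = np.array(keep)
    baseline = np.interp(t, t[keep], signal[keep])
    return signal - baseline
```

On a signal with a linear drift and a single sharp peak, the recovered baseline matches the drift exactly and the peak survives the subtraction.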
A geostatistical approach to data harmonization - Application to radioactivity exposure data
NASA Astrophysics Data System (ADS)
Baume, O.; Skøien, J. O.; Heuvelink, G. B. M.; Pebesma, E. J.; Melles, S. J.
2011-06-01
Environmental issues such as air, groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures then biases may appear and cause unrealistic results. The resulting maps may show misleading patterns and lead to wrong interpretations. Also, errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model such that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented and it is shown that these lead to harmonized maps from which estimated biases are removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigations proposed.
Triangle based TVD schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Durlofsky, Louis J.; Osher, Stanley; Engquist, Bjorn
1990-01-01
A triangle based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second order accuracy. Numerical results for linear advection and Burgers' equation are presented.
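The preprocessing step described above, a linear reconstruction followed by slope limiting, can be sketched in one dimension with the classic minmod limiter; the triangle-based limiters in the paper generalise this idea to 2-D. The 1-D data below are illustrative.

```python
# Minmod-limited slope reconstruction from cell averages: the limited slope
# is the smaller-magnitude one-sided difference when both agree in sign,
# and zero at extrema, which is what keeps the scheme non-oscillatory.
import numpy as np

def minmod(a, b):
    """Argument of smaller magnitude when signs agree, else zero."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Limited cell slopes from one-sided differences of cell averages."""
    left = u[1:-1] - u[:-2]
    right = u[2:] - u[1:-1]
    return minmod(left, right)

u = np.array([0.0, 0.2, 0.6, 1.0, 1.0])
print(limited_slopes(u))  # slope limited to the gentler difference; 0 at the plateau
```

In the monotone region the limiter picks the smaller one-sided difference; at the plateau on the right the slope is cut to zero, preventing the overshoot a plain linear reconstruction would create.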
NASA Astrophysics Data System (ADS)
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, arising from the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area.
In view of the uncertainties with classical techniques research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.
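Inverse distance weighting (IDW) is one of the standard GIS interpolation techniques discussed above. A minimal sketch follows; the station coordinates, concentrations and power parameter are illustrative assumptions, not the Greater Manchester data.

```python
# Inverse-distance-weighting (IDW): each unsampled point is estimated as a
# weighted average of station values, with weights 1/d^p normalised to sum
# to one, so nearby stations dominate the estimate.
import numpy as np

def idw(points, values, targets, power=2.0, eps=1e-12):
    """Estimate values at `targets` from scattered (points, values) pairs."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power   # clamp avoids division by zero
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # monitor locations
no2 = np.array([40.0, 20.0, 30.0])                          # concentrations
grid = np.array([[0.0, 0.0], [0.5, 0.5]])
print(idw(stations, no2, grid))  # exact at a station; a weighted blend elsewhere
```

Because IDW is an exact interpolator, the surface reproduces each station value at its own location; the choice of power controls how quickly a station's influence decays, which is one source of the between-method differences the paper examines.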
Manktelow, Bradley N.; Seaton, Sarah E.
2012-01-01
Background Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results The first interpolation method ensured that the probability of an observation of an ‘in control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. 
We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
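The first, conservative interpolation method described above can be sketched by choosing integer Poisson quantiles so that the probability of an 'in-control' provider falling outside either limit never exceeds the nominal tail probability. The 2.5% tails (approximate 95% limits) and the expected count are illustrative choices, not the paper's published data.

```python
# Conservative funnel-plot control limits for the SMR: pick the smallest
# observed-count limits whose Poisson tail probabilities are each <= `tail`,
# then convert to SMR = observed / expected.
from math import exp, factorial

def poisson_cdf(k, mu):
    return sum(mu**i * exp(-mu) / factorial(i) for i in range(k + 1))

def funnel_limits(expected, tail=0.025):
    """Return (lower, upper) SMR limits for a given expected count."""
    lo = 0
    while poisson_cdf(lo, expected) < tail:
        lo += 1            # smallest lo with P(X <= lo) >= tail, so P(X < lo) < tail
    hi = lo
    while poisson_cdf(hi, expected) < 1.0 - tail:
        hi += 1            # smallest hi with P(X <= hi) >= 1 - tail, so P(X > hi) <= tail
    return lo / expected, hi / expected

print(funnel_limits(10.0))  # (0.4, 1.7): flag SMRs below 0.4 or above 1.7
```

With 10 expected events, an 'in-control' provider is flagged with probability at most 5% in total; because the Poisson distribution is discrete, the actual probability is smaller, which is exactly the conservatism of the first method the paper analyses.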
Pérez, Alejandro; von Lilienfeld, O Anatole
2011-08-09
Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
MHOST: An efficient finite element program for inelastic analysis of solids and structures
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1988-01-01
An efficient finite element program for 3-D inelastic analysis of gas turbine hot section components was constructed and validated. A novel mixed iterative solution strategy is derived from the augmented Hu-Washizu variational principle in order to nodally interpolate coordinates, displacements, deformation, strains, stresses and material properties. A series of increasingly sophisticated material models incorporated in MHOST include elasticity, secant plasticity, infinitesimal and finite deformation plasticity, creep and unified viscoplastic constitutive model proposed by Walker. A library of high performance elements is built into this computer program utilizing the concepts of selective reduced integrations and independent strain interpolations. A family of efficient solution algorithms is implemented in MHOST for linear and nonlinear equation solution including the classical Newton-Raphson, modified, quasi and secant Newton methods with optional line search and the conjugate gradient method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Jensen; Toto, T.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
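The between-sounding step can be sketched with `np.interp`: for each height level, the atmospheric state at 1-minute resolution is linearly interpolated in time between the bracketing sonde launches. The launch times and temperatures below are illustrative values, not ARM data.

```python
# Per-level linear interpolation in time between sonde launches, producing
# a fixed time-height grid at 1-minute resolution.
import numpy as np

launch_minutes = np.array([0.0, 360.0, 720.0])       # three sondes, 6 h apart
temp_at_level = np.array([[280.0, 284.0, 282.0],     # level 1 temperatures, K
                          [260.0, 262.0, 261.0]])    # level 2 temperatures, K
grid_minutes = np.arange(0.0, 721.0, 1.0)            # 1-minute output grid

interp = np.vstack([np.interp(grid_minutes, launch_minutes, temp_at_level[k])
                    for k in range(temp_at_level.shape[0])])
print(interp[0, 180])  # halfway to the second sonde: 282.0
```

The real VAP applies this level-by-level to all state variables and additionally scales humidity to MWR observations, which a pure time interpolation cannot capture.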
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements to apply Noise Reduction (NR) techniques for images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, is first interpolated by using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multistage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
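A minimal bilinear CFA-interpolation (CFAI) sketch for the green channel of an RGGB Bayer mosaic follows: missing green samples are estimated as the average of the four green neighbours. The RGGB layout and edge-replication border handling are illustrative choices; the paper compares more sophisticated filters.

```python
# Bilinear green-channel CFAI on an RGGB mosaic: at red/blue sites the four
# 4-connected neighbours are all green samples, so their mean fills the gap.
import numpy as np

def interpolate_green(bayer):
    """bayer: 2-D raw mosaic, RGGB pattern; returns a full green plane."""
    h, w = bayer.shape
    green = np.zeros((h, w), dtype=float)
    mask = np.zeros((h, w), dtype=bool)
    mask[0::2, 1::2] = True   # green samples on red rows
    mask[1::2, 0::2] = True   # green samples on blue rows
    green[mask] = bayer[mask]
    # Edge-replication padding keeps the stencil inside the image; border
    # pixels are therefore only approximate in this sketch.
    pad = np.pad(green, 1, mode='edge')
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    green[~mask] = neigh[~mask]
    return green

raw = np.zeros((4, 4))
raw[0::2, 1::2] = 100.0   # a flat scene: every green sample reads 100
raw[1::2, 0::2] = 100.0
print(interpolate_green(raw)[1, 1])  # 100.0: interior gap filled exactly
```

Running NR before this step means filtering each Bayer colour plane at quarter density; running it after means filtering three full planes, which is the processing-power/memory trade-off the paper quantifies.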
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a routine prenatal examination for proper assessment of fetal development, especially when suspected spinal malformations occur while ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnosis accuracy. Image interpolation for higher resolution is required in clinical situations, while many methods fail to preserve edge structures. Edges carry heavy structural messages of objects in visual scenes for doctors to detect suspicions, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it on a group of fetal spine MR images to evaluate its feasibility and performance. This method takes edge messages from the Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images with the targeted factor by the bi-linear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally a HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performances are evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. From quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC).
Visual analysis of ROI shows that the proposed method maintains better consistency in edge structures with the original images. The proposed method classifies edge orientations into four categories and well preserves structures. It generates convincing HR images with fine details and is suitable in real-time situations. Iterative curvature-based interpolation (ICBI) method may result in crisper edges, while the other three methods are sensitive to noise and artifacts.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. 
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
Synchronous Control Method and Realization of Automated Pharmacy Elevator
NASA Astrophysics Data System (ADS)
Liu, Xiang-Quan
Firstly, the control method for the elevator's synchronous motion is provided, and a synchronous control structure for double servo motors based on PMAC is accomplished. Secondly, the synchronous control program of the elevator is implemented by using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors were adjusted. Experiments prove that the control method has high stability and reliability.
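The synchronisation idea can be sketched abstractly: both servo axes follow the same linearly interpolated position profile, and a proportional correction on the measured position error keeps each axis tracking its setpoint. The gain, step count and first-order axis model below are illustrative assumptions, not PMAC settings from the paper.

```python
# Two axes driven by one linearly interpolated setpoint profile; a simple
# proportional position-error compensation closes each axis loop.

def linear_profile(p0, p1, steps):
    """Linearly interpolated setpoints from p0 to p1 (inclusive)."""
    return [p0 + (p1 - p0) * i / steps for i in range(steps + 1)]

def follow(setpoints, kp=0.5):
    """Axis with proportional position-error compensation; returns its path."""
    pos, trace = 0.0, []
    for sp in setpoints:
        pos += kp * (sp - pos)   # correct a fraction of the error each tick
        trace.append(pos)
    return trace

sp = linear_profile(0.0, 10.0, 100)
left, right = follow(sp), follow(sp)
sync_error = max(abs(m - s) for m, s in zip(left, right))
print(sync_error)  # 0.0: identical profiles and gains keep the axes synchronised
```

In a real elevator the two axes see different loads, so the compensation term (and the cross-coupled corrections a PMAC controller provides) is what keeps the residual synchronisation error bounded.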
Nonlinear effects of stretch on the flame front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.
2010-10-15
In all experimental configurations, the flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims at evaluating the error in the laminar burning speed evaluation induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were respectively 300 K and 400 K for methane and iso-octane. Both methodologies (linear and nonlinear) are applied and results in terms of laminar speed and burned gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions in their Markstein length when the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the use of the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology for these critical cases consists in reducing the number of points used in the linear fit by increasing the initial flame radius used.
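The bias discussed above can be illustrated with a toy calculation: if the true speed-stretch relation has curvature, a straight-line fit extrapolated to zero stretch recovers a slightly wrong unstretched speed. All coefficients below are synthetic, chosen only to show the effect, not measured flame data.

```python
# Toy illustration of the linear-methodology bias: generate speeds from a
# curved speed-stretch relation, fit the traditional straight line
# S = S0 - Lb*K, and compare the extrapolated intercept with the true S0.
import numpy as np

s0, lb, curv = 0.36, 2e-4, 5e-4            # "true" parameters (synthetic)
k = np.linspace(100.0, 600.0, 20)           # stretch rates, 1/s
s = s0 - lb * k - curv * (k / 100.0) ** 2   # curved "measurements", m/s

slope, intercept = np.polyfit(k, s, 1)      # the linear methodology
bias = intercept - s0                       # error in the unstretched speed
print(f"fitted S0 = {intercept:.4f} m/s, bias = {bias:.4f} m/s")
```

Restricting the fit to large initial radii (small stretch) shrinks the quadratic term's contribution, which is exactly the mitigation the abstract suggests for the critical equivalence-ratio ranges.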
Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.
Miura, Tomoaki; Huete, Alfredo R
2009-01-01
In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on the time of day and the length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio against reference values linearly interpolated between the preflight and post-flight reference panel readings, resulted in precise but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linearly-interpolated reference values from the preflight and post-flight reference panel readings. Airborne hyperspectral reflectance retrievals obtained using this method were found to be the most accurate and reliable of the three approaches. The performance of the CP method in retrieving accurate reflectance factors was consistent throughout the time of day and for various flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for wavelength regions not affected by atmospheric absorption. The RM method can produce reasonable results only for a very short flight (e.g., < 15 minutes) conducted around local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracy.
An important advantage of the CP method is that the method can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated in this study and the results obtained are directly applicable to ground spectrometer measurements.
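The LI conversion described above can be sketched as a ratio against a time-interpolated panel reference: the reference reading at each acquisition time is linearly interpolated between the preflight and post-flight panel measurements. The times and signal values below are illustrative, not data from the study.

```python
# Linear-interpolation (LI) reflectance calibration: reflectance factor =
# target signal / panel reference, with the reference linearly interpolated
# in time between the preflight and post-flight panel readings.

def li_reflectance(signal, t, t_pre, ref_pre, t_post, ref_post):
    """Reflectance factor from a target signal and interpolated reference."""
    frac = (t - t_pre) / (t_post - t_pre)
    ref_t = ref_pre + frac * (ref_post - ref_pre)
    return signal / ref_t

# Panel read 100 units preflight (t=0 min) and 120 post-flight (t=30 min);
# a target signal of 55 at t=15 min gives 55 / 110 = 0.5.
print(li_reflectance(55.0, 15.0, 0.0, 100.0, 30.0, 120.0))  # 0.5
```

The LI bias the study reports arises when the true illumination does not vary linearly between the two panel readings; the CP method corrects for this by measuring the panel continuously with a second radiometer.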
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
NASA Astrophysics Data System (ADS)
Gowda, P. H.
2016-12-01
Evapotranspiration (ET) is an important process in ecosystems' water budgets and is closely linked to their productivity. Therefore, regional-scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. There are efforts to develop such datasets on regional to global scales, but they often face the limitations of spatial-temporal resolution tradeoffs in satellite remote sensing technology. In this study, we developed frameworks for generating high and medium resolution daily ET maps from Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data, respectively. For developing high resolution (30-m) daily time series ET maps with Landsat TM data, the series version of the Two Source Energy Balance (TSEB) model was used to compute sensible and latent heat fluxes of soil and canopy separately. Landsat 5 (2000-2011) and Landsat 8 (2013-2014) imagery for paths/rows 28/35 and 27/36 covering central Oklahoma was used. MODIS data (2001-2014) covering Oklahoma and the Texas Panhandle were used to develop medium resolution (250-m) time series daily ET maps with the SEBS (Surface Energy Balance System) model. An extensive network of weather stations managed by the Texas High Plains ET Network and the Oklahoma Mesonet was used to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. A linear interpolation sub-model was used to estimate the daily ET between the image acquisition days. Accuracy assessment of the daily ET maps was done against eddy covariance data from two grassland sites at El Reno, OK. Statistical results indicated good performance by the modeling frameworks developed for deriving time series ET maps. Results indicated that the proposed ET mapping framework is suitable for deriving daily time series ET maps at regional scale with Landsat and MODIS data.
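One common way to implement a linear-interpolation sub-model of this kind (an assumption here; the abstract does not give details) is to interpolate the ratio of actual to reference ET between image days, then scale by each day's reference ET from the weather-station network. All values below are synthetic.

```python
# Daily ET between satellite overpasses: linearly interpolate the ET
# fraction (ET / reference ET) observed on image days, then multiply by
# the station-derived daily reference ET.
import numpy as np

image_days = np.array([1.0, 9.0])        # acquisition days of year
etrf_on_images = np.array([0.5, 0.9])    # ET fraction retrieved on those days
days = np.arange(1.0, 10.0)              # every day between the two images
ref_et = np.full(days.size, 6.0)         # daily reference ET, mm/day

etrf = np.interp(days, image_days, etrf_on_images)
daily_et = etrf * ref_et
print(daily_et[4])  # day 5, halfway between images: 0.7 * 6.0 = 4.2 mm/day
```

Interpolating the fraction rather than ET itself lets day-to-day weather variability (carried by the reference ET) show up in the daily maps even though the satellite only samples the fraction every few days.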
NASA Astrophysics Data System (ADS)
Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan
2016-04-01
Humankind is now predominantly urban-based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change, but also impact hot spots. Furthermore, climate change impacts are commonly managed at the city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts at all levels (local, meso and global scale), as well as the inter-scale dependencies of those processes, should be subject to detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews of downscaling procedures cover the various methods according to their application (e.g. downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other examples of classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology, which may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique; in the context of spatial interpolation procedures, it is commonly classified as a deterministic technique, while kriging approaches are classified as stochastic.
Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered synonymous, even though both could be seen as identical, since both refer to methods that handle input modelling factors as variables with certain probability distributions. In addition, recent development is moving towards multi-step methodologies containing deterministic and stochastic components. This evolution leads to the introduction of new terms like hybrid or semi-stochastic approaches, which makes efforts to classify downscaling methods systematically into previously defined categories even more challenging. This work presents a review of statistical downscaling procedures, which classifies the methods in two steps. In the first step, we describe several techniques that produce a single climatic surface based on observations. The methods are classified into two categories using an approximation to the broadest consensual statistical terms: linear and non-linear methods. The second step covers techniques that use simulations to generate alternative surfaces, which correspond to different realizations of the same processes. Those simulations are essential because the amount of real observational data is limited, and such procedures are crucial for modelling extremes. This work emphasises the link between statistical downscaling methods and the research of climate change impacts at the city scale.
NASA Astrophysics Data System (ADS)
Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy
2013-04-01
Complex climate models impose a high computational burden. However, computational limitations may be avoided by using emulators. In this work we present several approaches for dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA), which simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed on the MBM-MEDUSA that explore the Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea level reconstruction. Such experiments are interesting because the BST is an important cause of CO2 variation and the dynamics are potentially nonlinear. The output that we are interested in is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment considers that the BST is held constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control on the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and compare the results obtained using linear and non-linear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation we compare the results of identification obtained by simple linear and more complex nonlinear models. To identify the metamodel in the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment, the MBM's output is highly nonlinear.
In this case we apply nonlinear models such as NARX, the Hammerstein model, and an 'ad-hoc' switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.
2014-01-01
High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy, even with a much smaller number of nodes.
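The building block of n-simplex interpolation is a barycentric-weighted mix of the node outputs. The sketch below shows the 2-simplex (triangle) case only; it is a generic illustration of the interpolation step, not the paper's joint node/output optimization, and the node values are invented:

```python
def barycentric_weights(tri, p):
    """Barycentric coordinates of point p inside triangle tri."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def simplex_interp(tri, node_vals, p):
    # The interpolated output is the barycentric-weighted mix of node outputs.
    return sum(w * v for w, v in zip(barycentric_weights(tri, p), node_vals))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]   # one output channel at each LUT node
print(simplex_interp(tri, vals, (0.25, 0.25)))
```

Because the weights are linear in p and sum to one, the interpolant reproduces the node values exactly and is affine over each simplex, which is what makes the formulation analytically tractable.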
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions, s_{i,j}(u, v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. MATLAB implementation of our fast 2-D cubic spline algorithm is provided. 
We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson-extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g., the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
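To make Topic 1's setting concrete, here is a minimal 1-D Gaussian RBF interpolant with a constant shape parameter eps. This is a toy sketch only: the thesis studies a spatially variable shape parameter, which this code does not implement, and the sample data are invented:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_interpolant(xs, ys, eps=1.0):
    """Gaussian RBF interpolant f(x) = sum_j c_j * exp(-(eps*(x - x_j))^2)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(xi - xj) for xj in xs] for xi in xs]  # interpolation matrix
    c = solve(A, ys)
    return lambda x: sum(cj * phi(x - xj) for cj, xj in zip(c, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]            # samples of x**2
f = rbf_interpolant(xs, ys, eps=1.0)
```

By construction the interpolant reproduces the data at the nodes; the Runge-type trouble the thesis analyzes appears between and beyond the nodes as eps and the node layout change.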
Blind restoration of retinal images degraded by space-variant blur with adaptive blur estimation
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Šroubek, Filip
2013-11-01
Retinal images are often degraded with a blur that varies across the field of view. Because traditional deblurring algorithms assume the blur to be space-invariant, they typically fail in the presence of space-variant blur. In this work we consider the blur to be both unknown and space-variant. To carry out the restoration, we assume that in small regions the space-variant blur can be approximated by a space-invariant point-spread function (PSF). However, instead of deblurring the image on a per-patch basis, we extend individual PSFs by linear interpolation and perform a global restoration. Because the blind estimation of local PSFs may fail we propose a strategy for the identification of valid local PSFs and perform interpolation to obtain the space-variant PSF. The method was tested on artificial and real degraded retinal images. Results show significant improvement in the visibility of subtle details like small blood vessels.
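The PSF-interpolation idea can be sketched as a position-dependent linear blend of local kernels. This is a one-axis, two-kernel simplification, not the authors' method, and the two 3×3 PSFs below are invented:

```python
def spacevariant_psf(psf_left, psf_right, x, width):
    """Approximate the PSF at column x by linearly blending two local PSFs
    estimated at the left and right edges of an image with `width` columns."""
    w = x / float(width - 1)          # 0 at the left patch, 1 at the right
    return [[(1.0 - w) * a + w * b for a, b in zip(ra, rb)]
            for ra, rb in zip(psf_left, psf_right)]

# Two hypothetical 3x3 local PSFs, each normalized to sum to 1.
left = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]   # sharp patch
right = [[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]]  # blurred patch
mid = spacevariant_psf(left, right, 50, 101)
```

A convex blend of normalized kernels stays normalized, so the interpolated PSF remains a valid blur kernel at every column.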
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Capacitive touch sensing: signal and image processing algorithms
NASA Astrophysics Data System (ADS)
Baharav, Zachi; Kakarala, Ramakrishna
2011-03-01
Capacitive touch sensors have been in use for many years, and recently gained center stage with their ubiquitous use in smartphones. In this work we will analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we will formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms, including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up tables, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
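Of the curve-fitting options mentioned, parabolic interpolation is the easiest to sketch: fit a parabola to the strongest sensor reading and its two neighbors, and take the vertex as the sub-sensor finger offset. A generic sketch, not the paper's code; the sample readings are invented:

```python
def parabolic_peak(s_prev, s_peak, s_next):
    """Sub-sensor offset of the maximum, in units of the sensor pitch,
    from three consecutive readings where s_peak is the largest."""
    denom = s_prev - 2.0 * s_peak + s_next
    if denom == 0.0:
        return 0.0                 # flat triple: keep the peak sensor position
    return 0.5 * (s_prev - s_next) / denom

# Finger position = index of the strongest sensor + parabolic offset.
offset = parabolic_peak(0.31, 1.91, 1.51)
```

For samples of a true parabola the formula recovers the vertex exactly, which is why it performs well when the finger response is smooth and single-peaked.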
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
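The Lagrange form underlying the reconstruction can be sketched in one dimension as follows. This is illustrative only: the scheme above interpolates at the 2-D Padua points, which this toy example does not construct, and the sampled function is invented:

```python
import math

def lagrange(xs, ys):
    """Return the Lagrange interpolating polynomial through (xs[i], ys[i])."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            L = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    L *= (x - xj) / (xi - xj)   # cardinal basis polynomial
            total += yi * L
        return total
    return p

# Chebyshev-type nodes on [-1, 1], sampling q(x) = 2x^2 - 1.
xs = [math.cos(k * math.pi / 2) for k in range(3)]
ys = [2 * x * x - 1 for x in xs]
q = lagrange(xs, ys)
```

With three nodes the interpolant reproduces any quadratic exactly; the Padua construction generalizes this kind of near-optimal node placement to two dimensions.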
NASA Technical Reports Server (NTRS)
Richey, Edward, III
1995-01-01
This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated into the DOS executable program UVAFAS. A multiple power law model is available, and can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).
A New Methodology for the Extension of the Impact of Data Assimilation on Ocean Wave Prediction
2008-07-01
Assimilation method: The analysis fields used were corrected by an assimilation method developed at the Norwegian Meteorological Institute (Breivik and Reistad...becomes equal to the solution obtained by optimal interpolation (see Bratseth 1986 and Breivik and Reistad 1994). The iterations begin with...updated accordingly. A more detailed description of the assimilation method is given in Breivik and Reistad (1994). 2.3 Kolmogorov–Zurbenko filters
Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology
NASA Technical Reports Server (NTRS)
Allen, P. A.; Wells, D. N.
2013-01-01
No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
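The general idea of interpolating tabulated solutions between two parameters can be illustrated with plain bilinear interpolation over one table cell. This is a generic sketch, not the authors' methodology, and the corner values below are invented, not taken from the solution tables:

```python
def bilinear(x, y, x0, x1, y0, y1, q00, q10, q01, q11):
    """Bilinearly interpolate a tabulated quantity q(x, y) inside the cell
    [x0, x1] x [y0, y1]; qij is the tabulated value at (xi, yj)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (q00 * (1 - tx) * (1 - ty) + q10 * tx * (1 - ty)
            + q01 * (1 - tx) * ty + q11 * tx * ty)

# Hypothetical values at the corners of one (a/c, a/B) cell.
J = bilinear(0.3, 0.5, 0.2, 0.4, 0.4, 0.6, 10.0, 14.0, 12.0, 18.0)
```

Bilinear interpolation reproduces the tabulated corners exactly and varies smoothly in between, which is the behavior a user of such a table relies on.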
NASA Astrophysics Data System (ADS)
Gutowski, Marek W.
1992-12-01
Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines, or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holling, G.H.
1994-12-31
Variable switched reluctance (VSR) motors are gaining importance for industrial applications. The paper will introduce a novel approach to simplify the computation involved in the control of VSR motors. Results are shown that validate the approach and demonstrate superior performance compared to tabulated control parameters with linear interpolation, which are widely used in implementations.
Terrain Aided Navigation for Remus Autonomous Underwater Vehicle
2014-06-01
[Search-snippet residue from the report's list of figures; recoverable captions: Figure 11, several successive sonar pings displayed together in the LTP frame; Figure 12, the linear interpolation of the sonar pings from Figure 11; Figure 13, SIR particle filter algorithm, after [19]; Figure 26, correlation probability distributions for four different sonar images; Figure 27, particle ...]
NASA Astrophysics Data System (ADS)
Kitanidis, P. K.
1997-05-01
Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.
Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation
NASA Astrophysics Data System (ADS)
Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium
2016-10-01
Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), so the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice in the 3.6 µm channel and once in the 4.5 µm channel of IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compared the results of BLISS and PLD, and comment on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
Size-Dictionary Interpolation for Robot's Adjustment.
Daneshmand, Morteza; Aabloo, Alvo; Anbarjafari, Gholamreza
2015-01-01
This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner, to be used in a realistic virtual fitting room in which the chosen mannequin robot is activated automatically, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, so that it mimics body shapes and sizes instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure finds the set of positions of the biologically inspired actuators that makes the activated mannequin robot resemble the scanned body as closely as possible. It linearly maps the distances between subsequent size templates to the corresponding actuator position sets and then calculates control measures that maintain the same distance proportions; the mathematical description is determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. In this research work, the experimental results of the implementation of the proposed method on Fits.me's mannequin robots are visually illustrated, and an explanation of the remaining steps toward completion of the whole realistic online fitting package is provided.
NASA Astrophysics Data System (ADS)
Qu, Rui; Liu, Shu-Shen; Zheng, Qiao-Feng; Li, Tong
2017-03-01
Concentration addition (CA) was proposed as a reasonable default approach for the ecological risk assessment of chemical mixtures. However, CA cannot predict the toxicity of a mixture in some effect zones if not all components have definite effective concentrations at the given effect, as when some components induce hormesis. In this paper, we developed a new method for the toxicity prediction of various types of binary mixtures: an interpolation method based on the Delaunay triangulation (DT) and Voronoi tessellation (VT) as well as a training set of direct equipartition ray design (EquRay) mixtures, called IDVequ for short. First, the EquRay was employed to design the basic concentration compositions of five binary mixture rays. The toxic effects of single components and mixture rays at different times and various concentrations were determined by time-dependent microplate toxicity analysis. Second, the concentration-toxicity data of the pure components and the various mixture rays served as a training set. DT triangles and VT polygons were constructed from the concentration vertices in the training set. The toxicities of unknown mixtures were predicted by linear interpolation and natural neighbor interpolation of the vertices. The IDVequ successfully predicted the toxicities of various types of binary mixtures.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating the missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances, in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of the precision, dynamics, and separability of this method is given.
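The simplest version of the pipeline described above is: linearly interpolate the irregular phase samples onto a uniform grid, then compute an Allan variance on the resampled data. The sketch below is a minimal illustration with invented sample times, not the multivariance method of the paper:

```python
def lin_interp(ts, xs, t):
    """Linear interpolation of samples (ts, xs) at time t; ts is increasing."""
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            f = (t - ts[i]) / (ts[i + 1] - ts[i])
            return (1 - f) * xs[i] + f * xs[i + 1]
    raise ValueError("t outside data range")

def allan_variance(x, tau):
    """Allan variance of equally spaced phase data x, sampling interval tau,
    from second differences of the phase."""
    d2 = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(len(x) - 2)]
    return sum(d * d for d in d2) / (2.0 * (len(x) - 2) * tau ** 2)

# Irregularly sampled phase of an oscillator with a constant frequency offset.
ts = [0.0, 0.9, 2.3, 3.1, 4.4, 5.0, 6.7, 8.0]
xs = [1e-6 * t for t in ts]                  # x(t) = y0 * t
grid = [i * 1.0 for i in range(9)]           # resample at tau = 1 s
xg = [lin_interp(ts, xs, t) for t in grid]
print(allan_variance(xg, 1.0))
```

For a pure frequency offset the resampled phase is still linear in time, so its second differences vanish and the Allan variance is zero up to roundoff; the paper's concern is precisely how interpolation distorts this picture for real noise types.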
Color management with a hammer: the B-spline fitter
NASA Astrophysics Data System (ADS)
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
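Trilinear interpolation, the degree-one spline mentioned above, can be sketched as nested linear interpolations over one cell of a 3D LUT. A generic illustration, not the B-spline fitter itself; the corner values are invented:

```python
def trilinear(c, u, v, w):
    """Trilinear interpolation inside a unit cube; c[i][j][k] holds the
    value at corner (i, j, k), with i, j, k in {0, 1}."""
    def lerp(a, b, t):
        return a + (b - a) * t
    return lerp(
        lerp(lerp(c[0][0][0], c[0][0][1], w), lerp(c[0][1][0], c[0][1][1], w), v),
        lerp(lerp(c[1][0][0], c[1][0][1], w), lerp(c[1][1][0], c[1][1][1], w), v),
        u)

# Corner values of f(u, v, w) = u + 2v + 3w, which trilinear reproduces exactly.
corners = [[[i + 2 * j + 3 * k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
```

Because each stage is linear, the scheme is exact for affine color transforms; the paper's point is that fitting a smooth B-spline first removes measurement noise before this fast degree-one evaluation is applied.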
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
Web-based visualization of gridded data sets using OceanBrowser
NASA Astrophysics Data System (ADS)
Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie
2015-04-01
OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can be generated at a fixed distance from the coast or at a fixed ocean depth. The user can customize the plot by changing the color-map, the range of the color-bar and the type of the plot (linearly interpolated color, simple contours, filled contours), and can download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA
NASA Technical Reports Server (NTRS)
Ledbetter, F. E.
1994-01-01
Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and for Apple Macintosh computers running MacOS. 
With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
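The abstract describes a two-step procedure: take the logarithm of the abscissae, spline-interpolate onto an evenly spaced grid in the log domain, then exponentiate back. A minimal Python sketch of that idea follows; the function and variable names are illustrative, not DATASPACE's own (the original is FORTRAN 77):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def logspace_resample(t, y, n):
    """Resample variably spaced data (t, y) onto n abscissae that are
    evenly spaced in log(t), hence increasingly spaced in t."""
    logt = np.log10(t)
    spline = CubicSpline(logt, y)              # interpolate in the log domain
    logt_even = np.linspace(logt[0], logt[-1], n)
    return 10.0 ** logt_even, spline(logt_even)

# Toy relaxation-like data with irregular time spacing
t = np.array([0.1, 0.17, 0.5, 2.3, 7.0, 31.0, 100.0])
E = 1.0 / (1.0 + t)
t_new, E_new = logspace_resample(t, E, 8)
```

Because the new grid is even in log time, successive time intervals grow geometrically, giving the close spacing at short times and wide spacing at long times described above.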
Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington
Johnson, Kenneth H.
2016-09-27
This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes which are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (2009 through part of 2012) average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants—hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14—all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were converted to log-normal distributions, and these transformed values were averaged for each well location. The log-normally linearized well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area but limited to avoid unrepresentative extrapolation, using the minimum curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were delineated by contouring the interpolated log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at the applicable established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage.
These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes, or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells’ saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented. “Upper limit” plume delineations were calculated for each contaminant using the same procedure as the “average” plume extent, except with values at each well set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; “lower limit” plumes are calculated at a 5-percent confidence limit around the geometric mean.
These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time-varying concentrations and analytical errors. The plume delineations developed in this study are similar to the previous plume descriptions developed by the U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understandings of contaminant impact or remediation.
Deem, J F; Manning, W H; Knack, J V; Matesich, J S
1989-09-01
A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape-recorded input, and the influence of interpolation. Zero-crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical Magnetic Resonance Imaging settings, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for subsequent analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from low-resolution ones using information from coplanar high-resolution images acquired of the same subject. Furthermore, the reconstruction process is constrained to be physically plausible under the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs
NASA Technical Reports Server (NTRS)
Howell, Lauren R.; Allen, B. Danette
2016-01-01
A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions humans typically cannot enter, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in an autopilot's on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
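The control-point generation itself is not reproduced in the abstract, but the curve family it feeds is standard: a Bézier curve evaluated from its control points, for example by de Casteljau's algorithm. A short illustrative sketch (names are mine, not the paper's):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation between successive control points."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # one interpolation level
    return pts[0]

# Quadratic Bezier in the plane: starts at the first control point,
# ends at the last, and is pulled toward the middle one
ctrl = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
p_mid = de_casteljau(ctrl, 0.5)   # -> [1.0, 1.0]
```

Chaining such curves with matched endpoint derivatives gives the continuous, smooth waypoint-to-waypoint trajectories the paper targets.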
NASA Astrophysics Data System (ADS)
Ortegón Gallego, A.
2010-09-01
In this paper, we describe the calculation methodology we used to determine the spatial structure of solar irradiation over a very complex orography, that of the Canary archipelago: seven islands covering only 7500 km2, with elevations above 1800 m on some islands and reaching 3718 m on Tenerife. We start from the method of Cumulative Semivariograms, already used to address the spatial interpolation of irradiation, although not over complex orography. Several major modifications are introduced to meet our needs, which can be summarized as: a) interpolation of clearness-index data (Kcd, defined as the ratio of the global horizontal data to the corresponding clear-sky global horizontal values obtained from a suitable model) instead of solar irradiation data; b) topographic considerations, such as topographic shadows, are included in the clear-sky model; this directly affects the direct component of solar irradiation and has a minor effect on the diffuse component arising from a non-planar visible horizon; c) the meteorological stations are selected by a criterion of weather proximity rather than the geographic proximity proposed in the original Cumulative Semivariogram methodology; d) the final result is obtained as the composition of several maps produced by error minimization within a neighborhood of each available station, instead of using irradiation isolines. A preliminary result with data registered only by the Canary Islands Institute of Technology's stations, spread over the whole archipelago, is shown. Our results demonstrate both the power of the developed methodology and some limitations due to the extremely complex orography of the Canary Islands, which encompasses a wide variety of microclimatic regions.
Whenever additional information is available, either in the form of empirical knowledge of the local weather or in the form of other available radiometric data sources, the results improve. If all available stations are already used, empirical knowledge of weather conditions can be introduced into our model by means of a strain parameter that modifies the statistical weight associated with the corresponding station in a given neighborhood. On the other hand, whenever it is possible to increase the spatial density of data sources, that is, to incorporate data from other stations, the result improves provided that data quality is good enough. To validate the results obtained with our methodology, we calculate the error between the estimated solar irradiation at the locations of the stations and the records obtained by them.
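The core of modification (a) above can be sketched independently of the semivariogram machinery: interpolate the clearness index between stations rather than the irradiation itself, then multiply by a clear-sky value at the target. Inverse-distance weighting is used here only as a simple stand-in for the paper's cumulative-semivariogram weighting, and all values are invented for illustration:

```python
import numpy as np

def idw(stations_xy, values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate at target_xy (a simple stand-in
    for the paper's cumulative-semivariogram weighting)."""
    d = np.linalg.norm(stations_xy - target_xy, axis=1)
    if np.any(d < 1e-12):                     # target sits on a station
        return float(values[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # km
g_measured = np.array([520.0, 610.0, 480.0])   # global horizontal, W/m^2
g_clear = np.array([800.0, 820.0, 790.0])      # clear-sky model values
kc = g_measured / g_clear                      # clearness index per station

kc_target = idw(stations, kc, np.array([5.0, 5.0]))
g_clear_target = 810.0                         # clear-sky value at target
g_estimate = kc_target * g_clear_target
```

Interpolating Kc instead of irradiation keeps the strong, deterministic clear-sky (and shadowing) structure out of the interpolated field, leaving only the smoother cloud-driven variability to be estimated spatially.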
PyCCF: Python Cross Correlation Function for reverberation mapping studies
NASA Astrophysics Data System (ADS)
Sun, Mouyuan; Grier, C. J.; Peterson, B. M.
2018-05-01
PyCCF emulates a Fortran program written by B. Peterson for use with reverberation mapping. The code cross-correlates two light curves that are unevenly sampled, using linear interpolation, and measures the peak and centroid of the cross-correlation function. In addition, it is possible to run Monte Carlo iterations using flux randomization and random subset selection (RSS) to produce cross-correlation centroid distributions to estimate the uncertainties in the cross-correlation results.
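The interpolated cross-correlation idea can be sketched in a few lines: for each trial lag, linearly interpolate one curve at the shifted times of the other and compute the correlation coefficient. This is a simplified illustration, not PyCCF's actual implementation:

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Cross-correlation of two (possibly unevenly sampled) light curves:
    for each trial lag, linearly interpolate curve 2 at t1 + lag and
    correlate against curve 1."""
    return np.array([np.corrcoef(f1, np.interp(t1 + lag, t2, f2))[0, 1]
                     for lag in lags])

# Curve 2 is curve 1 delayed by 2.0 time units; t1 is unevenly sampled
t1 = np.sqrt(np.linspace(0.0, 400.0, 80))     # uneven spacing, 0..20
t2 = np.linspace(0.0, 24.0, 200)
f1 = np.sin(t1)
f2 = np.sin(t2 - 2.0)

lags = np.linspace(0.0, 4.0, 81)
ccf = iccf(t1, f1, t2, f2, lags)
peak_lag = lags[np.argmax(ccf)]               # recovers the ~2.0 delay
```

PyCCF additionally measures the CCF centroid above a peak-fraction threshold and repeats the measurement under flux randomization and RSS to build the lag uncertainty distributions; those steps are omitted here.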
Effect of data gaps on correlation dimension computed from light curves of variable stars
NASA Astrophysics Data System (ADS)
George, Sandip V.; Ambika, G.; Misra, R.
2015-11-01
Observational data, especially astrophysical data, are often limited by gaps that arise from missed observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, these smoothing techniques can introduce artificial effects, especially when nonlinear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, like the Rössler and Lorenz, with the frequency of occurrence and the size of missing data drawn from two Gaussian distributions. Then we study the changes in correlation dimension with changes in the distributions of position and size of gaps. We find that for a considerable range of mean gap frequency and size, the value of correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Thus our study introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from light curves of three variable stars: R Scuti, U Monocerotis, and SU Tauri. We also demonstrate how a cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the reliable range for D2 values.
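The gap model described (number of gaps and gap sizes drawn from two Gaussian distributions) is simple to sketch; the function name, defaults, and data are illustrative, not the authors' code:

```python
import numpy as np

def introduce_gaps(x, mean_num, mean_size, sd_num=1.0, sd_size=1.0, seed=0):
    """Delete chunks from a series: the number of gaps and each gap's
    length are drawn from Gaussians; gap positions are uniform."""
    rng = np.random.default_rng(seed)
    n = len(x)
    keep = np.ones(n, dtype=bool)
    num = max(0, int(round(rng.normal(mean_num, sd_num))))
    for _ in range(num):
        size = max(1, int(round(rng.normal(mean_size, sd_size))))
        start = int(rng.integers(0, max(1, n - size)))
        keep[start:start + size] = False      # excise this chunk
    return x[keep], keep

series = np.sin(np.linspace(0.0, 50.0, 1000))
gapped, mask = introduce_gaps(series, mean_num=5, mean_size=20)
```

The gapped series (not an interpolated fill-in) is what would then be fed to the Grassberger-Procaccia correlation-dimension estimate, so the test is of D2's robustness to the missing samples themselves.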
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space.
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
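The sparse-to-dense MVF step with thin-plate splines can be sketched with SciPy's general radial-basis-function interpolator; the paper's own implementation is not shown, so this is a simplified stand-in with invented control-point data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse motion vectors defined at a few control points (illustrative)
control_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
motion_vecs = np.array([[0.1, 0.0], [0.0, 0.2], [0.2, 0.1], [0.0, 0.0]])

# Thin-plate-spline interpolation of the (vector-valued) field
tps = RBFInterpolator(control_pts, motion_vecs, kernel='thin_plate_spline')

# Densify onto a regular grid, as a motion compensated backprojection needs
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
dense_mvf = tps(np.column_stack([gx.ravel(), gy.ravel()]))   # shape (121, 2)
```

With zero smoothing the TPS passes exactly through the sparse vectors while producing the smoothest (minimum bending energy) field in between, which is one reason it compared favorably against Shepard's method and simple averaging.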
NASA Technical Reports Server (NTRS)
Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)
2001-01-01
In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
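The POD step at the heart of the ROM can be sketched via the SVD of a snapshot matrix; DEIM and TPWL are omitted, and the data and names here are illustrative, not the paper's satellite models:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Truncated POD basis: the leading r left singular vectors of the
    snapshot matrix (columns are state vectors at sampled times)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Synthetic snapshots that genuinely live in a 2-dimensional subspace
t = np.linspace(0.0, 1.0, 50)
modes = np.column_stack([np.sin(np.pi * np.arange(100) / 99),
                         np.cos(np.pi * np.arange(100) / 99)])  # (100, 2)
X = modes @ np.vstack([np.exp(-t), t])                           # (100, 50)

Phi = pod_basis(X, 2)
X_rom = Phi @ (Phi.T @ X)     # project to reduced coordinates and back
err = np.linalg.norm(X - X_rom) / np.linalg.norm(X)
```

Projecting the full ODE/DAE system onto `Phi` gives the reduced model; the speed-up reported above comes from evolving those few reduced coordinates (with DEIM handling the nonlinear radiative terms) instead of the full-dimensional state.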
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
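The bias-correction step of BCSD is commonly implemented as a quantile mapping between the model climatology and observations; a minimal sketch under that assumption (not the authors' code, and with invented data):

```python
import numpy as np

def quantile_map(model_clim, obs_clim, x):
    """Map a model value x onto the observed climatology by matching its
    empirical quantile in the model distribution."""
    q = np.searchsorted(np.sort(model_clim), x, side='right') / len(model_clim)
    return float(np.quantile(obs_clim, np.clip(q, 0.0, 1.0)))

model = np.linspace(0.0, 10.0, 101)   # model precipitation climatology
obs = model * 1.5 + 2.0               # observations: model is biased low
corrected = quantile_map(model, obs, 5.0)   # close to 1.5 * 5.0 + 2.0
```

Because the mapping preserves quantile rank rather than raw value, the corrected fields reproduce the observed distribution while retaining the model's day-to-day (and scenario) variability, which is why LI and SD alone remained unacceptably biased while BCSD did not.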
A bivariate rational interpolation with a bi-quadratic denominator
NASA Astrophysics Data System (ADS)
Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu
2006-10-01
In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is introduced; these describe the "weight" of the interpolation points in the local interpolating region.
Biped Robot Gait Planning Based on 3D Linear Inverted Pendulum Model
NASA Astrophysics Data System (ADS)
Yu, Guochen; Zhang, Jiapeng; Bo, Wu
2018-01-01
In order to optimize the biped robot's gait, the robot's walking motion is simplified to the 3D linear inverted pendulum motion model. The Center of Mass (CoM) locus is determined from the relationship between the CoM and the Zero Moment Point (ZMP) locus, with the ZMP locus planned in advance. The forward and lateral gaits are then simplified as connecting-rod structures, and the swing-leg trajectory is generated using B-spline interpolation. The stability of the walking process is discussed in conjunction with the ZMP equation. Finally, a system simulation is carried out under the given conditions to verify the validity of the proposed planning method.
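For a ZMP held fixed at the origin, the linear inverted pendulum's CoM dynamics x'' = (g / z_c) x have a closed-form solution, which is the building block of CoM planning from a piecewise-constant ZMP plan. A sketch (z_c is the constant CoM height; T_c = sqrt(z_c / g) is the pendulum time constant; numeric values are illustrative):

```python
import math

def lipm_com(x0, v0, z_c, t, g=9.81):
    """CoM position and velocity of the linear inverted pendulum with
    the ZMP held at the origin: x'' = (g / z_c) * x."""
    Tc = math.sqrt(z_c / g)
    x = x0 * math.cosh(t / Tc) + Tc * v0 * math.sinh(t / Tc)
    v = (x0 / Tc) * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
    return x, v

# One single-support phase: CoM starts behind the stance foot, moving forward
x, v = lipm_com(x0=-0.1, v0=0.35, z_c=0.8, t=0.4)
```

Concatenating such phases, with the ZMP stepping from footprint to footprint, yields the CoM locus that the planned ZMP trajectory determines.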
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F
2012-05-01
Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Wells, Douglas N.
2013-01-01
No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
A revised ground-motion and intensity interpolation scheme for shakemap
Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.
2010-01-01
We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
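In its simplest form, an uncertainty-weighted combination of data types at one grid point is an inverse-variance weighted average, whose by-product is exactly the combined-uncertainty estimate mentioned above. This is a deliberate simplification of the actual ShakeMap scheme, with invented numbers:

```python
import numpy as np

def combine(values, variances):
    """Inverse-variance weighted estimate and its variance, combining,
    e.g., a direct observation, a converted intensity, and a model
    (GMPE) prediction at one grid point."""
    w = 1.0 / np.asarray(variances, dtype=float)
    v = np.asarray(values, dtype=float)
    estimate = float(np.sum(w * v) / np.sum(w))
    return estimate, float(1.0 / np.sum(w))     # combined variance

# Illustrative PGA values (g) with growing uncertainty by data type
est, var = combine([0.32, 0.41, 0.25], [0.01, 0.04, 0.09])
```

Note the combined variance is smaller than any single input variance, so every additional datum, however uncertain, tightens the map, and the per-point variance surface is available for loss calculations at no extra cost.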
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework.
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
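The decomposition idea, separable passes of independent 1-D interpolation, can be illustrated with plain bilinear resizing. This sketch substitutes ordinary linear passes for the paper's registration-based control grid interpolator, so it shows the decomposition structure only, not DMCGI's accuracy:

```python
def interp1d(row, new_len):
    """Linear interpolation of one sample row onto new_len points (both >= 2)."""
    n = len(row)
    out = []
    for j in range(new_len):
        x = j * (n - 1) / (new_len - 1)   # position in original coordinates
        i = min(int(x), n - 2)            # left neighbor index
        t = x - i                         # fractional offset
        out.append((1.0 - t) * row[i] + t * row[i + 1])
    return out

def resize2d(img, h, w):
    """Decomposed 2-D resize: 1-D passes along rows, then along columns."""
    rows = [interp1d(list(r), w) for r in img]
    cols = [interp1d([row[j] for row in rows], h) for j in range(w)]
    return [[cols[j][i] for j in range(w)] for i in range(h)]

out = resize2d([[0.0, 2.0], [4.0, 6.0]], 3, 3)
```

Each pass only ever sees a 1-D signal, which is what makes the framework cheap and hardware-friendly; swapping in a smarter 1-D interpolator upgrades the whole pipeline.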
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one class of methods that has been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Thermochemical Data for Propellant Ingredients and their Products of Explosion
1949-12-01
gases except perhaps at temperatures below 2000°K. The logarithms of all the equilibrium constants except Ko have been tabulated since these logarithms...have almost constant first differences. Linear interpolation may lead to an error of a unit or two in the third decimal place for Ko but the...dissociation products OH, H and KO will be formed and at still higher temperatures the other dissociation products O2, O, N and C will begin to appear
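Interpolating tabulated logarithms, as the excerpt describes, is a one-line computation once the bracketing temperatures are found; when the first differences of the logarithms are nearly constant, the linear-interpolation error stays in the last decimal places. A minimal sketch (the table values below are hypothetical, not taken from the 1949 report):

```python
def interp_logk(table, T):
    """Linearly interpolate tabulated log10(K) values in temperature."""
    temps = sorted(table)
    for t0, t1 in zip(temps, temps[1:]):
        if t0 <= T <= t1:
            f = (T - t0) / (t1 - t0)          # fractional position in the interval
            return table[t0] + f * (table[t1] - table[t0])
    raise ValueError("temperature outside table range")

# hypothetical log10(K) values with nearly constant first differences
table = {2000: -3.200, 2100: -2.950, 2200: -2.710}
```

At 2050°K this returns the midpoint of the first interval; errors of "a unit or two in the third decimal place" arise only where the first differences drift.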
Evidence from Central Mexico Supporting the Younger Dryas Extraterrestrial Impact Hypothesis
2012-03-05
identified glassy spherules, CSps, high-temperature melt-rocks, shocked quartz, and a YDB black mat analogue in the Venezuelan Andes. Those authors...debate, we have examined a diverse assemblage of YDB markers at Lake Cuitzeo using a more comprehensive array of analytical techniques than in previous...accelerator mass spectroscopy (AMS) 14C dates on bulk sediment and used in a linear interpolation with the YD onset identified at approximately 2.8 m. To
Functional Techniques for Data Analysis
NASA Technical Reports Server (NTRS)
Tomlinson, John R.
1997-01-01
This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and the Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals yielding new methods that can be used to solve Prony's problem.
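Prony's problem, recovering exponential bases from equally spaced samples, admits a classical linear-prediction solution; the Matrix Pencil and Osculatory Interpolation methods named above are alternative routes to the same bases through different linear functionals. A two-exponential sketch of the classical approach (not the dissertation's general method):

```python
def prony2(x):
    """Classical Prony fit of two real exponential bases to four samples.

    Solves the linear-prediction equations
        x[2] = a1*x[1] + a2*x[0],   x[3] = a1*x[2] + a2*x[1]
    then factors the characteristic polynomial z^2 - a1*z - a2,
    whose roots are the exponential bases.
    """
    det = x[1] * x[1] - x[2] * x[0]
    a1 = (x[2] * x[1] - x[3] * x[0]) / det
    a2 = (x[1] * x[3] - x[2] * x[2]) / det
    disc = (a1 * a1 + 4.0 * a2) ** 0.5
    return sorted(((a1 - disc) / 2.0, (a1 + disc) / 2.0))

# samples of 0.5**n + 2.0**n for n = 0..3
samples = [2.0, 2.5, 4.25, 8.125]
bases = prony2(samples)
```

From exact samples the two bases are recovered exactly; with noisy data the larger least-squares systems (or the Matrix Pencil) are preferred.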
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
NASA Astrophysics Data System (ADS)
Xu, Jianxin; Liang, Hong
2013-07-01
Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation, and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper focuses on the problem of uneven gray levels where two adjacent textures intersect. A new algorithm is presented: per-pixel linear interpolation along a loop line buffer. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling workflow has three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is assembled from the two part models. When the two part models are combined, an obvious fissure line appears in the overlapping section of the two adjacent textures. Some researchers decrease the brightness of all pixels in the two adjacent textures, but with some of these algorithms the fissure line still remains. The algorithm presented here corrects the gray unevenness of the two adjacent textures: the fissure line in the overlapping textures is eliminated, and the gray transition across the overlap becomes smoother.
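The per-pixel linear interpolation across an overlap can be sketched as a one-row feathering blend: the weight ramps linearly from one texture to the other, so gray levels meet smoothly instead of leaving a seam. This simplifies the paper's loop-line-buffer scheme to a single pixel row; the linear ramp weights are the assumption:

```python
def blend_overlap(row_a, row_b):
    """Per-pixel linear blend of two texture rows across their overlap.

    The weight on row_a ramps from 1 at the left edge of the overlap to 0
    at the right edge (and vice versa for row_b), so the transition between
    the two textures is gradual rather than an abrupt fissure line.
    """
    n = len(row_a)
    return [((n - 1 - i) * a + i * b) / (n - 1)
            for i, (a, b) in enumerate(zip(row_a, row_b))]

blended = blend_overlap([100, 100, 100], [40, 40, 40])
```

The endpoints match the respective source textures exactly, which is what keeps the blend seamless against the non-overlapping regions on either side.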
Estimation of water surface elevations for the Everglades, Florida
Palaseanu, Monica; Pearlstine, Leonard
2008-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring gages and modeling methods that provides scientists and managers with current (2000–present) online water surface and water depth information for the freshwater domain of the Greater Everglades. This integrated system presents data on a 400-m square grid to assist in (1) large-scale field operations; (2) integration of hydrologic and ecologic responses; (3) supporting biological and ecological assessment of the implementation of the Comprehensive Everglades Restoration Plan (CERP); and (4) assessing trophic-level responses to hydrodynamic changes in the Everglades. This paper investigates the radial basis function multiquadric method of interpolation to obtain a continuous freshwater surface across the entire Everglades using radio-transmitted data from a network of water-level gages managed by the US Geological Survey (USGS), the South Florida Water Management District (SFWMD), and the Everglades National Park (ENP). Since the hydrological connection is interrupted by canals and levees across the study area, boundary conditions were simulated by linearly interpolating along those features and integrating the results together with the data from marsh stations to obtain a continuous water surface through multiquadric interpolation. The absolute cross-validation errors greater than 5 cm correlate well with the local outliers and the minimum distance between the closest stations within 2000-m radius, but seem to be independent of vegetation or season.
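The multiquadric method solves a small linear system so that the fitted surface passes exactly through every gage reading. A self-contained sketch; the shape parameter and the naive solver are illustrative choices, not EDEN's:

```python
import math

def solve(A, b):
    """Naive Gauss-Jordan elimination (adequate for a handful of gages)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def multiquadric_surface(pts, values, shape=1.0):
    """Fit z(x, y) = sum_j w_j * sqrt(|x - x_j|^2 + c^2) through all points."""
    phi = lambda p, q: math.sqrt(
        (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + shape ** 2)
    w = solve([[phi(p, q) for q in pts] for p in pts], values)
    return lambda x, y: sum(wj * phi((x, y), q) for wj, q in zip(w, pts))

# three hypothetical water-level gages and their stages
f = multiquadric_surface([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, 2.0, 3.0])
```

The interpolation matrix for the multiquadric kernel is nonsingular for distinct stations, which is why the surface honors every observation exactly; cross-validation errors such as those quoted above come from withholding one station at a time.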
BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization
NASA Astrophysics Data System (ADS)
Lee, Eun Young; Novotny, Johannes; Wagreich, Michael
2016-06-01
Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five different interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps; (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediment in the central Vienna Basin.
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
NASA Astrophysics Data System (ADS)
Hodge, R.; Brasington, J.; Richards, K.
2009-04-01
The ability to collect 3D elevation data at mm-resolution from in-situ natural surfaces, such as fluvial and coastal sediments, rock surfaces, soils and dunes, is beneficial for a range of geomorphological and geological research. From these data the properties of the surface can be measured, and Digital Terrain Models (DTM) can be constructed. Terrestrial Laser Scanning (TLS) can quickly collect such 3D data with mm-precision and mm-spacing. This paper presents a methodology for the collection and processing of such TLS data, and considers how the errors in this TLS data can be quantified. TLS has been used to collect elevation data from fluvial gravel surfaces. Data were collected from areas of approximately 1 m2, with median grain sizes ranging from 18 to 63 mm. Errors are inherent in such data as a result of the precision of the TLS, and the interaction of factors including laser footprint, surface topography, surface reflectivity and scanning geometry. The methodology for the collection and processing of TLS data from complex surfaces like these fluvial sediments aims to minimise the occurrence of, and remove, such errors. The methodology incorporates taking scans from multiple scanner locations, averaging repeat scans, and applying a series of filters to remove erroneous points. Analysis of 2.5D DTMs interpolated from the processed data has identified geomorphic properties of the gravel surfaces, including the distribution of surface elevations, preferential grain orientation and grain imbrication. However, validation of the data and interpolated DTMs is limited by the availability of techniques capable of collecting independent elevation data of comparable quality. Instead, two alternative approaches to data validation are presented. The first consists of careful internal validation to optimise filter parameter values during data processing combined with a series of laboratory experiments.
In the experiments, TLS data were collected from a sphere and planes with different reflectivities to measure the accuracy and precision of TLS data of these geometrically simple objects. Whilst this first approach allows the maximum precision of TLS data from complex surfaces to be estimated, it cannot quantify the distribution of errors within the TLS data and across the interpolated DTMs. The second approach enables this by simulating the collection of TLS data from complex surfaces of a known geometry. This simulated scanning has been verified through systematic comparison with laboratory TLS data. Two types of surface geometry have been investigated: simulated regular arrays of uniform spheres used to analyse the effect of sphere size; and irregular beds of spheres with the same grain size distribution as the fluvial gravels, which provide a comparable complex geometry to the field sediment surfaces. A series of simulated scans of these surfaces has enabled the magnitude and spatial distribution of errors in the interpolated DTMs to be quantified, as well as demonstrating the utility of the different processing stages in removing errors from TLS data. As well as demonstrating the application of simulated scanning as a technique to quantify errors, these results can be used to estimate errors in comparable TLS data.
Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow
NASA Astrophysics Data System (ADS)
Barker, Blake
2014-10-01
We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.
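The "extremely good convergence properties for analytic interpolation" invoked in the bootstrap argument can be illustrated with barycentric interpolation at Chebyshev points, where the error for an analytic function decays geometrically in the number of nodes. This is a generic illustration of the convergence phenomenon, not the paper's interval-arithmetic computation:

```python
import math

def cheb_interp(f, n):
    """Barycentric interpolation of f at the n+1 Chebyshev points on [-1, 1].

    For analytic f the interpolation error decays geometrically in n,
    which is what lets a modest number of evaluations beat poor sup-norm
    bounds in a rigorous computation.
    """
    xs = [math.cos(math.pi * k / n) for k in range(n + 1)]
    ws = [(0.5 if k in (0, n) else 1.0) * (-1) ** k for k in range(n + 1)]
    fs = [f(x) for x in xs]

    def p(x):
        num = den = 0.0
        for xk, wk, fk in zip(xs, ws, fs):
            if x == xk:              # exactly at a node: return the sample
                return fk
            num += wk * fk / (x - xk)
            den += wk / (x - xk)
        return num / den

    return p

p = cheb_interp(math.exp, 12)
```

With only thirteen nodes the interpolant of an entire function like exp is already accurate to many digits everywhere on the interval, a rate far beyond any polynomial-in-1/n bound.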
Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps
NASA Astrophysics Data System (ADS)
Gundogdu, Ismail Bulent
2017-01-01
Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
A modified dual-level algorithm for large-scale three-dimensional Laplace and Helmholtz equation
NASA Astrophysics Data System (ADS)
Li, Junpu; Chen, Wen; Fu, Zhuojia
2018-01-01
A modified dual-level algorithm is proposed in this article. With the help of the dual-level structure, the fully populated interpolation matrix on the fine level is transformed into a locally supported sparse matrix, which overcomes the severe ill-conditioning and excessive storage requirements that result from a fully populated interpolation matrix. The kernel-independent fast multipole method is adopted to expedite the solution of the linear equations on the coarse level. Numerical experiments with up to 2 million fine-level nodes have been achieved successfully. Notably, the proposed algorithm needs to place only 2-3 coarse-level nodes per wavelength in each direction to obtain a reasonable solution, almost down to the minimum requirement allowed by Shannon's sampling theorem. In a real human head model example, the proposed algorithm simulates computationally very challenging exterior high-frequency harmonic acoustic wave propagation up to 20,000 Hz.
NASA Astrophysics Data System (ADS)
Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu
2015-12-01
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete observational data are needed for modeling and designing solar radiation system applications. Unfortunately, incomplete solar radiation records frequently occur due to several technical problems, mainly attributable to the monitoring device. To address this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, such as linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as extendable work, an investigation of the potential of the cubic Bezier technique and the cubic Said-Ball method as estimation tools. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
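The linear option among the compared piecewise techniques can be sketched as gap-bridging between the nearest valid hourly samples. This is illustrative only; the study also evaluates spline, cubic, nearest-neighbor, Bezier, and Said-Ball variants:

```python
def fill_missing(values):
    """Fill None gaps in an hourly series by piecewise-linear interpolation.

    Interior gaps are bridged between the nearest valid samples on either
    side; leading or trailing gaps (no bracketing sample) are left as None.
    """
    out = list(values)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)             # fractional position in the gap
            out[i] = (1 - t) * out[a] + t * out[b]
    return out

filled = fill_missing([100.0, None, None, 400.0, None, 600.0])
```

Linear imputation preserves the trend across each gap but flattens any curvature within it, which is exactly where the cubic-type methods in the study gain their advantage.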
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
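The inversion can be sketched directly from the description: a reverse linear interpolation between the EU values at the count limits seeds Newton-Raphson, which iterates until the correction falls below one count. A Python sketch of that scheme (the paper's implementation is FORTRAN, and the calibration coefficients below are hypothetical, not Dryden's):

```python
def eu_to_counts(poly, eu, lo, hi, tol=0.5, max_iter=20):
    """Invert a counts-to-EU calibration polynomial by Newton-Raphson.

    poly holds polynomial coefficients, lowest order first, mapping
    telemetry counts to engineering units. The initial guess comes from
    reverse linear interpolation between the EU values at the count
    limits lo and hi; iteration stops once the correction is below one
    count (tol = 0.5).
    """
    f = lambda c: sum(a * c ** i for i, a in enumerate(poly))
    df = lambda c: sum(i * a * c ** (i - 1) for i, a in enumerate(poly) if i)
    # reverse linear interpolation for the starting count
    c = lo + (eu - f(lo)) * (hi - lo) / (f(hi) - f(lo))
    for _ in range(max_iter):
        step = (f(c) - eu) / df(c)
        c -= step
        if abs(step) < tol:  # correction below one count: converged
            break
    return round(c)

cal = [-10.0, 0.02, 3e-7]  # hypothetical 2nd-order counts-to-EU calibration
```

For a monotonic calibration over the valid count range, a couple of Newton steps from the linear seed land within a fraction of a count, which is why no stored interpolation tables are needed.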
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
Comparison of global sst analyses for atmospheric data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phoebus, P.A.; Cummings, J.A.
1995-03-17
Traditionally, atmospheric models were executed using a climatological estimate of the sea surface temperature (SST) to define the marine boundary layer. More recently, particularly since the deployment of remote sensing instruments and the advent of multichannel SST observations, atmospheric models have been improved by using more timely estimates of the actual state of the ocean. Typically, some type of objective analysis is performed using the data from satellites along with ship, buoy, and bathythermograph observations, and perhaps even climatology, to produce a weekly or daily analysis of global SST. Some of the earlier efforts to produce real-time global temperature analyses have been described by Clancy and Pollak (1983) and Reynolds (1988). However, just as new techniques have been developed for atmospheric data assimilation, improvements have been made to ocean data assimilation systems as well. In 1988, the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC) implemented a global three-dimensional ocean temperature analysis that was based on the optimum interpolation methodology (Clancy et al., 1990). This system, the Optimum Thermal Interpolation System (OTIS 1.0), was initially distributed on a 2.5° resolution grid, and was later modified to generate fields on a 1.25° grid (OTIS 1.1; Clancy et al., 1992). Other optimum interpolation-based analyses (OTIS 3.0) were developed by FNMOC to perform high-resolution three-dimensional ocean thermal analyses in areas with strong frontal gradients and clearly defined water mass characteristics.
Souza, W.R.
1999-01-01
This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.
3D Thermal Stratification of Koycegiz Lake, Turkey.
NASA Astrophysics Data System (ADS)
Gurcan, Tugba; Kurtulus, Bedri; Avsar, Ozgur; Avsar, Ulas
2017-04-01
Water temperature in lakes, streams and coastal areas is an important indicator for several purposes (water quality, aquatic organisms, land use, etc.). There are over a hundred lakes in Turkey, most of them located in the area known as the Lake District in southwestern Turkey. The study area is located in the south and southwest of Turkey, in the Muǧla region. The present study focuses on determining possible thermocline changes in Lake Köyceǧiz by in-situ measurements. The measurements were made during two snapshot campaigns in July and August 2013. Using the Muǧla Sıtkı Koçman University geological engineering floating platform, temperature, specific conductance, salinity and depth values were measured with YSI 6600 and Horiba U2 devices at the surface and at depth over a specific grid in Lake Köyceǧiz; the water depth and coordinates were measured by GPS. Scattered data interpolation is used to perform interpolation on a scattered dataset that resides in 3D space. The 3D temperature color mesh grid was generated using Delaunay triangulation and natural neighbor interpolation. At the end of the study a 3D conceptual lake temperature dynamics model was reconstructed using MATLAB functions. The results show that Lake Köyceǧiz is a meromictic lake with a significant decrease in temperature at 7 m depth. In this regard, we would also like to thank the TUBITAK project (112Y137), the French Embassy in Turkey and the Sıtkı Koçman Foundation for their financial support.
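The Delaunay-based step of the gridding described above amounts to linear interpolation with barycentric weights inside each triangle of the mesh; a minimal sketch follows (natural neighbor weights additionally require the Voronoi diagram and are omitted here). The cast positions and temperatures are invented for illustration.

```python
# Linear interpolation of temperature inside one triangle of a
# Delaunay mesh via barycentric coordinates -- the building block of
# triangulation-based gridding of scattered lake measurements.
def barycentric_interp(p, tri, values):
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * values[0] + w2 * values[1] + w3 * values[2]

# Three hypothetical measurement stations (x, y in km), temperatures in deg C
tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
temps = [24.0, 22.0, 20.0]
```

At a vertex the interpolant reproduces the station value exactly; at the triangle centroid it returns the mean of the three values.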
Chemical association in simple models of molecular and ionic fluids. III. The cavity function
NASA Astrophysics Data System (ADS)
Zhou, Yaoqi; Stell, George
1992-01-01
Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a ``linear'' approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, the equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption, and thus the pollution released to the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible for 2% of the CO2 released to the atmosphere, and second, because it reduces the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from flight experimental test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bi-linear interpolations allowed finding the required weather data. The search space was modelled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The trajectory was optimized in its vertical reference trajectory using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was optimized simultaneously for the vertical and lateral reference navigation plans while fulfilling a Required Time of Arrival constraint using metaheuristic algorithms, including artificial bee colony and ant colony optimization. Results were validated using the FlightSIM software, a commercial Flight Management System, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All algorithms were able to reduce the fuel burn and the flight costs.
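The bi-linear interpolation used to evaluate a gridded weather field between forecast nodes can be sketched as follows. The grid coordinates and field values are illustrative, not Weather Canada data.

```python
# Bilinear interpolation of a gridded weather field (e.g. wind speed)
# at an off-grid point: interpolate along x on the two bounding
# y-rows, then along y between the two intermediate results.
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    a = f00 * (1 - tx) + f10 * tx   # interpolated value on row y0
    b = f01 * (1 - tx) + f11 * tx   # interpolated value on row y1
    return a * (1 - ty) + b * ty
```

At a grid corner the formula returns the corner value; at the cell centre it returns the mean of the four corner values.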
Deep Wavelet Scattering for Quantum Energy Regression
NASA Astrophysics Data System (ADS)
Hirn, Matthew
Physical functionals are usually computed as solutions of variational problems or from solutions of partial differential equations, which may require huge computations for complex systems. Quantum chemistry calculation of ground state molecular energies is such an example. Indeed, if x is a quantum molecular state, then the ground state energy E_0(x) is the minimum eigenvalue solution of the time-independent Schrödinger equation, which is computationally intensive for large systems. Machine learning algorithms do not simulate the physical system but estimate solutions by interpolating values provided by a training set of known examples {(x_i, E_0(x_i))}_{i <= n}. However, precise interpolations may require a number of examples that is exponential in the system dimension, and are thus intractable. This curse of dimensionality may be circumvented by computing interpolations in smaller approximation spaces, which take advantage of physical invariants. Linear regressions of E_0 over a dictionary Φ = {φ_k}_k compute an approximation Ẽ_0 as Ẽ_0(x) = Σ_k w_k φ_k(x), where the weights {w_k}_k are selected to minimize the error between E_0 and Ẽ_0 on the training set. The key to such a regression approach then lies in the design of the dictionary Φ. It must be intricate enough to capture the essential variability of E_0(x) over the molecular states x of interest, while simple enough that evaluation of Φ(x) is significantly less intensive than a direct quantum mechanical computation (or approximation) of E_0(x). In this talk we present a novel dictionary Φ for the regression of quantum mechanical energies based on the scattering transform of an intermediate, approximate electron density representation ρ_x of the state x. The scattering transform has the architecture of a deep convolutional network, composed of an alternating sequence of linear filters and nonlinear maps.
Whereas in many deep learning tasks the linear filters are learned from the training data, here the physical properties of E_0 (invariance to isometric transformations of the state x, stability to deformations of x) are leveraged to design a collection of linear filters ρ_x * ψ_λ for an appropriate wavelet ψ. These linear filters are composed with the nonlinear modulus operator, and the process is iterated so that at each layer stable, invariant features are extracted: φ_k(x) = ‖ |⋯| ρ_x * ψ_{λ_1} | * ψ_{λ_2} | ⋯ * ψ_{λ_m} ‖, for k = (λ_1, …, λ_m), m = 1, 2, …. The scattering transform thus encodes not only interactions at multiple scales (in the first layer, m = 1), but also features that encode complex phenomena resulting from a cascade of interactions across scales (in subsequent layers, m >= 2). Numerical experiments give state-of-the-art accuracy over databases of organic molecules, while theoretical results guarantee performance for the component of the ground state energy resulting from Coulombic interactions. Supported by the ERC InvariantClass 320959 Grant.
Application of a Probabilistic Sizing Methodology for Ceramic Structures
NASA Astrophysics Data System (ADS)
Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit
2012-07-01
Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads to sizing them by increasing the safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with the major driver in space architecture, the mass criterion. This paper presents a methodology to size ceramic structures based on their failure probability. From failure tests on samples, the Weibull law which characterizes the strength distribution of the material is obtained. The A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
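The failure-probability calculation rests on the two-parameter Weibull strength law; a minimal sketch follows, in which the Weibull modulus and scale stress are illustrative values, not the paper's test data. The A- and B-values are simply the stresses whose failure probabilities equal the quoted quantiles.

```python
import math

# Two-parameter Weibull strength model: probability of failure at
# stress s is P_f(s) = 1 - exp(-(s / s0)**m), with modulus m and
# characteristic strength s0 (illustrative values below).
def weibull_pf(s, m, s0):
    return 1.0 - math.exp(-((s / s0) ** m))

def weibull_quantile(p, m, s0):
    # stress whose failure probability is p
    # (e.g. A-value: p = 1.95e-4, B-value: p = 1.95e-3)
    return s0 * (-math.log(1.0 - p)) ** (1.0 / m)
```

For a modulus m = 10 and s0 = 300 MPa, the A-value sits below the B-value, and both lie well below the characteristic strength, reflecting the scatter of brittle materials.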
The SAMI Galaxy Survey: cubism and covariance, putting round pegs into square holes
NASA Astrophysics Data System (ADS)
Sharp, R.; Allen, J. T.; Fogarty, L. M. R.; Croom, S. M.; Cortese, L.; Green, A. W.; Nielsen, J.; Richards, S. N.; Scott, N.; Taylor, E. N.; Barnes, L. A.; Bauer, A. E.; Birchall, M.; Bland-Hawthorn, J.; Bloom, J. V.; Brough, S.; Bryant, J. J.; Cecil, G. N.; Colless, M.; Couch, W. J.; Drinkwater, M. J.; Driver, S.; Foster, C.; Goodwin, M.; Gunawardhana, M. L. P.; Ho, I.-T.; Hampton, E. J.; Hopkins, A. M.; Jones, H.; Konstantopoulos, I. S.; Lawrence, J. S.; Leslie, S. K.; Lewis, G. F.; Liske, J.; López-Sánchez, Á. R.; Lorente, N. P. F.; McElroy, R.; Medling, A. M.; Mahajan, S.; Mould, J.; Parker, Q.; Pracy, M. B.; Obreschkow, D.; Owers, M. S.; Schaefer, A. L.; Sweet, S. M.; Thomas, A. D.; Tonini, C.; Walcher, C. J.
2015-01-01
We present a methodology for the regularization and combination of sparsely sampled and irregularly gridded observations from fibre-optic multiobject integral field spectroscopy. The approach minimizes interpolation and retains image resolution on combining subpixel dithered data. We discuss the methodology in the context of the Sydney-AAO multiobject integral field spectrograph (SAMI) Galaxy Survey underway at the Anglo-Australian Telescope. The SAMI instrument uses 13 fibre bundles to perform high-multiplex integral field spectroscopy across a 1° diameter field of view. The SAMI Galaxy Survey is targeting ˜3000 galaxies drawn from the full range of galaxy environments. We demonstrate that the subcritical sampling of the seeing and the incomplete fill factor of the integral field bundles result in only a 10 per cent degradation in the final image resolution recovered. We also implement a new methodology for tracking covariance between elements of the resulting data cubes which retains 90 per cent of the covariance information while incurring only a modest increase in the survey data volume.
A monolithic Lagrangian approach for fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Ryzhakov, P. B.; Rossi, R.; Idelsohn, S. R.; Oñate, E.
2010-11-01
The current work presents a monolithic method for the solution of fluid-structure interaction (FSI) problems involving flexible structures and free-surface flows. The technique presented is based on a Lagrangian description of both the fluid and the structure. A linear displacement-pressure interpolation pair is used for the fluid, whereas the structure uses a standard displacement-based formulation. A slight fluid compressibility is assumed, which allows the mechanical pressure to be related to the local volume variation. The method features a global pressure condensation which in turn enables the definition of a purely displacement-based linear system of equations. A matrix-free technique is used for the solution of this linear system, leading to an efficient implementation. The result is a robust method which can deal with FSI problems involving arbitrary variations in the shape of the fluid domain. The method is completely free of spurious added-mass effects.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
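A quadratic (constant-acceleration) model is one plausible instance of the patent's "non-linear model" and is used here purely for illustration: from the two measured motion vectors v1 = p2 - p1 and v2 = p3 - p2, the pixel position in a fourth frame is extrapolated.

```python
# Hypothetical quadratic sketch of non-linear motion extrapolation:
# the change between consecutive motion vectors is treated as a
# constant acceleration and carried forward one more frame.
def extrapolate_quadratic(p1, p2, p3):
    v1 = (p2[0] - p1[0], p2[1] - p1[1])      # first motion vector
    v2 = (p3[0] - p2[0], p3[1] - p2[1])      # second motion vector
    accel = (v2[0] - v1[0], v2[1] - v1[1])   # change in motion vector
    return (p3[0] + v2[0] + accel[0], p3[1] + v2[1] + accel[1])
```

When the motion is purely linear the acceleration term vanishes and the prediction reduces to ordinary linear extrapolation.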
A 41 ps ASIC time-to-digital converter for physics experiments
NASA Astrophysics Data System (ADS)
Russo, Stefano; Petra, Nicola; De Caro, Davide; Barbarino, Giancarlo; Strollo, Antonio G. M.
2011-12-01
We present a novel Time-to-Digital Converter (TDC) for physics experiments. The proposed TDC is based on a synchronous counter and an asynchronous fine interpolator. The fine part of the measurement is obtained using NORA inverters that provide improved resolution. A prototype IC was fabricated in 180 nm CMOS technology. Experimental measurements show that the proposed TDC features 41 ps resolution with 0.35 LSB differential non-linearity, 0.77 LSB integral non-linearity and a negligible single-shot precision. The whole dynamic range is equal to 18 μs. The proposed TDC is designed using a flash architecture that reduces dead time. Data reported in the paper show that our design is well suited for present and future particle physics experiments.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
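The per-angle coefficient tables and the linear interpolation between observer angles can be sketched as follows; the angle grid and coefficient values are invented for illustration, not taken from the models above.

```python
import bisect

# Model coefficients fitted at discrete observer angles; intermediate
# angles use linear interpolation between the two bracketing entries.
angles = [45.0, 90.0, 135.0]                       # degrees (illustrative)
coeffs = [(0.10, -2.0), (0.25, -1.5), (0.40, -1.0)]  # (slope, intercept) pairs

def interp_coeffs(theta):
    i = bisect.bisect_right(angles, theta)
    i = min(max(i, 1), len(angles) - 1)            # clamp to table range
    t = (theta - angles[i - 1]) / (angles[i] - angles[i - 1])
    lo, hi = coeffs[i - 1], coeffs[i]
    return tuple((1 - t) * a + t * b for a, b in zip(lo, hi))
```

At a tabulated angle the stored coefficients are returned exactly; halfway between two tabulated angles the result is the arithmetic mean of the neighbouring coefficient sets.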
1976-08-01
[Fragment of a 1976 report recovered by OCR; most of the text is unreadable. The legible portions describe program variables (target type, power, the square of a peak range), note that a standard linear interpolation technique is used to provide for the removal of damaged targets from the column, and state that certain intruder/mine encounter positions are considered to be shielded by terrain features.]
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights of output units affected by the loss.
1985-02-01
[Fragment of a 1985 report recovered by OCR; most of the text is unreadable. The legible portions state that Ω denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary Γ, with H(Ω) the associated Hilbert spaces; that Weinert et al. [8,9], using a reproducing kernel Hilbert space approach, developed a structural correspondence between spline interpolation and linear …; and the title "A Mesh Moving Technique for Time Dependent Partial Differential Equations in Two Space Dimensions" by David C. Arney and Joseph …]
NASA Technical Reports Server (NTRS)
Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)
1998-01-01
A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.
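The summary does not give the functional form of the two-point nonlinear calibration, so the sketch below assumes, purely hypothetically, a power law V = a·t^b fitted through two calibration standards of known thickness, which is then inverted to read thickness from a probe voltage.

```python
import math

# Hypothetical two-point non-linear calibration (assumed power law,
# not the patented algorithm): fit V = a * t**b through two standards
# of known thickness t1, t2 with measured responses v1, v2.
def calibrate(t1, v1, t2, v2):
    b = math.log(v2 / v1) / math.log(t2 / t1)
    a = v1 / (t1 ** b)
    return a, b

def thickness_from_voltage(v, a, b):
    # invert the calibrated response to recover thickness
    return (v / a) ** (1.0 / b)
```

A round trip (thickness → modelled voltage → recovered thickness) returns the starting value, which is the minimal sanity check for any two-point calibration.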
Elimination of numerical diffusion in 1 - phase and 2 - phase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajamaeki, M.
1997-07-01
The new hydraulics solution method PLIM (Piecewise Linear Interpolation Method) is capable of avoiding excessive errors, numerical diffusion and numerical dispersion. The hydraulics solver CFDPLIM uses PLIM to solve the time-dependent one-dimensional flow equations in network geometry. An example is given for 1-phase flow in a case where thermal-hydraulics and reactor kinetics are strongly coupled. Another example concerns oscillations in 2-phase flow. Neither example computation is possible with conventional methods.
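The flavour of piecewise-linear interpolation in advection can be illustrated with a semi-Lagrangian step: each node is traced back along the characteristic and the old profile is evaluated by linear interpolation. For data that are themselves linear this step is exact, which is how such schemes limit numerical diffusion; this is an illustration of the idea, not CFDPLIM itself.

```python
# Semi-Lagrangian advection with piecewise-linear interpolation:
# u is the profile on a uniform grid of spacing dx, advected at the
# given velocity over one time step dt.
def advect(u, dx, velocity, dt):
    shift = velocity * dt / dx                 # CFL number
    new = []
    for i in range(len(u)):
        pos = i - shift                        # departure point (index units)
        j = max(0, min(len(u) - 2, int(pos)))  # bounding cell (ends extrapolate)
        t = pos - j
        new.append((1 - t) * u[j] + t * u[j + 1])
    return new
```

Advecting a linear profile by half a cell reproduces the exact shifted profile, with no smearing of the gradient.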
Tadić, Jovan M; Michalak, Anna M; Iraci, Laura; Ilić, Velibor; Biraud, Sébastien C; Feldman, Daniel R; Bui, Thaopaul; Johnson, Matthew S; Loewenstein, Max; Jeong, Seongeun; Fischer, Marc L; Yates, Emma L; Ryoo, Ju-Mee
2017-09-05
In this study, we explore observational, experimental, methodological, and practical aspects of the flux quantification of greenhouse gases from local point sources by using in situ airborne observations, and suggest a series of conceptual changes to improve flux estimates. We address the major sources of uncertainty reported in previous studies by modifying (1) the shape of the typical flight path, (2) the modeling of covariance and anisotropy, and (3) the type of interpolation tools used. We show that a cylindrical flight profile offers considerable advantages compared to traditional profiles collected as curtains, although this new approach brings with it the need for a more comprehensive subsequent analysis. The proposed flight pattern design does not require prior knowledge of wind direction and allows for the derivation of an ad hoc empirical correction factor to partially alleviate errors resulting from interpolation and measurement inaccuracies. The modified approach is applied to a use-case for quantifying CH4 emission from an oil field south of San Ardo, CA, and compared to a bottom-up CH4 emission estimate.
NASA Astrophysics Data System (ADS)
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak, considering the Fault Model Approach (FMA), in pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline only considers straight geometries without fittings. In order to address this situation, several papers work with a virtual model of a pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a virtual length coordinate, which for practical purposes is not a complete solution. This research proposes, as a solution to the problem of leak isolation in a virtual length, the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
New gridded database of clear-sky solar radiation derived from ground-based observations over Europe
NASA Astrophysics Data System (ADS)
Bartok, Blanka; Wild, Martin; Sanchez-Lorenzo, Arturo; Hakuba, Maria Z.
2017-04-01
Since aerosols modify the entire energy balance of the climate system through different processes, assessments of aerosol multiannual variability are highly required by the climate modelling community. Because of the scarcity of long-term direct aerosol measurements, the retrieval of aerosol information from other types of observations or satellite measurements is very relevant. One approach frequently used in the literature is the analysis of clear-sky solar radiation, which offers a better overview of changes in aerosol content. In this study, two empirical methods are first elaborated in order to separate clear-sky situations from observed values of surface solar radiation available at the World Radiation Data Center (WRDC), St. Petersburg. The daily data have been checked for temporal homogeneity by applying the MASH method (Szentimrey, 2003). In the first approach, clear-sky situations are detected based on the clearness index, namely the ratio of the surface solar radiation to the extraterrestrial solar irradiation. In the second approach, the observed values of surface solar radiation are compared to the climatology of clear-sky surface solar radiation calculated by the MAGIC radiation code (Mueller et al., 2009). In both approaches the clear-sky radiation values depend strongly on the applied thresholds. In order to eliminate this methodological error, a verification of the clear-sky detection is envisaged through a comparison with the values obtained by a high-time-resolution clear-sky detection and interpolation algorithm (Long and Ackerman, 2000) making use of the high-quality data from the Baseline Surface Radiation Network (BSRN). As a result, clear-sky data series are obtained for 118 European meteorological stations.
Next, a first attempt has been made to interpolate the point-wise clear-sky radiation data by applying the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis) method for the spatial interpolation of surface meteorological elements, developed at the Hungarian Meteorological Service (Szentimrey 2007). In this way a new gridded database of clear-sky solar radiation is created, suitable for further investigations of the role of aerosols in the energy budget and for validation of climate model outputs. References: 1. Long CN, Ackerman TP. 2000. Identification of clear skies from broadband pyranometer measurements and calculation of downwelling shortwave cloud effects, J. Geophys. Res., 105(D12), 15609-15626, doi:10.1029/2000JD900077. 2. Mueller R, Matsoukas C, Gratzki A, Behr H, Hollmann R. 2009. The CM-SAF operational scheme for the satellite based retrieval of solar surface irradiance - a LUT based eigenvector hybrid approach, Remote Sensing of Environment, 113(5), 1012-1024, doi:10.1016/j.rse.2009.01.012. 3. Szentimrey T. 2014. Multiple Analysis of Series for Homogenization (MASHv3.03), Hungarian Meteorological Service, https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/ 4. Szentimrey T, Bihari Z. 2014. Meteorological Interpolation based on Surface Homogenized Data Basis (MISHv1.03), https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
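The trellis search over the sequence of candidate interpolation functions is a standard Viterbi decode; a minimal generic sketch follows, with illustrative (invented) log-scores standing in for the paper's probabilistic model of interpolation-function sequences.

```python
# Minimal Viterbi decoder: states are candidate interpolation
# functions, steps are successive missing-pixel positions.
# log_emit[t][s]: score of state s at step t;
# log_trans[p][s]: score of moving from state p to state s.
def viterbi(n_states, n_steps, log_emit, log_trans, log_init):
    score = [log_init[s] + log_emit[0][s] for s in range(n_states)]
    back = []
    for t in range(1, n_steps):
        new, ptr = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: score[p] + log_trans[p][s])
            new.append(score[best] + log_trans[best][s] + log_emit[t][s])
            ptr.append(best)
        score = new
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# Two candidate interpolators over three pixel positions (illustrative)
log_emit = [[0.0, -5.0], [-5.0, 0.0], [-5.0, 0.0]]
log_trans = [[-0.1, -2.0], [-2.0, -0.1]]   # "sticky" transitions
path = viterbi(2, 3, log_emit, log_trans, [0.0, 0.0])
```

With sticky transitions the decoder tolerates one switch between interpolators but otherwise holds the current choice, which is the soft-decision behaviour described above.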
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and it preserves monotonicity in 1-d, it has some known problems.
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001).
In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
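The IDW gridding step with the local support of the five closest neighbours can be sketched as follows; the power parameter p = 2 is a common default and an assumption here, and the point set is invented for illustration.

```python
# Inverse-distance-weighted (IDW) interpolation using the k closest
# neighbours. Distances are kept squared; for power p the weight is
# 1 / d**p = 1 / (d^2)**(p/2).
def idw(p_query, points, values, k=5, power=2.0):
    d2 = sorted(((px - p_query[0]) ** 2 + (py - p_query[1]) ** 2, v)
                for (px, py), v in zip(points, values))[:k]
    if d2[0][0] == 0.0:
        return d2[0][1]                 # query coincides with a sample point
    w = [1.0 / d ** (power / 2.0) for d, _ in d2]
    return sum(wi * v for wi, (_, v) in zip(w, d2)) / sum(w)

# Illustrative ground points (x, y) and elevations
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

IDW is exact at the sample points and returns a distance-weighted average elsewhere; a query equidistant from two samples yields their mean.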
Using geographical information systems and cartograms as a health service quality improvement tool.
Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek
2014-07-01
Disease prevalence can be spatially analysed to provide support for service implementation and health care planning; these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique for visualising disease admissions, helping to prevent misinterpretation of data and less optimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of combining information on the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders.
CartIS reduces the number of extreme positive values, not present in the underlying data, that can arise in interpolation surfaces. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation and communication techniques that highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
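The hyperbolic tangent stretching used for non-uniform grid distributions can be sketched in a few lines. This is a generic Python illustration of one common two-sided tanh clustering form (TDIGG itself is FORTRAN 77, and its exact stretching formula is not given in the abstract):

```python
import numpy as np

def tanh_stretch(n, beta=2.0):
    """Distribute n+1 grid points on [0, 1], clustered toward both ends,
    by applying a hyperbolic tangent stretching to a uniform coordinate.
    beta > 0 controls clustering strength (larger -> tighter clustering)."""
    xi = np.linspace(-1.0, 1.0, n + 1)                 # uniform computational coordinate
    return 0.5 * (1.0 + np.tanh(beta * xi) / np.tanh(beta))

x = tanh_stretch(10, beta=2.0)
# endpoints are exactly 0 and 1; spacing is finest near the boundaries
```

Because tanh is steepest at the origin, equal steps in the computational coordinate map to large physical steps mid-domain and small steps near the boundaries, which is the usual goal for resolving boundary layers.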
NASA Astrophysics Data System (ADS)
Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman
2016-05-01
Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R²) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R² and reduced root-mean-square error (RMSE) by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
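The traditional calibration baseline that the optimized methods are compared against is ordinary linear regression between an overlapping logger series and a long-term station series. A minimal sketch on synthetic data (the station/logger values and coefficients below are illustrative, not the study's):

```python
import numpy as np

# Synthetic overlapping year of daily temperatures: a long-term station
# series and a short-term logger series that tracks it with offset + noise.
rng = np.random.default_rng(1)
station = rng.normal(12.0, 6.0, 365)                 # station temps (deg C)
logger = 0.9 * station + 1.5 + rng.normal(0.0, 0.5, 365)

# Fit logger ~ a*station + b; the fitted line then predicts logger-site
# temperatures from the full long-term station record.
a, b = np.polyfit(station, logger, 1)
predicted = a * station + b
rmse = np.sqrt(np.mean((predicted - logger) ** 2))   # calibration RMSE
```

The RK/RT/RF methods in the paper extend this step by modelling the residual structure rather than relying on the direct regression alone.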
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
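A minimal sketch of one interpolation-based semi-Lagrangian step for the advection equation, of the kind whose error propagation is studied above: characteristics are traced back one time step and the solution is linearly interpolated at the departure points (an illustration on a periodic grid, not the paper's code):

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid:
    trace characteristics back and linearly interpolate at departure points."""
    n = len(u)
    x = np.arange(n) * dx
    xd = (x - a * dt) % (n * dx)              # departure points, periodic wrap
    xp = np.concatenate([x, [n * dx]])        # append wrapped node for interp
    up = np.concatenate([u, [u[0]]])
    return np.interp(xd, xp, up)

# Advect a sine wave exactly one period; any deviation from the initial
# condition is accumulated interpolation error.
n, a = 64, 1.0
dx = 1.0 / n
u0 = np.sin(2 * np.pi * np.arange(n) * dx)
u = u0.copy()
dt = dx / 2                                   # departure points fall mid-cell
for _ in range(2 * n):                        # 2n steps of a*dt = dx/2 each
    u = semi_lagrangian_step(u, a, dt, dx)
err = np.max(np.abs(u - u0))                  # pure interpolation damping
```

With mid-cell departure points, linear interpolation damps each Fourier mode slightly every step, so the error grows with the number of steps, consistent with the worst-case estimates discussed above.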
Normal mode-guided transition pathway generation in proteins
Lee, Byung Ho; Seo, Sangjae; Kim, Min Hyeok; Kim, Youngjin; Jo, Soojin; Choi, Moon-ki; Lee, Hoomin; Choi, Jae Boong
2017-01-01
The biological function of proteins is closely related to their structural motion. For instance, structurally misfolded proteins do not function properly. Although we are able to experimentally obtain structural information on proteins, it is still challenging to capture their dynamics, such as transition processes. Therefore, we need a simulation method to predict the transition pathways of a protein in order to understand and study large functional deformations. Here, we present a new simulation method called normal mode-guided elastic network interpolation (NGENI), which performs normal mode analysis iteratively to predict transition pathways of proteins. To be more specific, NGENI obtains displacement vectors that determine intermediate structures by interpolating the distance between two end-point conformations, similar to a morphing method called elastic network interpolation. However, the displacement vector is regarded as a linear combination of the normal mode vectors of each intermediate structure, in order to enhance the physical sense of the proposed pathways. As a result, we can generate transition pathways that are more reasonable both geometrically and thermodynamically. By using not only all normal modes but also, in part, only the lowest normal modes, NGENI can still generate reasonable pathways for large deformations in proteins. This study shows that global protein transitions are dominated by collective motion, which means that a few of the lowest normal modes play an important role in this process. NGENI has considerable merit in terms of computational cost because it can generate transition pathways with partial degrees of freedom, while conventional methods are not capable of this. PMID:29020017
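The core idea of expressing a displacement vector as a linear combination of normal-mode vectors can be illustrated with a toy least-squares projection. The "modes" below are synthetic orthonormal vectors, not an actual elastic-network calculation:

```python
import numpy as np

# Build 5 synthetic orthonormal "mode" vectors in a 30-dimensional space
# (standing in for the lowest normal modes of an intermediate structure).
rng = np.random.default_rng(2)
modes, _ = np.linalg.qr(rng.normal(size=(30, 5)))

# A target displacement that is mostly collective motion plus small noise.
target = modes @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + 0.01 * rng.normal(size=30)

# Least-squares projection: the mode coefficients of the displacement.
coeffs, *_ = np.linalg.lstsq(modes, target, rcond=None)
approx = modes @ coeffs          # mode-guided approximation of the step
```

Because the target is dominated by the first two modes, a handful of coefficients reconstructs almost all of the displacement, mirroring the observation that a few low-frequency modes capture global transitions.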
Earth elevation map production and high resolution sensing camera imaging analysis
NASA Astrophysics Data System (ADS)
Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai
2010-11-01
A digital elevation map of the Earth, which impacts space camera imaging, was prepared, and its effect on imaging was analysed. Based on the image-motion matching accuracy required by the TDI CCD integration stages, a statistical experimental method, the Monte Carlo method, was used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that accounts for satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information for the Earth's surface was then read from SRTM, and the Earth elevation map produced for aerospace electronic cameras was compressed and spliced, so that elevation data can be retrieved from flash memory according to the latitude and longitude of the shooting point. When a query falls between two stored data points, linear interpolation is used; linear interpolation better accommodates the changing terrain of rugged mountains and hills. Finally, a deviation framework and the camera controller were used to test the character of deviation-angle errors, and a TDI CCD camera simulation system, based on a model mapping material points to imaging points, was used to analyse the imaging MTF and a mutual-correlation similarity measure. The simulation system accumulates the horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel, in order to simulate camera imaging as the stability of the satellite attitude changes. The process is practical: it effectively controls the camera memory space and meets, with very good precision, the TDI CCD camera's requirement to match the speed of image motion during imaging.
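The between-sample lookup described above amounts to one-dimensional linear interpolation of stored elevation data. A hypothetical sketch (the latitudes and elevations below are illustrative, not SRTM values):

```python
import numpy as np

# Stored elevation samples along a ground track, indexed by latitude.
lat_samples = np.array([30.0, 30.1, 30.2, 30.3])       # degrees
elev_samples = np.array([812.0, 845.0, 901.0, 876.0])  # metres

def elevation(lat):
    """Elevation at an arbitrary latitude: exact at stored samples,
    linearly interpolated between the two nearest samples otherwise."""
    return float(np.interp(lat, lat_samples, elev_samples))
```

For example, a query halfway between two samples returns the midpoint of their elevations, which tracks rolling terrain far better than snapping to the nearest stored value.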
Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.
Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong
2017-11-01
Medical image three-dimensional (3D) interpolation is an important means of improving image quality in 3D reconstruction. In image processing, time-frequency domain transforms are efficient tools. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. The algorithm combines the wavelet transform, traditional matching interpolation methods and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring the high-frequency content remains undistorted; wavelet reconstruction then yields the target interpolated image. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.
Research progress and hotspot analysis of spatial interpolation
NASA Astrophysics Data System (ADS)
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literature related to spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and visualization analyses are carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. Spatial interpolation research is found to have passed through three stages: slow development, steady development and rapid development. Eleven clustering groups interact, converging on three themes: spatial interpolation theory, practical applications and case studies of spatial interpolation, and the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research framework; it is strongly interdisciplinary and is widely applied in various fields.
NASA Astrophysics Data System (ADS)
Han, Zhaohui; Friesen, Scott; Hacker, Fred; Zygmanski, Piotr
2018-01-01
Direct use of the total scatter factor (S_tot) for independent monitor unit (MU) calculations can be a good alternative to the traditional separate treatment of head/collimator scatter (S_c) and phantom scatter (S_p), especially for stereotactic small fields under the simultaneous collimation of secondary jaws and tertiary multileaf collimators (MLC). We have carried out the measurement of S_tot in water for field sizes down to 0.5 × 0.5 cm² on a Varian TrueBeam STx medical linear accelerator (linac) equipped with high-definition MLCs. Both the jaw field size (c) and MLC field size (s) significantly impact the linac output factors, especially when c ≫ s and s is small (e.g. s < 5 cm). The combined influence of MLC and jaws gives rise to a two-argument dependence of the total scatter factor, S_tot(c,s), which is difficult to decouple functionally. The (c,s) dependence can be conceived as a set of s-dependent functions ('branches') defined on the domain [s_min, s_max = c] for a given jaw size c. We have also developed a heuristic model of S_tot to assist the clinical implementation of the measured S_tot data for small-field dosimetry. The model has two components: (i) an empirical fit formula for the s-dependent branches and (ii) an interpolation scheme between the branches. The interpolation scheme preserves the characteristic shape of the measured branches and effectively transforms the measured trapezoidal domain in the (c,s) plane into a rectangular domain, facilitating easier two-dimensional interpolation to determine S_tot for arbitrary (c,s) combinations. Both the empirical fit and the interpolation showed good agreement with experimental validation data.
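The branch-then-cross-branch evaluation can be sketched in a few lines. The S_tot values below are made up to illustrate the shape of the scheme (interpolate along s within each bracketing branch, then linearly across c); this is not the paper's fitted model:

```python
import numpy as np

# Two measured "branches" (illustrative data): for each jaw setting c,
# S_tot sampled at several MLC field sizes s on [s_min, s_max = c].
branches = {
    4.0:  (np.array([0.5, 2.0, 4.0]),  np.array([0.70, 0.88, 0.95])),
    10.0: (np.array([0.5, 5.0, 10.0]), np.array([0.72, 0.97, 1.00])),
}

def s_tot(c, s):
    """Evaluate S_tot(c, s): interpolate along s within the two branches
    (only two branches here, for brevity), then linearly across c."""
    c0, c1 = sorted(branches)
    v0 = np.interp(s, *branches[c0])
    v1 = np.interp(s, *branches[c1])
    t = (c - c0) / (c1 - c0)
    return (1 - t) * v0 + t * v1
```

A real implementation would first map each branch's [s_min, c] interval onto a common rectangle, as the abstract describes, so that branches with different extents line up before the cross-branch interpolation.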
NASA Astrophysics Data System (ADS)
Ouellette, G., Jr.; DeLong, K. L.
2016-02-01
High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration with SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and the Kaplan SST data set (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses different methods of interpolation to produce globally and temporally complete data products, except for HadSST, which is quality controlled but not interpolated. This study compares SST for a single site from these gridded data products with a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), with in situ SST data and coral Sr/Ca variability for our study site in Haiti, to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3 °C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST are to be used to calibrate proxy records, collecting modern, living samples is desirable.
Scene segmentation of natural images using texture measures and back-propagation
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Phatak, Anil; Chatterji, Gano
1993-01-01
Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features which result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feedforward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation when compared with a single scalar feature. It is also shown that scalar features which are not useful individually result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation methods based on the Gaussian radial basis function (GRBF) have high precision, long calculation times still limit their application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions, together with a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
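A plain CPU reference for GRBF interpolation, the operation the paper accelerates on CUDA, can be written in a few numpy lines. This is a 1-D sketch for brevity (the paper treats 2D/3D images, and the shape parameter here is illustrative):

```python
import numpy as np

def grbf_interpolate(centers, values, query, eps=5.0):
    """1-D Gaussian RBF interpolation: solve a dense linear system for the
    weights so the interpolant passes through every sample, then evaluate
    the weighted sum of Gaussians at the query points."""
    A = np.exp(-(eps * (centers[:, None] - centers[None, :])) ** 2)
    w = np.linalg.solve(A, values)                    # interpolation weights
    B = np.exp(-(eps * (query[:, None] - centers[None, :])) ** 2)
    return B @ w

c = np.linspace(0.0, 1.0, 8)
v = np.sin(2 * np.pi * c)
vals = grbf_interpolate(c, v, np.array([0.25, 0.5]))  # off-node queries
```

The dense matrix build and the two matrix products are exactly the embarrassingly parallel steps that map well onto CUDA thread blocks.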
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
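Two of the three resampling kernels compared above can be sketched generically; this is a minimal nearest-neighbor/bilinear sampler at fractional pixel coordinates (an illustration of the techniques, not the original registration code):

```python
import numpy as np

def resample(img, y, x, method="bilinear"):
    """Sample a 2-D image at fractional coordinates (y, x), as done when
    geometrically correcting imagery onto a new grid."""
    if method == "nearest":
        return img[int(round(y)), int(round(x))]
    # bilinear: weight the four surrounding pixels by area overlap
    i, j = int(np.floor(y)), int(np.floor(x))
    t, u = y - i, x - j
    return ((1 - t) * (1 - u) * img[i, j] + (1 - t) * u * img[i, j + 1]
            + t * (1 - u) * img[i + 1, j] + t * u * img[i + 1, j + 1])
```

Nearest neighbor preserves the original radiometry exactly (relevant for classification), while bilinear smooths it, which is one reason registration choices interact with compression and classification as the abstract notes. Bicubic extends the same idea to a 4 × 4 pixel neighborhood.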
NASA Astrophysics Data System (ADS)
Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.
2013-12-01
There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty, depending on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method - which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies. 
We also evaluated the importance of sampling frequency and estimation method for flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for different reporting periods (e.g. the 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches was dependent upon the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in uncertainty in stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method is dependent on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
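The linear-interpolation flux method (method 1 above) reduces to a short calculation: assign between-sample concentrations by interpolation, multiply by discharge at each time step, and sum. This sketch uses entirely synthetic discharge and concentration series:

```python
import numpy as np

# Synthetic data: near-continuous discharge and sparse concentration samples.
t_q = np.arange(0.0, 10.0, 0.25)             # discharge time steps (days)
q = 2.0 + np.sin(t_q)                        # discharge (m^3/s), illustrative
t_c = np.array([0.0, 3.0, 6.0, 9.0])         # discrete sample times (days)
c = np.array([1.0, 1.4, 0.9, 1.2])           # concentrations (mg/L)

# Linear interpolation assigns a concentration to every discharge step
# (held constant beyond the last sample, per np.interp's endpoint rule).
c_interp = np.interp(t_q, t_c, c)
dt = 0.25 * 86400.0                          # step length in seconds
flux = np.sum(q * c_interp * dt)             # g, since m^3/s * mg/L = g/s
```

The composite method described above would add a second pass: predict concentrations from a regression model, then linearly interpolate the model's residuals at the sample times and apply that correction before summing.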
An iterative solution of an integral equation for radiative transfer by using a variational technique
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.
1973-01-01
An effective iterative technique is introduced to solve a nonlinear integral equation frequently associated with radiative transfer problems. The problem is formulated in such a way that each step of the iterative sequence requires the solution of a linear integral equation. The advantage of a previously introduced variational technique, which utilizes a stepwise constant trial function, is exploited to cope with the nonlinear problem. The method is simple and straightforward, and rapid convergence is obtained by employing a linear interpolation of the iterative solutions. Using absorption coefficients of the Milne-Eddington type, which are applicable to some planetary atmospheric radiation problems, solutions are found in terms of temperature and radiative flux. These solutions are presented numerically and show excellent agreement with other numerical solutions.
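The idea of stabilizing an iteration by linearly interpolating successive iterates can be shown on a toy fixed-point problem. This is a generic relaxation sketch in that spirit, not the paper's radiative-transfer solver:

```python
import math

def solve(f, x0, w=0.5, tol=1e-12, itmax=200):
    """Fixed-point iteration for x = f(x), with each update formed by
    linearly interpolating the old iterate and the raw new iterate;
    the weight w damps oscillation and speeds convergence."""
    x = x0
    for _ in range(itmax):
        x_new = (1.0 - w) * x + w * f(x)      # interpolated update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = solve(math.cos, 1.0)                   # fixed point of cos(x)
```

For x = cos(x), the interpolated map has a much smaller contraction factor near the root than the raw iteration, so convergence to machine precision takes only a handful of steps.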
NASA Astrophysics Data System (ADS)
van Berkel, M.; Kobayashi, T.; Igami, H.; Vandersteen, G.; Hogeweij, G. M. D.; Tanaka, K.; Tamura, N.; Zwart, H. J.; Kubo, S.; Ito, S.; Tsuchiya, H.; de Baar, M. R.; LHD Experiment Group
2017-12-01
A new methodology to analyze non-linear components in perturbative transport experiments is introduced. The methodology has been experimentally validated in the Large Helical Device for the electron heat transport channel. Electron cyclotron resonance heating with different modulation frequencies by two gyrotrons has been used to directly quantify the amplitude of the non-linear component at the inter-modulation frequencies. The measurements show significant quadratic non-linear contributions and also the absence of cubic and higher order components. The non-linear component is analyzed using the Volterra series, which is the non-linear generalization of transfer functions. This allows us to study the radial distribution of the non-linearity of the plasma and to reconstruct linear profiles where the measurements were not distorted by non-linearities. The reconstructed linear profiles are significantly different from the measured profiles, demonstrating the significant impact that non-linearity can have.
Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation
Song, Genxin; Zhang, Jing; Wang, Ke
2014-01-01
In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlation among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters when applying Cokriging interpolation to soil nutrient attributes. PMID:24927129
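The selection step, ranking candidate auxiliary variables by their correlation with the target nutrient, can be sketched on synthetic data (variable names and coefficients are illustrative; the subsequent cokriging itself is not shown):

```python
import numpy as np

# Synthetic candidates: two irrelevant variables and one that drives the
# target soil nutrient (organic matter) with a little noise.
rng = np.random.default_rng(0)
n = 200
elevation = rng.normal(size=n)
slope = rng.normal(size=n)
zinc = rng.normal(size=n)
organic_matter = 0.8 * elevation + 0.1 * rng.normal(size=n)

# Rank candidates by absolute Pearson correlation with the target;
# the top-ranked variable would serve as the BAV for cokriging.
candidates = {"elevation": elevation, "slope": slope, "zinc": zinc}
corr = {name: abs(np.corrcoef(v, organic_matter)[0, 1])
        for name, v in candidates.items()}
best = max(corr, key=corr.get)
```

Choosing a weakly correlated covariate (a PAV) gives the cross-variogram little structure to exploit, which is why the paper finds BAV-based cokriging markedly more accurate.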
Kierkegaard, Axel; Boij, Susann; Efraimsson, Gunilla
2010-02-01
Acoustic wave propagation in flow ducts is commonly modeled with time-domain non-linear Navier-Stokes equation methodologies. To reduce computational effort, investigations of a linearized approach in frequency domain are carried out. Calculations of sound wave propagation in a straight duct are presented with an orifice plate and a mean flow present. Results of transmission and reflections at the orifice are presented on a two-port scattering matrix form and are compared to measurements with good agreement. The wave propagation is modeled with a frequency domain linearized Navier-Stokes equation methodology. This methodology is found to be efficient for cases where the acoustic field does not alter the mean flow field, i.e., when whistling does not occur.
An alternative methodology for the analysis of electrical resistivity data from a soil gas study
NASA Astrophysics Data System (ADS)
Johansson, Sara; Rosqvist, Håkan; Svensson, Mats; Dahlin, Torleif; Leroux, Virginie
2011-08-01
The aim of this paper is to present an alternative method for the analysis of resistivity data. The methodology was developed during a study to evaluate whether electrical resistivity can be used as a tool for analysing subsurface gas dynamics and gas emissions from landfills. The main assumption of this study was that variations in time of resistivity data correspond to variations in the relative amount of gas and water in the soil pores. Field measurements of electrical resistivity, static chamber gas flux and weather data were collected at a landfill in Helsingborg, Sweden. The resistivity survey arrangement consisted of nine lines, each with 21 electrodes, in an investigation area of 16 × 20 m. The ABEM Lund Imaging System provided vertical and horizontal resistivity profiles every second hour. The data were inverted in Res3Dinv using an L1-norm-based optimization method with a standard least-squares formulation. Each horizontal soil layer was then represented as a linearly interpolated raster model. Different areas underneath the gas flux measurement points were defined in the resistivity model of the uppermost soil layer, and the vertical extension of the zones could be followed at greater depths in deeper layer models. The average resistivity values of the defined areas were calculated and plotted on a time axis, to provide graphs of the variation in resistivity with time in a specific section of the ground. Residual variation of resistivity was calculated by subtracting the resistivity variations caused by the diurnal temperature variations from the measured resistivity data. The resulting residual resistivity graphs were compared with field data of soil moisture, precipitation, soil temperature and methane flux. The results of the study were qualitative, but promising indications of relationships between electrical resistivity and variations in the relative amount of gas and water in the soil pores were found. 
Even though more research and better data quality are necessary to verify the results presented here, we conclude that this alternative methodology of working with resistivity data seems to be a valuable and flexible tool for this application.
Monotonicity preserving splines using rational cubic Timmer interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper, a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
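The flavor of shape-preserving interpolation can be conveyed with a generic monotone piecewise-cubic interpolant. This sketch uses Fritsch-Carlson slope limiting with harmonic-mean derivatives, a standard monotone Hermite construction, not the paper's rational cubic Timmer form:

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotonicity-preserving piecewise-cubic Hermite interpolation:
    knot slopes are harmonic means of adjacent secant slopes (zeroed at
    local extrema), which keeps each cubic piece monotone."""
    h = np.diff(x)
    delta = np.diff(y) / h                       # secant slopes
    d = np.zeros_like(y)
    with np.errstate(divide="ignore"):
        hm = 2.0 / (1.0 / delta[:-1] + 1.0 / delta[1:])
    d[1:-1] = np.where(delta[:-1] * delta[1:] > 0.0, hm, 0.0)
    d[0], d[-1] = delta[0], delta[-1]            # one-sided end slopes
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
    t = (xq - x[i]) / h[i]
    h00 = (1 + 2 * t) * (1 - t) ** 2             # cubic Hermite basis
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y[i] + h10 * h[i] * d[i] + h01 * y[i + 1] + h11 * h[i] * d[i + 1]
```

A rational cubic such as the Timmer form achieves the same guarantee by tuning shape parameters in the denominator instead of limiting the slopes, which gives the designer extra freedom over the curve's fullness.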
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Tyagi, N; Deasy, J
2015-06-15
Purpose: Cine 2DMRI is useful in MR-guided radiotherapy, but it lacks volumetric information. We explore the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI through simulation. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from an RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created containing a total of 5 to 35 3DMRI images, using 0-6 equi-spaced tri-linear interpolations between adjacent and full-inhalation/full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations from the original RC-4DMRI. To test whether accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts in the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRI was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size were assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5-15% to 1-2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7-8 to 5-6, with lung intensities ranging from 0 to 135. Conclusion: This study indicates that the quality of the FOA 3DMRI improves with increasing 3DMRI library size. 
On-going investigations will test this approach using actual cine 2DMRI and introduce a higher-order approximation for improvements. This study is in part supported by NIH (U54CA137788 and U54CA132378).
NASA Technical Reports Server (NTRS)
Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation, in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part. The theory and method used in GRID2D/3D are described.
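The core single-grid method, linear transfinite interpolation, can be sketched as follows. This is a minimal illustration of the boolean-sum formula only, omitting GRID2D/3D's stretching functions, orthogonality control, and spline boundary descriptions:

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Algebraic 2-D grid generation by linear transfinite interpolation.
    bottom/top are (n, 2) arrays of boundary points, left/right are (m, 2);
    corner points of adjacent curves must coincide. Returns an (m, n, 2)
    array of grid-point coordinates."""
    n, m = len(bottom), len(left)
    s = np.linspace(0, 1, n)[None, :, None]   # parameter along bottom/top
    t = np.linspace(0, 1, m)[:, None, None]   # parameter along left/right
    B, T = bottom[None, :, :], top[None, :, :]
    L, R = left[:, None, :], right[:, None, :]
    # Boolean sum: edge blends minus the doubly counted corner terms
    return ((1 - t) * B + t * T + (1 - s) * L + s * R
            - ((1 - s) * (1 - t) * B[:, :1] + s * (1 - t) * B[:, -1:]
               + (1 - s) * t * T[:, :1] + s * t * T[:, -1:]))
```

With straight unit-square boundaries this reduces to a uniform Cartesian mesh; curved boundary arrays produce a body-fitted grid that interpolates all four edges exactly.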
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation, in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part. This technical memorandum describes the theory and method used in GRID2D/3D.
An Equation of State for Polymethylpentene (TPX) including Multi-Shock Response
NASA Astrophysics Data System (ADS)
Aslam, Tariq; Gustavsen, Richard; Sanchez, Nathaniel; Bartram, Brian
2011-06-01
The equation of state (EOS) of polymethylpentene (TPX) is examined through both single-shock Hugoniot data and more recent multi-shock compression and release experiments. Results from the recent multi-shock experiments on LANL's two-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme utilizing total-variation-diminishing interpolation and an approximate Riemann solver will be presented, as well as the calibration methodology. It is shown that a simple Mie-Grüneisen EOS based on a Keane fitting form for the isentrope can replicate both the single-shock and multi-shock experiments.
An equation of state for polymethylpentene (TPX) including multi-shock response
NASA Astrophysics Data System (ADS)
Aslam, Tariq D.; Gustavsen, Rick; Sanchez, Nathaniel; Bartram, Brian D.
2012-03-01
The equation of state (EOS) of polymethylpentene (TPX) is examined through both single-shock Hugoniot data and more recent multi-shock compression and release experiments. Results from the recent multi-shock experiments on LANL's two-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme utilizing total variation diminishing interpolation and an approximate Riemann solver will be presented, as well as the calibration methodology. It is shown that a simple Mie-Grüneisen EOS based on a Keane fitting form for the isentrope can replicate both the single-shock and multi-shock experiments.
Creating Very True Quantum Algorithms for Quantum Energy Based Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc
2018-04-01
An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s·x = s_1 x_1 + s_2 x_2 + ⋯ + s_N x_N is proposed. Here x = (x_1, …, x_N), x_j ∈ R, and the coefficients s = (s_1, …, s_N), s_j ∈ N. Given the interpolation values (f(1), f(2), …, f(N)) = y⃗, the unknown coefficients s = (s_1(y⃗), …, s_N(y⃗)) of the linear function are determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.
Creating Very True Quantum Algorithms for Quantum Energy Based Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc
2017-12-01
An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s·x = s_1 x_1 + s_2 x_2 + ⋯ + s_N x_N is proposed. Here x = (x_1, …, x_N), x_j ∈ R, and the coefficients s = (s_1, …, s_N), s_j ∈ N. Given the interpolation values (f(1), f(2), …, f(N)) = y⃗, the unknown coefficients s = (s_1(y⃗), …, s_N(y⃗)) of the linear function are determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.
Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi
2010-09-01
The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
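The wavelength-to-wavenumber resampling step in this pipeline can be sketched as a linear interpolation onto a uniform k grid; the signal and grid values below are invented for illustration and are not the paper's:

```python
import numpy as np

n = 2048
lam = np.linspace(800e-9, 880e-9, n)   # wavelength samples (m), illustrative
k = 2 * np.pi / lam                    # wavenumber; decreasing as lam increases
spectrum = np.cos(1e-4 * k)            # synthetic fringes from a single reflector

# Resample the spectrum, acquired evenly in wavelength, onto an evenly
# spaced wavenumber grid by linear interpolation. np.interp requires
# ascending abscissae, hence the flips of the k-ordered arrays.
k_uniform = np.linspace(k.min(), k.max(), n)
spectrum_k = np.interp(k_uniform, k[::-1], spectrum[::-1])

depth_profile = np.abs(np.fft.ifft(spectrum_k))   # axial profile after resampling
```

Without this step the fringe chirp in k would smear the axial point-spread function; after resampling, the Fourier transform yields sharp depth peaks.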
Numerical solution of the Hele-Shaw equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, N.
1987-04-01
An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure, which is discretized. Near the interface, where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included, and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite-amplitude growth of unstable fingers. 62 refs., 13 figs.
Quantifying Libya-4 Surface Reflectance Heterogeneity With WorldView-1, 2 and EO-1 Hyperion
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; McCorkel, Joel; Middleton, Elizabeth M.
2015-01-01
The land surface imaging (LSI) virtual constellation approach promotes the concept of increasing Earth observations from multiple but disparate satellites. We evaluated this in the spectral and spatial domains by comparing surface reflectance from 30-m Hyperion and 2-m resolution WorldView-2 (WV-2) data at the Libya-4 pseudoinvariant calibration site. We convolved and resampled Hyperion to WV-2 bands using both cubic convolution and nearest neighbor (NN) interpolation. Additionally, WV-2 and WV-1 same-date imagery were processed as a cross-track stereo pair to generate a digital terrain model to evaluate the effects of large (>70 m) linear dunes. Agreement was moderate to low on dune peaks between WV-2 and Hyperion (R² < 0.4) but higher in areas of lower elevation and slope (R² > 0.6). Our results provide a satellite sensor intercomparison protocol for an LSI virtual constellation at high spatial resolution, which should start with geolocation of pixels, followed by NN interpolation to avoid tall dunes that enhance surface reflectance differences across this internationally utilized site.
Polynomial Expressions for Estimating Elastic Constants From the Resonance of Circular Plates
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Singh, Abhishek
2005-01-01
Two approaches were taken to enable convenient spreadsheet calculation of elastic constants from resonance data and the tables in ASTM C1259 and E1876: polynomials were fit to the tables, and an automated spreadsheet interpolation routine was generated. To compare the approaches, the resonant frequencies of circular plates made of glass, hardened maraging steel, alpha silicon carbide, silicon nitride, tungsten carbide, tape-cast NiO-YSZ, and zinc selenide were measured. The elastic constants, as calculated via the polynomials and via linear interpolation of the tabular data in ASTM C1259 and E1876, were found comparable for engineering purposes, with differences typically less than 0.5 percent. Calculation of additional v values at t/R between 0 and 0.2 would allow better curve fits. This is not necessary for common engineering purposes; however, it might benefit the testing of emerging thin structures such as fuel cell electrolytes, gas conversion membranes, and coatings when Poisson's ratio is less than 0.15 and high precision is needed.
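The spreadsheet interpolation approach amounts to a one-dimensional table lookup with linear interpolation between tabulated entries. The (t/R, factor) pairs below are hypothetical placeholders, not values from ASTM C1259 or E1876:

```python
import numpy as np

# Hypothetical correction-factor table indexed by thickness-to-radius ratio.
# These numbers are invented for illustration; use the actual ASTM tables.
t_over_R = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
factor   = np.array([1.000, 1.012, 1.026, 1.041, 1.058])

def lookup(x):
    """Linearly interpolate the table at thickness-to-radius ratio x."""
    return np.interp(x, t_over_R, factor)
```

For example, `lookup(0.075)` returns the midpoint of the 0.05 and 0.10 entries, which is what the automated spreadsheet routine computes between table rows.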
Effects of acceleration in the Gz axis on human cardiopulmonary responses to exercise.
Bonjour, Julien; Bringard, Aurélien; Antonutto, Guglielmo; Capelli, Carlo; Linnarsson, Dag; Pendergast, David R; Ferretti, Guido
2011-12-01
The aim of this paper was to develop a model from experimental data allowing prediction of the cardiopulmonary responses to steady-state submaximal exercise in varying gravitational environments, with acceleration in the Gz axis (a_g) ranging from 0 to 3 g. To this aim, we combined data from three different experiments, carried out at Buffalo, at Stockholm and aboard the Mir Station. Oxygen consumption, as expected, increased linearly with a_g. In contrast, heart rate increased non-linearly with a_g, whereas stroke volume decreased non-linearly: both were described by quadratic functions. Thus, the relationship between cardiac output and a_g was described by a fourth-power regression equation. Mean arterial pressure increased non-linearly with a_g, a relation that we again interpolated with a quadratic function. Thus, total peripheral resistance varied linearly with a_g. These data led us to predict that maximal oxygen consumption would decrease drastically as a_g is increased. Maximal oxygen consumption would become equal to resting oxygen consumption when a_g is around 4.5 g, indicating the practical impossibility for humans to stay and work on the largest planets of the Solar System.
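A quadratic regression of the kind used here for heart rate versus a_g can be sketched as a least-squares polynomial fit; the data points are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical steady-state heart-rate measurements versus Gz acceleration.
a_g = np.array([0.0, 1.0, 2.0, 3.0])        # acceleration in the Gz axis (g)
hr = np.array([70.0, 85.0, 110.0, 150.0])   # heart rate (beats/min), invented

coeffs = np.polyfit(a_g, hr, deg=2)   # quadratic: hr ~ c2*a^2 + c1*a + c0
hr_model = np.poly1d(coeffs)          # callable fitted model
```

The same pattern with `deg=2` for stroke volume and arterial pressure, and a product of two quadratics for cardiac output, reproduces the regression structure described in the abstract.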
LIP: The Livermore Interpolation Package, Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-07-06
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
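The simplest of LIP's methods, piecewise-bilinear interpolation on a rectangular mesh, can be sketched as follows; this is a generic implementation, not LIP's actual C interface:

```python
import numpy as np

def bilinear(xg, yg, f, x, y):
    """Piecewise-bilinear interpolation on a rectangular mesh.
    xg, yg are 1-D ascending grids; f is the (len(xg), len(yg)) table of
    values; (x, y) is the query point (clamped to the outermost cells)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])   # local coordinate in x
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])   # local coordinate in y
    return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
            + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])
```

Bilinear interpolation reproduces any function of the form a + bx + cy + dxy exactly on each cell; the bicubic and Hermite variants add smoothness across cell boundaries at the cost of derivative estimation.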
LIP: The Livermore Interpolation Package, Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-01-04
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
Dynamic Response of Multiphase Porous Media
1993-06-16
Figure 3.3: Extrapolation of model-fit parameters s, t, R (s = 1.03 interpolated from model fit; t = R = 0.0 set linearly). … To expedite the testing, the system was equipped with solenoid-operated valves so that the tests could be conducted by a single operator. … Figure 6.6 shows the incident bar entering the pressure vessel that contains the test specimen; the hose and valves are for filling.
1993-09-01
… the geometrical center of the control volume, coupled with the use of linear interpolation for internodal variation, usually leads to non-… fuel using the … such as copper, sulfur, and nitrogen. It is assumed that non-depleting … combustor concepts … proper control of fuel-air mixing is essential. One case, however, exhibited a very non-uniform distribution of fuel liquid.
Comparisons between Nimbus 6 satellite and rawinsonde soundings for several geographical areas
NASA Technical Reports Server (NTRS)
Cheng, N. M.; Scoggins, J. R.
1981-01-01
Good agreement between satellite and weighted (linearly interpolated) rawinsonde temperatures and temperature-derived parameters was found in most instances, with the poorest agreement either near the tropopause region or near the ground. However, satellite moisture data are highly questionable. The smallest discrepancy between satellite and weighted-mean rawinsonde temperature and parameters derived from temperature was found over water, and the largest discrepancy was found over mountains. Cumulative frequency distributions show that discrepancies between satellite and rawinsonde data can be represented by a normal distribution, except for dew-point temperature.
A graphics package for meteorological data, version 1.5
NASA Technical Reports Server (NTRS)
Moorthi, Shrinivas; Suarez, Max; Phillips, Bill; Schemm, Jae-Kyung; Schubert, Siegfried
1989-01-01
A plotting package has been developed to simplify the task of plotting meteorological data. Calling sequences and examples are given for high-level yet flexible routines that allow contouring, vectors and shading on cylindrical, polar, orthographic and Mollweide (egg) projections. Routines are also included for contouring pressure-latitude and pressure-longitude fields with linear or log scales in pressure (interpolation to a fixed grid interval is done automatically). Also included is a fairly general line plotting routine. The present version (1.5) produces plots on WMS laser printers and uses graphics primitives from WOLFPLOT.
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several methods of interpolation, which give differing results depending on the environmental conditions and the input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, are optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods for production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for creating elevations. The results show that AI methods have high potential for the interpolation of elevations, and that using artificial network algorithms for interpolation and optimisation of the IDW method with GA can estimate elevations with high precision.
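The baseline IDW interpolator that the paper then optimises with a GA can be sketched as follows; the GA and neural-network components are not reproduced here:

```python
import numpy as np

def idw(points, values, query, power=2):
    """Inverse Distance Weighting: each sample contributes to the estimate
    with weight 1/d**power, where d is its distance to the query point.
    points is an (n, 2) array of sample locations, values an (n,) array of
    elevations, query a length-2 location."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                 # query coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)
```

The `power` exponent is the kind of free parameter a GA could tune against held-out elevation samples: larger values localize the estimate around the nearest neighbours, smaller values smooth it toward the global mean.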
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(x_i, f_i) : i = 1, 2, 3, ..., n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
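The exact-fitting, smooth behaviour attributed to EBI can be illustrated with a Gaussian radial-basis-function interpolant; this is a generic example-based scheme, not the authors' specific mechanism or its NVS extension:

```python
import numpy as np

def rbf_interpolate(xs, fs, xq, eps=1.0):
    """Example-based interpolation with Gaussian radial basis functions.
    The interpolant reproduces every (x_i, f_i) example exactly and varies
    smoothly between examples. xs, fs are the example inputs/outputs; xq
    are the query inputs; eps controls the kernel width."""
    xs = np.asarray(xs, float)
    K = np.exp(-eps * (xs[:, None] - xs[None, :])**2)    # Gram matrix
    coeffs = np.linalg.solve(K, np.asarray(fs, float))   # fit the examples
    Kq = np.exp(-eps * (np.asarray(xq, float)[:, None] - xs[None, :])**2)
    return Kq @ coeffs
```

In the NVS analogy, each example pair would be a (viewpoint, image) sample and the query a novel viewpoint; real image functions are far higher-dimensional, which is where the abstract's noted limitation arises.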
TerraceM: A Matlab® tool to analyze marine terraces from high-resolution topography
NASA Astrophysics Data System (ADS)
Jara-Muñoz, Julius; Melnick, Daniel; Strecker, Manfred
2015-04-01
Light detection and ranging (LiDAR) high-resolution topographic data sets enable remote identification of submeter-scale geomorphic features, providing valuable information about the landscape and about geomorphic markers of tectonic deformation such as fault-scarp offsets and fluvial and marine terraces. Recent studies of marine terraces using LiDAR data have demonstrated that these landforms can be readily isolated from other landforms in the landscape using slope and roughness parameters, allowing regional extents of terrace sequences to be mapped unambiguously. Marine terrace elevations have been used for decades as geodetic benchmarks of Quaternary deformation. Uplift rates may be estimated by locating the shoreline angle, a geomorphic feature correlated with the high-stand position of past sea levels. Indeed, precise identification of the shoreline-angle position is an important requirement for obtaining reliable tectonic rates and coherent spatial correlation. To improve our ability to rapidly assess and map shoreline angles at a regional scale, we have developed the TerraceM application. TerraceM is a Matlab® tool for estimating the shoreline angle and its associated error from high-resolution topography. For convenience, TerraceM includes a graphical user interface (GUI) linked with the Google Maps® API. The analysis starts by defining swath profiles from a shapefile, created on a GIS platform, oriented orthogonally to the terrace riser. TerraceM functions are included to extract and analyze the swath profiles. Two types of coastal landscape may be analyzed using different methodologies: staircase sequences of multiple terraces, and rough, rocky coasts. The former are measured by outlining the paleo-cliffs and paleo-platforms, whereas the latter are assessed by picking the elevations of sea-stack tops.
The shoreline angle of staircase terraces is defined by calculating the intersection between first-order interpolations of the maximum topography of the swath profiles. For rocky coasts, the maximum stack peaks within a defined search radius, together with a defined inflection point on the adjacent main cliff, are interpolated to calculate the shoreline angle at the intersection with the cliff. Error estimates are based on the standard deviation of the linear regressions. The geomorphic age of terraces (Kt) can also be calculated with the linear diffusion equation (Hanks et al., 1989), with the best-fitting model found by minimizing the RMS. TerraceM can efficiently process several profiles in batch mode. Results may be exported in various formats, including Google Earth and ArcGIS, and basic statistics are computed automatically. Test runs have been made at Santa Cruz, California, using various topographic data sets and comparing results with published field measurements (Anderson and Menking, 1994). Repeatability was evaluated using multiple test runs made by students in a classroom setting.
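The shoreline-angle estimate for staircase terraces reduces to intersecting two first-order (linear) fits, one to the paleo-platform and one to the paleo-cliff. The sketch below shows only that intersection step; TerraceM additionally extracts swath profiles and propagates the regression errors:

```python
import numpy as np

def shoreline_angle(x_plat, z_plat, x_cliff, z_cliff):
    """Locate a shoreline angle as the intersection of two linear fits:
    one to paleo-platform points, one to paleo-cliff points. x arrays are
    distances along the swath profile, z arrays are elevations. Returns
    the (distance, elevation) of the intersection."""
    m1, b1 = np.polyfit(x_plat, z_plat, 1)    # platform: z = m1*x + b1
    m2, b2 = np.polyfit(x_cliff, z_cliff, 1)  # cliff:    z = m2*x + b2
    x0 = (b2 - b1) / (m1 - m2)                # equate the two lines
    return x0, m1 * x0 + b1
```

The scatter of the residuals about each regression line is what drives the error bar that TerraceM reports for the shoreline-angle elevation.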
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
3-d interpolation in object perception: evidence from an objective performance paradigm.
Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana
2005-06-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).
The Linear Bicharacteristic Scheme for Computational Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Chan, Siew-Loong
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and the treatment of perfect electrical conductors (PECs) is shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions, and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
Spectral Invariant Behavior of Zenith Radiance Around Cloud Edges Observed by ARM SWS
NASA Technical Reports Server (NTRS)
Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.
2009-01-01
The ARM Shortwave Spectrometer (SWS) measures zenith radiance at 418 wavelengths between 350 and 2170 nm. Because of its 1-sec sampling resolution, the SWS provides a unique capability to study the transition zone between cloudy and clear sky areas. A spectral invariant behavior is found between ratios of zenith radiance spectra during the transition from cloudy to cloud-free. This behavior suggests that the spectral signature of the transition zone is a linear mixture between the two extremes (definitely cloudy and definitely clear). The weighting function of the linear mixture is a wavelength-independent characteristic of the transition zone. It is shown that the transition zone spectrum is fully determined by this function and zenith radiance spectra of clear and cloudy regions. An important result of these discoveries is that high temporal resolution radiance measurements in the clear-to-cloud transition zone can be well approximated by lower temporal resolution measurements plus linear interpolation.
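The spectrally invariant mixture described above is simple to state in code. The sketch below uses synthetic endmember spectra (the wavelength grid matches the SWS sampling, but the radiance values are illustrative, not SWS data) and shows that a single wavelength-independent weight reproduces a transition-zone spectrum from the clear and cloudy extremes:

```python
import numpy as np

# Hypothetical endmember spectra (illustrative values, not SWS measurements)
wavelengths = np.linspace(350.0, 2170.0, 418)      # nm, matching the SWS sampling
I_clear = 0.05 + 0.0001 * wavelengths              # synthetic clear-sky radiance
I_cloudy = np.full_like(wavelengths, 0.30)         # synthetic cloudy radiance

def transition_spectrum(weight, I_clear, I_cloudy):
    """Spectrally invariant mixture: one wavelength-independent weight."""
    return weight * I_cloudy + (1.0 - weight) * I_clear

I_mix = transition_spectrum(0.4, I_clear, I_cloudy)

# At every wavelength the mixture lies the same fractional distance between
# the two extremes, so the recovered weight is wavelength-independent.
w_est = (I_mix - I_clear) / (I_cloudy - I_clear)
print(np.allclose(w_est, 0.4))  # True
```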
Estimation of available global solar radiation using sunshine duration over South Korea
NASA Astrophysics Data System (ADS)
Das, Amrita; Park, Jin-ki; Park, Jong-hwa
2015-11-01
Besides being needed to design solar energy systems, accurate insolation data are a key input for many biological and atmospheric studies. However, solar radiation stations are sparse owing to financial and technical limitations, and this scarcity limits the attainable spatial resolution whenever a solar radiation map is constructed. Several models exist in the literature for estimating incoming solar radiation from the sunshine fraction. Seventeen such models, of which six are linear and 11 are non-linear, were chosen for estimating solar radiation on a horizontal surface over South Korea. The better performance of a non-linear model indicates that the relationship between sunshine duration and clearness index does not follow a straight line. With such a model, solar radiation at the 79 stations measuring sunshine duration is computed and used as input for spatial interpolation. Finally, monthly solar radiation maps are constructed using the Ordinary Kriging method. Cross-validation results show good agreement between observed and predicted data.
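The linear sunshine-duration models referenced here are typically of the classical Ångström-Prescott form, relating the clearness index to the sunshine fraction. A minimal sketch (the coefficients 0.25 and 0.50 are the commonly quoted default values, not coefficients fitted for South Korea):

```python
def angstrom_prescott(S, S0, H0, a=0.25, b=0.50):
    """Classical linear sunshine-duration model: H/H0 = a + b * (S/S0).
    S: sunshine duration, S0: day length, H0: extraterrestrial radiation.
    a, b are site-specific; 0.25/0.50 are the often-quoted defaults."""
    clearness = a + b * (S / S0)
    return clearness * H0

# Example: 8 h of sunshine out of 12 possible, H0 = 30 MJ m^-2 day^-1
H = angstrom_prescott(8.0, 12.0, 30.0)
print(H)  # (0.25 + 0.5 * 2/3) * 30 = 17.5
```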
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Karaguelle, H.; Lee, S. S.; Williams, J., Jr.
1984-01-01
The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. The impulse response and frequency response of the continuous-time ultrasonic test system are then estimated by interpolating the defining points in the unit sample response and frequency response of the discrete-time system. It is found that the ultrasonic test system behaves as a linear-phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations, for tone-burst inputs whose center frequencies lie within the passband of the test system, and for single-cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
NASA Technical Reports Server (NTRS)
Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert
2017-01-01
Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that impact the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-Dependent LER (GLER) captures these effects through its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets, as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but its complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
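The speed-versus-accuracy trade-off of LUT interpolation can be illustrated with a one-dimensional stand-in for the expensive radiative transfer calculation (the function and grid sizes below are made up; real GLER tables are multi-dimensional in geometry and wavelength):

```python
import numpy as np

def expensive_model(x):
    """Placeholder for an expensive per-pixel radiative transfer run."""
    return np.exp(-x) * np.sin(3.0 * x)

grid = np.linspace(0.0, 2.0, 41)       # precomputed nodes, built once offline
table = expensive_model(grid)

query = np.linspace(0.05, 1.95, 500)   # many "pixels" to evaluate quickly
approx = np.interp(query, grid, table) # fast linear LUT interpolation
exact = expensive_model(query)

max_err = np.max(np.abs(approx - exact))
print(max_err < 0.01)  # linear interpolation suffices for a smooth function
```

For strongly non-linear functions the table must be refined (or replaced by a fitted neural network, as the abstract suggests) to keep this error acceptable.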
A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Grunberg, D. B.
1986-01-01
A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.
Calibration of DEM parameters on shear test experiments using Kriging method
NASA Astrophysics Data System (ADS)
Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier
2017-06-01
Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains that has to be used is a critical step: simulations with too few grains would not represent the realistic behavior of the powder, while using a huge number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, defined as the distance between the simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point that has to be simulated. This stochastic criterion draws on the two outputs of the Kriging method: the prediction of the objective function and the estimate of its error. It is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would yield.
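The Expected Improvement criterion at the heart of EGO combines exactly the two Kriging outputs mentioned above: the predicted objective and its error estimate. A minimal sketch of the standard EI formula for minimization (the textbook formula, not the authors' specific implementation):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement for minimization: balances the Kriging
    prediction mu against its estimated uncertainty sigma, relative to
    the best objective value f_best found so far."""
    if sigma <= 0.0:
        return 0.0                      # no uncertainty: nothing to explore
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted to improve on f_best scores higher than one that
# is predicted to match it, at equal uncertainty.
print(expected_improvement(mu=1.0, sigma=0.5, f_best=2.0) >
      expected_improvement(mu=1.0, sigma=0.5, f_best=1.0))  # True
```

EGO simulates the DEM model at the maximizer of this criterion, updates the Kriging surrogate, and repeats.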
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with the high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Use of loading-unloading compression curves in medical device design
NASA Astrophysics Data System (ADS)
Ciornei, M. C.; Alaci, S.; Ciornei, F. C.; Romanu, I. C.
2017-08-01
The paper presents a method and experimental results for the mechanical testing of soft materials. In order to characterize the mechanical behaviour of technological materials used in prostheses, a large number of material constants is required, as well as a comparison to the original tissue. The present paper proposes as methodology the comparison between compression loading-unloading curves for a soft biological tissue and for a synthetic material. To this purpose, a device was designed based on the principle of the dynamic hardness test. A moving load is applied, and the force upon the indenter is controlled during the loading-unloading phases. The load and specimen deformation are recorded simultaneously. A significant contribution of this paper is the interpolation of the experimental data by power-law functions, a difficult task because of the instability of the system of equations to be optimized. Finding the interpolation function was simplified from solving a system of transcendental equations to solving a single equation. The characteristic parameters of the experimental curves must be compared to those of the actual tissue. The tests were performed for two cases: first, using a spherical punch, and second, using a flat-ended cylindrical punch.
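Power-law interpolation of loading data is often simplified by fitting in log-log space, where the problem becomes a straight-line fit. The sketch below uses that common reduction on synthetic, noise-free data (illustrative only; it is not the authors' single-equation scheme, and the data are made up):

```python
import numpy as np

# Fit F = k * d**n to loading data: in log space this is linear,
# log F = log k + n * log d, so ordinary least squares recovers (k, n).
d = np.array([0.1, 0.2, 0.4, 0.8])   # indentation depth (mm), synthetic
F = 5.0 * d ** 1.5                   # synthetic force data with k=5, n=1.5

n, log_k = np.polyfit(np.log(d), np.log(F), 1)  # slope = n, intercept = log k
k = np.exp(log_k)
print(round(k, 6), round(n, 6))  # 5.0 1.5 (exact on noise-free data)
```

On noisy data this log-space fit weights small forces heavily, which is one reason direct non-linear fitting, as attempted in the paper, can be preferable despite its instability.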
Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago
2018-07-12
The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Goddijn-Murphy, L. M.; Woolf, D. K.; Land, P. E.; Shutler, J. D.; Donlon, C.
2015-07-01
Climatologies, or long-term averages, of essential climate variables are useful for evaluating models and providing a baseline for studying anomalies. The Surface Ocean CO2 Atlas (SOCAT) has made millions of global underway sea surface measurements of CO2 publicly available, all in a uniform format and presented as fugacity, fCO2. As fCO2 is highly sensitive to temperature, the measurements are only valid for the instantaneous sea surface temperature (SST) that is measured concurrently with the in-water CO2 measurement. To create a climatology of fCO2 data suitable for calculating air-sea CO2 fluxes, it is therefore desirable to calculate fCO2 valid for a more consistent and averaged SST. This paper presents the OceanFlux Greenhouse Gases methodology for creating such a climatology. We recomputed SOCAT's fCO2 values for their respective measurement month and year using monthly composite SST data on a 1° × 1° grid from satellite Earth observation and then extrapolated the resulting fCO2 values to reference year 2010. The data were then spatially interpolated onto a 1° × 1° grid of the global oceans to produce 12 monthly fCO2 distributions for 2010, including the prediction errors of fCO2 produced by the spatial interpolation technique. The partial pressure of CO2 (pCO2) is also provided for those who prefer to use pCO2. The CO2 concentration difference between ocean and atmosphere is the thermodynamic driving force of the air-sea CO2 flux, and hence the presented fCO2 distributions can be used in air-sea gas flux calculations together with climatologies of other climate variables.
Husser, Edgar; Bargmann, Swantje
2017-01-01
The mechanical behavior of single crystalline, micro-sized copper is investigated in the context of cantilever beam bending experiments. Particular focus is on the role of geometrically necessary dislocations (GNDs) during bending-dominated load conditions and their impact on the characteristic bending size effect. Three different sample sizes are considered in this work with main variation in thickness. A gradient extended crystal plasticity model is presented and applied in a three-dimensional finite-element (FE) framework considering slip system-based edge and screw components of the dislocation density vector. The underlying mathematical model contains non-standard evolution equations for GNDs, crystal-specific interaction relations, and higher-order boundary conditions. Moreover, two element formulations are examined and compared with respect to size-independent as well as size-dependent bending behavior. The first formulation is based on a linear interpolation of the displacement and the GND density field together with a full integration scheme whereas the second is based on a mixed interpolation scheme. While the GND density fields are treated equivalently, the displacement field is interpolated quadratically in combination with a reduced integration scheme. Computational results indicate that GND storage in small cantilever beams strongly influences the evolution of statistically stored dislocations (SSDs) and, hence, the distribution of the total dislocation density. As a particular example, the mechanical bending behavior in the case of a physically motivated limitation of GND storage is studied. The resulting impact on the mechanical bending response as well as on the predicted size effect is analyzed. Obtained results are discussed and related to experimental findings from the literature. PMID:28772657
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated by a finite-difference technique on each local support domain Ωi. On each Ωi, we need to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). The scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain its smallest and largest singular values. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system has to be solved. On the other hand, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the applicable model investigated in the current research work.
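The condition-number check that drives the shape-parameter selection can be sketched as follows (the multiquadric basis and the 1-D node set are illustrative; the cited algorithm additionally adjusts the shape parameter until the condition number lies in a target range):

```python
import numpy as np

def mq_matrix(nodes, c):
    """Multiquadric RBF interpolation matrix A_ij = sqrt((x_i - x_j)^2 + c^2)."""
    d = nodes[:, None] - nodes[None, :]
    return np.sqrt(d ** 2 + c ** 2)

def condition_number(nodes, c):
    """Ratio of the largest to the smallest singular value, via SVD."""
    s = np.linalg.svd(mq_matrix(nodes, c), compute_uv=False)
    return s[0] / s[-1]

nodes = np.linspace(0.0, 1.0, 9)
# A flatter basis (larger shape parameter) is typically more accurate but
# worse conditioned; a Sarra-style algorithm tunes c so the condition
# number stays in a numerically safe window.
print(condition_number(nodes, 0.1) < condition_number(nodes, 1.0))  # True
```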
Probabilistic surface reconstruction from multiple data sets: An example for the Australian Moho
NASA Astrophysics Data System (ADS)
Bodin, T.; Salmon, M.; Kennett, B. L. N.; Sambridge, M.
2012-10-01
Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): "We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics". Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.
TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION
NASA Technical Reports Server (NTRS)
Smith, R. E.
1994-01-01
TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
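The core two-boundary idea, Hermite cubic interpolation between the bottom and top boundaries, can be sketched as follows. The boundary shapes are made up, and this sketch uses zero normal derivatives at both boundaries, omitting TBGG's control functions and side-boundary blending:

```python
import numpy as np

def two_boundary_grid(bottom, top, n_eta):
    """Hermite cubic interpolation in the eta direction between two fixed,
    nonintersecting boundaries, each given as an (n_xi, 2) point array."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2 * eta**3 - 3 * eta**2 + 1   # Hermite basis: value at bottom
    h01 = -2 * eta**3 + 3 * eta**2      # Hermite basis: value at top
    # Derivative terms are zero in this simplified sketch
    return h00 * bottom[None] + h01 * top[None]

xi = np.linspace(0.0, 1.0, 5)
bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=1)  # curved lower wall
top = np.stack([xi, np.ones_like(xi)], axis=1)             # flat upper wall
grid = two_boundary_grid(bottom, top, n_eta=4)
print(grid.shape)  # (4, 5, 2): n_eta x n_xi x (x, y)
print(np.allclose(grid[0], bottom), np.allclose(grid[-1], top))  # True True
```

In TBGG the uniform `eta` would be replaced by control functions that cluster grid lines where resolution is needed.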
Fast and low-cost method for VBES bathymetry generation in coastal areas
NASA Astrophysics Data System (ADS)
Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J.
2012-12-01
Sea floor topography is key information for coastal area management. Nowadays, LiDAR and multibeam technologies provide accurate bathymetries in those areas; however, these methodologies are still too expensive for small customers (fishermen associations, small research groups) wishing to keep a periodic surveillance of environmental resources. In this paper, we analyse a simple methodology for vertical beam echosounder (VBES) bathymetric data acquisition and postprocessing, using low-cost means and free customizable tools such as ECOSONS and gvSIG (the latter compared with the industry-standard ArcGIS). Echosounder data were filtered, resampled, and interpolated (using kriging or radial basis functions). Moreover, the presented methodology includes two data-correction processes, Monte Carlo simulation, used to reduce GPS errors, and manually applied bathymetric-line transformations, both of which improve the obtained results. As an example, we present the bathymetry of the Ría de Cedeira (Galicia, NW Spain), a good testbed for coastal bathymetry methodologies given its extension and rich topography. The statistical analysis, performed by direct ground-truthing, rendered an upper error bound of 1.7 m at the 95% confidence level and an r.m.s. of 0.7 m (cross-validation provided 30 cm and 25 cm, respectively). The methodology presented is fast and easy to implement, accurate outside transects (accuracy can be estimated), and can be used as a low-cost periodic monitoring method.
Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P
2016-03-01
Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or on distance-weighted averages of measurements at nearest-neighbour points. Ordinary least-squares (OLS) linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70% of the data to fit the models and the remaining 30% to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; again, the OLS model performed significantly better than the Gaussian-Matérn model.
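The second approach, combining several interpolation measures through an ordinary-least-squares regression, can be sketched on simulated data. All numbers below are synthetic stand-ins (the mean and spread mimic the reported 96 and 23 nGy/h, but nothing here is survey data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
truth = rng.normal(96.0, 23.0, n)          # simulated "true" dose rates, nGy/h
m1 = truth + rng.normal(0.0, 15.0, n)      # e.g. an area-average measure
m2 = truth + rng.normal(0.0, 25.0, n)      # e.g. a noisier neighbour measure
y = truth + rng.normal(0.0, 5.0, n)        # held-out indoor measurements

# OLS combination of the interpolation measures (intercept + two predictors)
X = np.column_stack([np.ones(n), m1, m2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta

mse_combined = np.mean((pred - y) ** 2)
mse_m1_alone = np.mean((m1 - y) ** 2)
print(mse_combined < mse_m1_alone)  # the combination beats either measure alone
```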
NASA Astrophysics Data System (ADS)
Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.
2016-12-01
Manual carbon dioxide flux measurements with closed chambers during scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly based on non-linear regressions for the light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements in the range of ±5 g C m-2 day-1. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear-regression-based algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validation using eddy covariance measurements as campaign fluxes, and weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at the seasonal time scale, even for a managed ecosystem with high temporal dynamics in the fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes in-between the manual chamber campaigns.
MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system
Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron
2011-01-01
We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Especially for long simulations, energy conservation is critical due to the phenomenon known as "energy drift", in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy, the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found to be most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
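The lookup-table trade-off described in the last paragraph (quadratic interpolation on a coarser grid versus linear interpolation on a finer one) can be sketched as follows. The kernel function and the table sizes are illustrative, not MOIL's actual non-bonded tables:

```python
import numpy as np

def f(r):
    """Smooth stand-in for a pairwise-interaction kernel."""
    return np.exp(-r) / (r + 1.0)

def lut_linear(x, grid, table):
    return np.interp(x, grid, table)

def lut_quadratic(x, grid, table):
    """Three-point Lagrange quadratic interpolation on a uniform grid."""
    h = grid[1] - grid[0]
    i = np.clip(((x - grid[0]) / h).astype(int), 1, len(grid) - 2)
    t = (x - grid[i]) / h                       # offset from the central node
    y0, y1, y2 = table[i - 1], table[i], table[i + 1]
    return y1 + 0.5 * t * (y2 - y0) + 0.5 * t * t * (y2 - 2 * y1 + y0)

x = np.linspace(0.3, 4.7, 1000)
coarse = np.linspace(0.0, 5.0, 129)   # quadratic table: 129 nodes
fine = np.linspace(0.0, 5.0, 257)     # linear table: twice as many nodes
err_quad = np.max(np.abs(lut_quadratic(x, coarse, f(coarse)) - f(x)))
err_lin = np.max(np.abs(lut_linear(x, fine, f(fine)) - f(x)))
print(err_quad < err_lin)  # higher order beats a finer linear table here
```

The smaller table also fits better in cache, which is part of why the quadratic variant was faster in practice.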
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies
ERIC Educational Resources Information Center
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-01-01
Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon the speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can differ significantly, with relative differences exceeding 80%. For the IC-GN algorithm, the choice of gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
Besser, J M; Wang, N; Dwyer, F J; Mayer, F L; Ingersoll, C G
2005-02-01
Early life-stage toxicity tests with copper and pentachlorophenol (PCP) were conducted with two species listed under the United States Endangered Species Act (the endangered fountain darter, Etheostoma fonticola, and the threatened spotfin chub, Cyprinella monacha) and two commonly tested species (fathead minnow, Pimephales promelas, and rainbow trout, Oncorhynchus mykiss). Results were compared using lowest-observed effect concentrations (LOECs) based on statistical hypothesis tests and by point estimates derived by linear interpolation and logistic regression. Sublethal end points, growth (mean individual dry weight) and biomass (total dry weight per replicate) were usually more sensitive than survival. The biomass end point was equally sensitive as growth and had less among-test variation. Effect concentrations based on linear interpolation were less variable than LOECs, which corresponded to effects ranging from 9% to 76% relative to controls and were consistent with thresholds based on logistic regression. Fountain darter was the most sensitive species for both chemicals tested, with effect concentrations for biomass at < or = 11 microg/L (LOEC and 25% inhibition concentration [IC25]) for copper and at 21 microg/L (IC25) for PCP, but spotfin chub was no more sensitive than the commonly tested species. Effect concentrations for fountain darter were lower than current chronic water quality criteria for both copper and PCP. Protectiveness of chronic water-quality criteria for threatened and endangered species could be improved by the use of safety factors or by conducting additional chronic toxicity tests with species and chemicals of concern.
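The linear-interpolation point estimates (ICp) used in the study above can be sketched as follows; this is a simplified version of the standard ICp procedure (no monotone smoothing or bootstrap confidence intervals), and the concentration-response numbers are hypothetical:

```python
def icp_linear(concs, responses, p=25.0):
    """Point estimate of the concentration causing a p% reduction in
    the response relative to the control (concs[0]), by linear
    interpolation between the two bracketing test concentrations.
    Simplified sketch: assumes responses decline monotonically."""
    target = responses[0] * (1.0 - p / 100.0)
    for i in range(1, len(responses)):
        if responses[i] <= target:
            # Fraction of the way from concs[i-1] to concs[i] at which
            # the response crosses the target level.
            frac = (responses[i - 1] - target) / (responses[i - 1] - responses[i])
            return concs[i - 1] + frac * (concs[i] - concs[i - 1])
    return None   # target effect not reached at the highest concentration

# Hypothetical biomass (mg dry weight per replicate) vs. copper (ug/L):
concs = [0.0, 5.0, 10.0, 20.0, 40.0]
biomass = [12.0, 11.5, 10.0, 7.0, 3.0]
ic25 = icp_linear(concs, biomass)   # target biomass = 9.0, bracketed by 10-20 ug/L
```

Because the estimate interpolates between observed responses rather than relying on a significance test at fixed doses, it varies less between tests than a LOEC, which is the pattern the study reports.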
3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm
ERIC Educational Resources Information Center
Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana
2005-01-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Borak, Jordan S.
2008-01-01
Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
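The temporal component of such gap-filling can be sketched as linear interpolation across missing composites; the LAI values below are hypothetical, and the study's actual hybrid method also draws on spatial neighbors:

```python
def fill_gaps(series):
    """Fill None gaps in a time series by linear interpolation between
    the nearest valid observations; gaps at either end copy the
    nearest valid value."""
    vals = list(series)
    valid = [i for i, v in enumerate(vals) if v is not None]
    for i, v in enumerate(vals):
        if v is not None:
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:
            vals[i] = vals[right]          # leading gap: copy forward
        elif right is None:
            vals[i] = vals[left]           # trailing gap: copy backward
        else:
            t = (i - left) / (right - left)
            vals[i] = (1 - t) * vals[left] + t * vals[right]
    return vals

# Hypothetical 8-day LAI composites with missing retrievals:
lai = [1.2, None, 1.8, 2.4, None, None, 3.0]
filled = fill_gaps(lai)   # -> [1.2, 1.5, 1.8, 2.4, 2.6, 2.8, 3.0]
```

A spatial interpolator would instead predict each missing pixel from valid pixels in the same composite; the paper's finding is that which of the two works better depends on land cover, so a production gap-filler combines both.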
Discrete Fourier transforms of nonuniformly spaced data
NASA Technical Reports Server (NTRS)
Swan, P. R.
1982-01-01
Time series or spatial series of measurements taken with nonuniform spacings have failed to yield fully to analysis using the Discrete Fourier Transform (DFT). This is due to the fact that the formal DFT is the convolution of the transform of the signal with the transform of the nonuniform spacings. Two original methods are presented for deconvolving such transforms for signals containing significant noise. The first method solves a set of linear equations relating the observed data to values defined at uniform grid points, and then obtains the desired transform as the DFT of the uniform interpolates. The second method solves a set of linear equations relating the real and imaginary components of the formal DFT directly to those of the desired transform. The results of numerical experiments with noisy data are presented in order to demonstrate the capabilities and limitations of the methods.
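The first method, solving a linear system that relates values on a uniform grid to the nonuniform observations and then taking the DFT of the uniform interpolates, can be sketched as below. The use of periodic linear-interpolation weights and a least-squares solve is an illustrative choice, not necessarily the paper's exact formulation:

```python
import numpy as np

def uniform_interpolates(t, y, n, T):
    """Solve a (least-squares) linear system relating nonuniform samples
    y(t) to unknown values u on a uniform periodic grid of n points over
    [0, T), assuming each sample is a linear interpolation of its two
    neighboring grid values."""
    A = np.zeros((len(t), n))
    for k, tk in enumerate(t):
        x = (tk % T) / T * n
        j = int(x)
        frac = x - j
        A[k, j % n] += 1.0 - frac
        A[k, (j + 1) % n] += frac      # periodic wrap-around
    u, *_ = np.linalg.lstsq(A, y, rcond=None)
    return u

rng = np.random.default_rng(0)
T, n = 1.0, 16
t = np.sort(rng.uniform(0.0, T, 64))   # nonuniformly spaced sample times
y = np.sin(2 * np.pi * 3 * t / T)      # signal: 3 cycles per record
u = uniform_interpolates(t, y, n, T)
spec = np.fft.fft(u) / n               # DFT of the uniform interpolates
peak = int(np.argmax(np.abs(spec[: n // 2])))   # dominant frequency bin
```

The recovered spectrum peaks at bin 3, the true signal frequency, without the sidelobes that the formal DFT of the nonuniform samples would convolve in.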
A Flight Control System for Small Unmanned Aerial Vehicle
NASA Astrophysics Data System (ADS)
Tunik, A. A.; Nadsadnaya, O. I.
2018-03-01
The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities, which define the flight operating range. An FCS with program adaptation based on static feedback (SF) is selected. The SF parameters for each sub-range of the true airspeed are determined using the linear matrix inequality approach for discrete systems, to synthesize a suboptimal robust H∞ controller. Lagrange interpolation between true-airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is demonstrated on an example heading stabilization system.
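The Lagrange interpolation of controller parameters between airspeed design points can be sketched as follows; the design speeds and gain values are hypothetical stand-ins for the gains an LMI-based H∞ synthesis would produce at each trim point:

```python
def lagrange(xs, ys, x):
    """Lagrange interpolating polynomial through (xs, ys), evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_i(x)
        total += yi * w
    return total

# Hypothetical static-feedback gains designed at three trim airspeeds (m/s);
# in flight, the gain is interpolated at the measured true airspeed.
speeds = [15.0, 25.0, 35.0]
k_pitch = [0.80, 0.55, 0.42]
k_at_20 = lagrange(speeds, k_pitch, 20.0)   # -> 0.66, continuous gain scheduling
```

Interpolating the gains, rather than switching discretely between sub-range designs, avoids transients at sub-range boundaries.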
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
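Trilinear interpolation itself is compact; a minimal sketch (not the authors' implementation) that blends the eight corner voxels of the enclosing cell is:

```python
def trilinear(vol, x, y, z):
    """Trilinear interpolation in a 3-D volume stored as vol[i][j][k],
    for fractional coordinates (x, y, z) in voxel units."""
    i, j, k = int(x), int(y), int(z)
    dx, dy, dz = x - i, y - j, z - k
    c = 0.0
    # Weighted sum over the 8 corners of the enclosing voxel cell.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((dx if di else 1 - dx)
                     * (dy if dj else 1 - dy)
                     * (dz if dk else 1 - dz))
                c += w * vol[i + di][j + dj][k + dk]
    return c

# A 2x2x2 volume whose corner values equal x + 10*y + 100*z; trilinear
# interpolation reproduces any multilinear function exactly inside the cell.
vol = [[[x + 10 * y + 100 * z for z in (0, 1)] for y in (0, 1)] for x in (0, 1)]
v = trilinear(vol, 0.25, 0.5, 0.75)   # -> 0.25 + 5.0 + 75.0 = 80.25
```

Only eight memory reads and a handful of multiplies per output voxel are needed, which is why it approaches nearest-neighbor speed.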
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Because of changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking farmland area and the development of water-saving irrigation have reduced agricultural water use, so the decline rate of the groundwater level in agricultural areas is not significant.
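Cross-validation of an interpolation method, as used in both groundwater studies above, can be sketched for inverse distance weighting; the well coordinates and levels below are hypothetical:

```python
def idw(pts, vals, q, power=2.0):
    """Inverse distance weighted estimate at query point q."""
    num = den = 0.0
    for (x, y), v in zip(pts, vals):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v                      # exact hit on an observation well
        w = d2 ** (-power / 2.0)          # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

def loo_mae(pts, vals, interp):
    """Leave-one-out cross-validation: mean absolute error when each
    well is predicted from all the others."""
    errs = []
    for i in range(len(pts)):
        rest_p = pts[:i] + pts[i + 1:]
        rest_v = vals[:i] + vals[i + 1:]
        errs.append(abs(interp(rest_p, rest_v, pts[i]) - vals[i]))
    return sum(errs) / len(errs)

# Hypothetical groundwater levels (m) at five observation wells:
wells = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
levels = [50.0, 48.0, 49.0, 47.0, 48.5]
mae = loo_mae(wells, levels, idw)
```

Running the same leave-one-out loop over each candidate interpolator (IDW, spline, the Kriging variants) and comparing the resulting error statistics is exactly the selection procedure both abstracts describe.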
NASA Astrophysics Data System (ADS)
Le Page, Michel; Gosset, Cindy; Oueslati, Ines; Calvez, Roger; Zribi, Mehrez; Lilli Chabaane, Zohra
2015-04-01
Meteorological forcing is essential to hydrological and hydro-geological modeling. In the case of the semi-arid catchment of Merguellil in Tunisia, long-term time series are only available in the plain from a SYNOP station. Other meteorological stations have been installed since 2010. This study therefore aims to assess the reliability of the meteorological forcing needed for an integrated model. We compare the meteorological data from 7 stations (sources: WMO and our own station), inside and around the Merguellil catchment, with daily gridded data at 25 × 25 km from AGRI4CAST and 50 × 50 km from WFDEI. AGRI4CAST (Biaveti et al, 2008) is an interpolated dataset based on actual weather stations, produced by the Joint Research Centre (JRC) for the Monitoring Agricultural Resources (MARS) Unit. The second version of the WFDEI dataset (Weedon et al, 2014) has been generated with the same methodology as the widely used WATCH Forcing Data (WFD), making use of the ERA-Interim reanalysis data. The meteorological variables studied are Rs, Tmoy, U2, P, RH and ET0, scored by RMSE, bias and Pearson R. For the AGRI4CAST dataset, the scores are computed over different periods depending on the variable, based on the overlap between the observed and interpolated data. The scores show good correlations for temperature, but with a spatial variability linked to station elevations. The measured and interpolated radiation also show good agreement, indicating good reliability. The Pearson R scores for relative humidity show a good correlation between observations and interpolations; however, the short comparison periods do not yield significant information, and the RMSE and bias are large. Wind speed has a large negative bias at most stations (positive at only one); only one station shows good agreement between the datasets. 
The analysis of the data indicates that wind speed and relative humidity will have to be adjusted before implementing a model. Finally, the reference evapotranspiration appears relatively consistent despite the scatter observed in the meteorological measurements, although with rather high biases and RMSE (> 1.3 mm). After revising U2 and RH, AGRI4CAST could possibly be corrected using ancillary ground stations. The analysis of the WFDEI dataset is currently under evaluation. (1) Biavetti, I., Karetsos, S., Ceglar, A., Toreti, A., Panagos, P. (2014), European meteorological data: contribution to research, development and policy support, Proc. of SPIE Vol. 9229 922907-1. (2) Weedon, G. P., G. Balsamo, N. Bellouin, S. Gomes, M. J. Best, and P. Viterbo (2014), The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data, Water Resour. Res., 50, 7505-7514, doi:10.1002/2014WR015638.
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-09-03
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
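The "extension approach" described above, treating time as an additional dimension scaled by a balancing factor before applying IDW, can be sketched as follows; the scaling factor and observations are illustrative, not the study's calibrated values:

```python
def st_idw(obs, query, c=1.0, power=2.0):
    """Spatiotemporal IDW via the 'extension approach': time is treated
    as a third axis scaled by factor c (balancing spatial and temporal
    dimensions), and distances are taken in (x, y, c*t) space."""
    qx, qy, qt = query
    num = den = 0.0
    for x, y, t, v in obs:
        d2 = (x - qx) ** 2 + (y - qy) ** 2 + (c * (t - qt)) ** 2
        if d2 == 0.0:
            return v                      # query coincides with an observation
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical PM2.5 readings as (x, y, day, ug/m3) around a query point:
obs = [(0, 0, 0, 10.0), (1, 0, 0, 14.0), (0, 0, 1, 12.0), (1, 0, 1, 16.0)]
pm = st_idw(obs, (0.5, 0.0, 0.5), c=1.0)   # -> 13.0 (all four equally distant)
```

With c = 1 space and time are weighted equally, which is the paper's stated assumption; a snapshot approach would instead interpolate each day separately and ignore temporally adjacent readings entirely.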
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) material non-linearity is modelled with non-linear autowaves rather than geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
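For the plain Lagrange case (function values only, no derivative conditions), interpolation at the nth roots of unity reduces to an inverse discrete Fourier transform of the nodal values, a useful reference point for the Hermite-Birkhoff setting discussed above:

```python
import cmath

def interp_roots_of_unity(f, n):
    """Coefficients of the unique polynomial of degree < n that
    interpolates f at the nth roots of unity: an inverse DFT of the
    values of f at the nodes."""
    nodes = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    vals = [f(z) for z in nodes]
    # c_j = (1/n) * sum_k f(w^k) * w^(-jk), the inverse DFT.
    return [sum(vals[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) / n
            for j in range(n)]

# f(z) = 3 + z^2 is recovered exactly from its values at the 4th roots of unity.
c = interp_roots_of_unity(lambda z: 3 + z * z, 4)   # ~ [3, 0, 1, 0]
```

The uniqueness that makes this inverse DFT work is exactly what the Polya conditions generalize when derivative values are prescribed at the nodes as well.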
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
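The LOOCV selection criterion can be sketched for polynomial fits of different orders; the strike/volatility pairs below are hypothetical, and the fit is over strike offsets rather than a full RND pipeline:

```python
import numpy as np

def loocv_error(x, y, degree):
    """Leave-one-out cross-validation: mean squared error when each
    point is predicted from a polynomial fit to the remaining points."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        p = np.polyfit(x[mask], y[mask], degree)
        errs.append((np.polyval(p, x[i]) - y[i]) ** 2)
    return float(np.mean(errs))

# Hypothetical implied-volatility smile, indexed by strike offset from spot:
k = np.array([-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0])
iv = np.array([0.28, 0.24, 0.225, 0.22, 0.222, 0.235, 0.27])

e2 = loocv_error(k, iv, 2)   # second-order polynomial
e4 = loocv_error(k, iv, 4)   # fourth-order polynomial
best = 2 if e2 < e4 else 4   # pick the order with the lower LOOCV error
```

Which order wins depends on the data; the point of LOOCV is that the comparison penalizes overfitting automatically, since every held-out point must be predicted rather than merely fitted.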
Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking.
Leung, Brian; Jeon, Gwanggil; Dubois, Eric
2011-07-01
This paper addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. A luma-chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB ∆E∗ objective quality measures and demosaicking speed. It was found to provide excellent performance and the best quality-speed tradeoff among the methods studied.
Alvermann, A; Fehske, H
2009-04-17
We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% percent sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than that using the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
An efficient interpolation filter VLSI architecture for HEVC standard
NASA Astrophysics Data System (ADS)
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It saves 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
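HEVC's fractional interpolation is built on fixed separable filters; a sketch of the half-sample luma filter (8-tap coefficients as specified in the HEVC standard, applied here in one dimension only, without the standard's intermediate bit-depth handling) is:

```python
# 8-tap half-sample luma interpolation filter from the HEVC standard;
# the coefficients sum to 64, so results are normalized by a shift of 6.
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]

def half_pel(row, x):
    """Half-sample value between row[x] and row[x + 1] (needs 3 samples
    of margin on each side of the interpolated position)."""
    acc = sum(c * row[x - 3 + i] for i, c in enumerate(HALF_PEL))
    return (acc + 32) >> 6     # round and normalize by the 64 filter gain

# A constant row must interpolate to the same constant.
row = [100] * 16
v = half_pel(row, 7)   # -> 100
```

The hardware cost the paper attacks comes from evaluating such multi-tap filters at every fractional position of every prediction block, which is why reusing one data path for both half- and quarter-pixel cases saves so much area.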
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media
2015-09-24
TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensor. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed the diffusion tensors, with the tensor direction represented by a quaternion. Then we revised the size and direction of the tensor separately according to the situation. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on both simulated and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the tensor determinant, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation tool for diffusion tensor image processing.
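Quaternion-based interpolation of tensor orientation rests on spherical linear interpolation (slerp) between unit quaternions. A minimal sketch (illustrative Python, not the paper's algorithm, which additionally revises tensor size):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0 = np.asarray(q0, float)
    q1 = np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0:                 # take the shorter arc on the unit 4-sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: fall back to lerp + renormalise
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)      # angle between the quaternions
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# identity vs. a 90-degree rotation about z; halfway should be 45 degrees
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp(q_id, q_z90, 0.5)
```

Slerp traverses the great circle between the two orientations at constant angular speed, which is why it preserves rotational structure better than componentwise interpolation.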
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving this nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
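Newton's method for a nonlinear system, as used above to solve the recast minimization problem, can be sketched as follows (illustrative Python on a toy 2-by-2 system, not the dissertation's FORTRAN code):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0 with an analytic Jacobian J."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)   # Newton step: solve J dx = -F
    return x

# toy system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Near a root with a nonsingular Jacobian the iteration converges quadratically, which is the efficiency the dissertation exploits.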
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) regression for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
Color quality management in advanced flat panel display engines
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.
2003-01-01
During recent years color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match. The RGB image stored on an Internet server of a retailer did not show the desired colors on a consumer display device or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted for low cost consumer flat panels. Using a new and genuine RGB color space transformation, which combines a gamma correction Look-Up-Table, tetrahedrization, and linear interpolation, we satisfy market demands.
Stokes image reconstruction for two-color microgrid polarization imaging systems.
Lemaster, Daniel A
2011-07-18
The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
Application of artificial neural networks in nonlinear analysis of trusses
NASA Technical Reports Server (NTRS)
Alam, J.; Berke, L.
1991-01-01
A method is developed to incorporate a neural network model, based on the backpropagation algorithm, for material response into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based on linear interpolation of material data is implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
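The comparison scheme based on linear interpolation of material data can be sketched in a few lines (illustrative Python with hypothetical stress-strain samples, not the paper's data):

```python
import numpy as np

# hypothetical stress-strain samples for a nonlinear material
strain = np.array([0.000, 0.001, 0.002, 0.004, 0.008])   # dimensionless
stress = np.array([0.0,   200.0, 380.0, 620.0, 800.0])   # MPa

def stress_at(eps):
    """Piecewise-linear interpolation of the tabulated material response."""
    return np.interp(eps, strain, stress)

mid = stress_at(0.0015)   # halfway between the 200 and 380 MPa samples
```

`np.interp` clamps queries outside the table to the end values, so extrapolation behaviour should be checked separately in a real analysis.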
Turbulence Time Series Data Hole Filling using Karhunen-Loeve and ARIMA methods
2007-01-01
Long memory is represented by higher values of d. An ARIMA(0,d,0) model was applied to predict the behaviour of the final section of the series; this simplified ARIMA(0,d,0) model performed better than the linear interpolant but less effectively than the KL algorithm, disregarding edge effects. (arXiv:physics/0701238v1, 22 Jan 2007: Turbulence Time Series Data Hole Filling using Karhunen-Loève and ARIMA methods.)
Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories
NASA Astrophysics Data System (ADS)
Afonin, S. S.; Solomko, T. D.
2018-04-01
The large-Nc masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. The form of the corrections was proposed some time ago for consistency with the analytical structure of the operator product expansion of the two-point correlation functions. We revised that original analysis and found a second solution of the proposed sum rules. This solution better describes the spectrum of vector and axial mesons.
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN), and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
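Validating interpolators against independent data is commonly done by leave-one-out cross-validation: each station is withheld in turn, predicted from the rest, and the errors are summarised. A sketch (illustrative Python with made-up stations; IDW stands in for any of the five methods):

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0):
    """Vectorised inverse distance weighted interpolation."""
    d = np.linalg.norm(sample_xy[:, None, :] - query_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * sample_z[:, None]).sum(0) / w.sum(0)

def loo_rmse(xy, z, predict):
    """Leave-one-out cross-validation RMSE for a scattered-data interpolator."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i          # withhold station i
        zhat = predict(xy[mask], z[mask], xy[i:i + 1])[0]
        errs.append(zhat - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# hypothetical station coordinates (km) and mean annual temperature (deg C)
xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
t = np.array([8.1, 7.9, 8.4, 8.0, 8.2])
score = loo_rmse(xy, t, idw)
```

Running the same loop for each candidate method and comparing the RMSE (or MAE) values is the usual basis for rankings like the one reported above.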
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
NASA Astrophysics Data System (ADS)
Valverde, Clodoaldo; Baseia, Basílio
2018-01-01
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other types of such Hamiltonians. It works with two interpolating parameters, rather than the traditional one. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
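The Newton form discussed above is built from a divided-difference table and evaluated by Horner-like nesting. A minimal sketch (illustrative Python, using Chebyshev nodes for the classic Runge function as an example):

```python
import numpy as np

def divided_differences(x, y):
    """Coefficients of the Newton form via the in-place divided-difference table."""
    c = np.array(y, float)
    n = len(c)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:n - j])
    return c

def newton_eval(c, x_nodes, t):
    """Evaluate the Newton-form polynomial at t by nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x_nodes[k]) + c[k]
    return p

# Chebyshev nodes on [-1, 1] for f(x) = 1/(1 + 25 x^2) (the Runge example)
n = 11
x = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
y = 1.0 / (1.0 + 25.0 * x ** 2)
c = divided_differences(x, y)
```

As the abstract notes, the numerical stability of this table depends on the ordering of the nodes, so in practice the points are rearranged (e.g. in a Leja-like order) before the coefficients are computed.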
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
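The core of PLACER's approach, linear interpolation from known map features to geographic coordinates, can be sketched as follows (illustrative Python with hypothetical reference features and an axis-aligned map, not the USGS FORTRAN code):

```python
def georeference(p, ref1, ref2):
    """Map a map-sheet coordinate p = (x, y) to (lon, lat) by linear
    interpolation between two reference features with known coordinates.
    Assumes the map axes are aligned with longitude and latitude."""
    (x1, y1), (lon1, lat1) = ref1
    (x2, y2), (lon2, lat2) = ref2
    fx = (p[0] - x1) / (x2 - x1)              # fractional position along x
    fy = (p[1] - y1) / (y2 - y1)              # fractional position along y
    return (lon1 + fx * (lon2 - lon1), lat1 + fy * (lat2 - lat1))

# hypothetical: two features at known map (mm) and geographic coordinates
ref_a = ((10.0, 10.0), (-71.50, 42.00))
ref_b = ((110.0, 210.0), (-71.00, 42.40))
site = georeference((60.0, 110.0), ref_a, ref_b)   # midway along both axes
```

The uncertainty PLACER reports would come from propagating the plotting and digitising errors of `p`, `ref_a`, and `ref_b` through these same linear expressions.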
NASA Astrophysics Data System (ADS)
Li, Xing-Wang; Bai, Chao-Ying; Yue, Xiao-Peng; Greenhalgh, Stewart
2018-02-01
To overcome a major problem in current ray tracing methods, which are only capable of tracing first arrivals, and occasionally primary reflections (or mode conversions), in regular cell models, we extend in this paper the multistage triangular shortest-path method (SPM) to 3D tilted transversely isotropic (TTI) anisotropic media. The proposed method is capable of tracking multi-phase arrivals composed of any combination of transmissions, mode conversions, and reflections. In the model parameterization, five elastic parameters, plus two angles defining the tilted symmetry axis of the TTI medium, are sampled at the primary nodes of each tetrahedral cell, and velocity values at secondary node positions are linked to the primary node velocity values of the tetrahedral cell by a tri-linear velocity interpolation function, from which the group velocities of the three wave modes (qP, qSV and qSH) are computed. The multistage triangular SPM is used to track multi-phase arrivals. A uniform anisotropic test indicates that the numerical solution agrees well with the analytic solution, verifying the accuracy of the methodology. Several simulations and comparisons for heterogeneous models show that the proposed algorithm efficiently and accurately approximates undulating surface topography and irregular subsurface velocity discontinuities. It is suitable for any combination of multi-phase arrival tracking in TTI media of arbitrary tilt angle and can accommodate any magnitude of anisotropy.
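Tri-linear interpolation of a nodal field can be sketched on a regular hexahedral unit cell (illustrative Python; the paper applies the interpolation within tetrahedral cells, so this only shows the tri-linear weighting idea):

```python
import numpy as np

def trilinear(c, u, v, w):
    """Trilinear interpolation inside a unit cell.
    c[i, j, k] holds the nodal value at corner (i, j, k), with i, j, k in {0, 1};
    (u, v, w) are local coordinates in [0, 1]^3."""
    c = np.asarray(c, float)
    c00 = c[0, 0, :] * (1 - u) + c[1, 0, :] * u   # interpolate along the u axis
    c01 = c[0, 1, :] * (1 - u) + c[1, 1, :] * u
    c0 = c00 * (1 - v) + c01 * v                  # then along the v axis
    return c0[0] * (1 - w) + c0[1] * w            # finally along the w axis

# corner velocities (arbitrary units) of a cell, numbered 0..7
corners = np.arange(8.0).reshape(2, 2, 2)
centre = trilinear(corners, 0.5, 0.5, 0.5)   # cell centre -> mean of the corners
```

The weights reduce to the corner value whenever (u, v, w) hits a corner, so the interpolant is continuous across shared cell faces.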
NASA Astrophysics Data System (ADS)
Evans, Aaron H.
Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. Disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image, and 2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress 2) Disney Wilderness 3) Everglades 4) near Gainesville, FL. 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in calibration and assuming constant atmospheric conductance significantly improved results for all but Big Cypress and Gainesville.
Evaporative fraction is not very sensitive to instantaneous available energy but it is sensitive to temperature when wet pixels are included because temperature is required for estimating wet pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights in their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we choose to use direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge to overcome. In the end we use a combination of deformation methods, such as deformation textures, in order to fit a normal torus onto the deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL.
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, T; Koo, T
Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity-modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with detector-to-detector distance larger than 1 mm were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest-neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps of the same resolution, the cubic spline and bicubic interpolations are almost equally the best interpolation methods, while nearest-neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65%, and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest-neighbor interpolations, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97%, and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest-neighbor interpolations, respectively.
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
Gerdel, Katharina; Spielmann, Felix Maximilian; Hammerle, Albin; Wohlfahrt, Georg
2017-01-01
The trace gas carbonyl sulphide (COS) has lately received growing interest in the eddy covariance (EC) community due to its potential to serve as an independent approach for constraining gross primary production and canopy stomatal conductance. Thanks to recent developments of fast-response high-precision trace gas analysers (e.g. quantum cascade laser absorption spectrometers (QCLAS)), a handful of EC COS flux measurements have been published since 2013. To date, however, a thorough methodological characterisation of QCLAS with regard to the requirements of the EC technique and the necessary processing steps has not been conducted. The objective of this study is to present a detailed characterization of the COS measurement with the Aerodyne QCLAS in the context of the EC technique, and to recommend best EC processing practices for those measurements. Data were collected from May to October 2015 at a temperate mountain grassland in Tyrol, Austria. Analysis of the Allan variance of high-frequency concentration measurements revealed sensor drift to occur under field conditions after an averaging time of around 50 s. We thus explored the use of two high-pass filtering approaches (linear detrending and recursive filtering) as opposed to block averaging and linear interpolation of regular background measurements for covariance computation. Experimental low-pass filtering correction factors were derived from a detailed cospectral analysis. The CO2 and H2O flux measurements obtained with the QCLAS were compared against those obtained with a closed-path infrared gas analyser. Overall, our results suggest small, but systematic differences between the various high-pass filtering scenarios with regard to the fraction of data retained in the quality control and flux magnitudes. 
When COS and CO2 fluxes are combined in the so-called ecosystem relative uptake rate, systematic differences between the high-pass filtering scenarios largely cancel out, suggesting that this relative metric represents a robust key parameter comparable between studies relying on different post-processing schemes. PMID:29093762
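The "linear detrending" high-pass filtering explored above can be sketched as follows (illustrative Python on synthetic series with an artificial linear drift, not the authors' processing chain):

```python
import numpy as np

def detrended_covariance(w, c, dt=0.1):
    """Covariance of vertical wind w and scalar concentration c after removing
    a least-squares linear trend from each series (linear detrending)."""
    t = np.arange(len(w)) * dt
    w_d = w - np.polyval(np.polyfit(t, w, 1), t)   # remove linear trend from w
    c_d = c - np.polyval(np.polyfit(t, c, 1), t)   # remove drift from c
    return np.mean(w_d * c_d)

# synthetic 30-min, 10 Hz series: a shared fluctuation plus a slow drift in c
rng = np.random.default_rng(0)
n = 18000
s = rng.standard_normal(n)
w = s + rng.standard_normal(n)
c = 0.5 * s + 1e-4 * np.arange(n)   # the drift would bias a plain covariance
cov = detrended_covariance(w, c)    # close to the true covariance of 0.5
```

Because the simulated drift is exactly linear it is removed entirely; a recursive filter, the other approach compared above, would instead attenuate all low frequencies with some leakage into the flux-carrying band.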
New calibration technique for KCD-based megavoltage imaging
NASA Astrophysics Data System (ADS)
Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.
1999-05-01
In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging was demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing superior images to be reconstructed from a single linac pulse per image pixel. The KCD-based images used a dose two orders of magnitude lower than that for EPIDs and film. Compared with current commercial EPIDs and film, the prototype KCD system exhibited promising image quality, despite being handicapped by the use of a relatively simple image calibration technique and by the limits that medical linacs place on the maximum pulse frequency and energy flux per pulse. This calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations; its counterpart for area detectors is the standard flat-fielding method. A comprehensive calibration protocol has now been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and for timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2-3%) is shown to be insignificant for the images considered. The methodology is presented, and the results are discussed for simulated images. The protocol also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose over multiple images without having to increase the linac pulse frequency or energy flux per pulse.
The application of this protocol to a KCD system under construction is expected shortly.
NASA Astrophysics Data System (ADS)
Gerdel, Katharina; Spielmann, Felix Maximilian; Hammerle, Albin; Wohlfahrt, Georg
2017-09-01
The trace gas carbonyl sulfide (COS) has lately received growing interest from the eddy covariance (EC) community due to its potential to serve as an independent approach for constraining gross primary production and canopy stomatal conductance. Thanks to recent developments of fast-response high-precision trace gas analysers (e.g. quantum cascade laser absorption spectrometers, QCLAS), a handful of EC COS flux measurements have been published since 2013. To date, however, a thorough methodological characterisation of QCLAS with regard to the requirements of the EC technique and the necessary processing steps has not been conducted. The objective of this study is to present a detailed characterisation of the COS measurement with the Aerodyne QCLAS in the context of the EC technique and to recommend best EC processing practices for those measurements. Data were collected from May to October 2015 at a temperate mountain grassland in Tyrol, Austria. Analysis of the Allan variance of high-frequency concentration measurements revealed the occurrence of sensor drift under field conditions after an averaging time of around 50 s. We thus explored the use of two high-pass filtering approaches (linear detrending and recursive filtering) as opposed to block averaging and linear interpolation of regular background measurements for covariance computation. Experimental low-pass filtering correction factors were derived from a detailed cospectral analysis. The CO2 and H2O flux measurements obtained with the QCLAS were compared with those obtained with a closed-path infrared gas analyser. Overall, our results suggest small, but systematic differences between the various high-pass filtering scenarios with regard to the fraction of data retained in the quality control and flux magnitudes. 
When COS and CO2 fluxes are combined in the ecosystem relative uptake rate, systematic differences between the high-pass filtering scenarios largely cancel out, suggesting that this relative metric represents a robust key parameter comparable between studies relying on different post-processing schemes.
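The drift-removal step described in this abstract can be illustrated with a minimal sketch: a single-pole recursive high-pass filter (one of the two high-pass filtering approaches mentioned, alongside linear detrending) applied to a scalar concentration series before the covariance with vertical wind is computed. The function names and the specific filter form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def recursive_highpass(x, tau, dt):
    """Single-pole recursive (RC-type) high-pass filter.

    Removes slow sensor drift from a concentration time series before
    covariance computation. tau is the filter time constant, dt the
    sampling interval (same units).
    """
    alpha = tau / (tau + dt)
    y = np.empty(len(x), dtype=float)
    y[0] = 0.0
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

def eddy_flux(w, c):
    """Eddy covariance flux: covariance of vertical wind w and scalar c."""
    return np.mean((w - w.mean()) * (c - c.mean()))
```

A slow linear drift is strongly attenuated by the filter, while fluctuations faster than the time constant pass through and contribute to the covariance.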
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in downstream applications such as hydrological or hydraulic models. Analysis of the spatial relation of empirical rainfall distribution functions reveals a persistent order of the quantile values over a wide range of non-exceedance probabilities. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values across gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also correlate strongly with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values ensure a monotonically increasing distribution function, the use of geostatistical methods such as kriging is problematic; in particular, kriging with external drift cannot be employed to incorporate the secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
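The persistent-order assumption described above — a single set of non-negative interpolation weights shared across all non-exceedance probabilities — can be sketched as follows. Non-negative weights summing to one guarantee that the interpolated quantile function is again monotonically non-decreasing, which is exactly the property that rules out methods such as kriging with external drift. The function and array names are hypothetical; the actual random mixing procedure is considerably more involved.

```python
import numpy as np

def interpolate_quantiles(station_quantiles, weights):
    """Interpolate a quantile function at an ungauged site.

    station_quantiles: (n_stations, n_probs) array of quantile values,
        all stations evaluated at the same non-exceedance probabilities.
    weights: (n_stations,) non-negative weights summing to one, shared
        across all probabilities (the persistent-order assumption).

    Because the weights are non-negative, a convex combination of
    non-decreasing quantile functions is itself non-decreasing.
    """
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must be non-negative and sum to one")
    return w @ np.asarray(station_quantiles, dtype=float)
```

With negative weights (as kriging can produce), the interpolated quantile function could decrease, i.e. cease to be a valid distribution function.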
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
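The identification step described in this abstract — realizing a state-space model from Markov parameters — can be sketched with a minimal SISO version of the Eigensystem Realization Algorithm: stack the Markov parameters into block Hankel matrices, take a rank-n SVD, and form a balanced discrete-time realization. This is the standard textbook form, offered as an illustrative sketch rather than the code used in the paper.

```python
import numpy as np

def era(markov, n_states, p=None, q=None):
    """Minimal SISO Eigensystem Realization Algorithm sketch.

    markov: sequence of Markov parameters Y_1, Y_2, ... (Y_k = C A^{k-1} B).
    Returns a discrete-time realization (A, B, C) of order n_states.
    """
    m = np.asarray(markov, dtype=float)
    if p is None:
        p = (len(m) - 1) // 2          # Hankel block rows
    q = q or p                          # Hankel block columns
    # H0[i, j] = Y_{i+j+1},  H1[i, j] = Y_{i+j+2} (shifted Hankel matrix)
    H0 = np.array([[m[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[m[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states]
    S_half = np.diag(np.sqrt(s))
    S_inv = np.diag(1.0 / np.sqrt(s))
    A = S_inv @ U.T @ H1 @ Vt.T @ S_inv   # state matrix
    B = (S_half @ Vt)[:, :1]              # first column of controllability factor
    C = (U @ S_half)[:1, :]               # first row of observability factor
    return A, B, C
```

Given the realization, an LQG design (as in the paper) would then proceed on (A, B, C) with standard Riccati-based tools.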
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
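The two filtering modes described above — 1-D directional filtering along the CDM-indicated direction for edge pixels, versus 2-D isotropic filtering for non-edge pixels — can be illustrated for the first (diagonal) interpolation stage. The helper below is a simplified sketch with two-tap and four-tap averages; the paper's actual filters and the CDM generation via diffusion are more elaborate.

```python
import numpy as np

def estimate_diagonal_pixel(lr, i, j, direction=None):
    """Estimate the missing high-resolution pixel centred between the
    four low-resolution neighbours lr[i, j], lr[i, j+1], lr[i+1, j],
    lr[i+1, j+1].

    direction: '45'  -> average along the 45-degree diagonal
               '135' -> average along the 135-degree diagonal
               None  -> isotropic four-pixel average (non-edge case)
    """
    if direction == '45':    # bottom-left to top-right
        return 0.5 * (lr[i + 1, j] + lr[i, j + 1])
    if direction == '135':   # top-left to bottom-right
        return 0.5 * (lr[i, j] + lr[i + 1, j + 1])
    return 0.25 * (lr[i, j] + lr[i, j + 1] + lr[i + 1, j] + lr[i + 1, j + 1])
```

On a checkerboard-like edge, interpolating along the correct diagonal preserves the edge contrast, whereas the isotropic average blurs it — which is precisely why the CDMs choose the filter per pixel.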
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
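The extraction step described above — a chi-square minimization in which the free parameters are the spectral bin contents — reduces, for Gaussian pixel noise, to a weighted linear least-squares problem, since the fast simulator's prediction is linear in the bin contents. The sketch below assumes a precomputed response matrix whose columns are the detector images of unit flux in each bin; the names are illustrative, not the experiment's software.

```python
import numpy as np

def extract_spectrum(pixels, response, sigma):
    """Chi-square spectrum extraction sketch.

    pixels:   observed pixel values, shape (n_pix,)
    response: (n_pix, n_bins) matrix; column k is the detector image of
              unit flux in spectral bin k (tabulated by the fast simulator)
    sigma:    per-pixel noise; chi^2 = sum_i ((d_i - (R f)_i) / sigma_i)^2

    Minimizing chi^2 over the bin contents f is weighted linear least
    squares: scale rows by 1/sigma and solve in the least-squares sense.
    """
    w = 1.0 / np.asarray(sigma, dtype=float)
    f, *_ = np.linalg.lstsq(response * w[:, None],
                            np.asarray(pixels, dtype=float) * w,
                            rcond=None)
    return f
```

Combining the visible and infrared arms, as the abstract describes, amounts to stacking their pixels and response rows into one such system before solving.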