Sample records for interpolant response surface

  1. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
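
    The moving least squares idea in this record can be sketched compactly: at each query point, fit a low-order polynomial by least squares with weights that decay with distance, and read off both the value and the derivative of the local fit. A minimal 1-D sketch; the synthetic sine data, the Gaussian weight, and the bandwidth h are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def mls_fit(x_query, x_data, y_data, h=0.5):
    """Moving least squares: weighted linear fit a0 + a1*(x - x_query),
    with Gaussian weights that decay with distance from the query point.
    Returns the local value a0 and the derivative estimate a1."""
    w = np.exp(-((x_data - x_query) / h) ** 2)
    B = np.column_stack([np.ones_like(x_data), x_data - x_query])
    A = B.T @ (w[:, None] * B)          # weighted normal equations
    b = B.T @ (w * y_data)
    a0, a1 = np.linalg.solve(A, b)
    return a0, a1

x = np.linspace(0.0, 2.0 * np.pi, 15)
y = np.sin(x)
value, deriv = mls_fit(1.0, x, y)       # approximates sin(1) and cos(1)
```

    Because the fit is recomputed at every query point, the MLS surface is smooth but does not in general pass through the data, which is what motivates the interpolating (IMLS) variant mentioned in the abstract.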

  2. Data-Driven Haptic Modeling and Rendering of Viscoelastic and Frictional Responses of Deformable Objects.

    PubMed

    Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon

    2016-01-01

    In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.

  3. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.

  4. Estimation of water surface elevations for the Everglades, Florida

    USGS Publications Warehouse

    Palaseanu, Monica; Pearlstine, Leonard

    2008-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring gages and modeling methods that provides scientists and managers with current (2000–present) online water surface and water depth information for the freshwater domain of the Greater Everglades. This integrated system presents data on a 400-m square grid to assist in (1) large-scale field operations; (2) integration of hydrologic and ecologic responses; (3) supporting biological and ecological assessment of the implementation of the Comprehensive Everglades Restoration Plan (CERP); and (4) assessing trophic-level responses to hydrodynamic changes in the Everglades. This paper investigates the radial basis function multiquadric method of interpolation to obtain a continuous freshwater surface across the entire Everglades using radio-transmitted data from a network of water-level gages managed by the US Geological Survey (USGS), the South Florida Water Management District (SFWMD), and the Everglades National Park (ENP). Since the hydrological connection is interrupted by canals and levees across the study area, boundary conditions were simulated by linearly interpolating along those features and integrating the results together with the data from marsh stations to obtain a continuous water surface through multiquadric interpolation. The absolute cross-validation errors greater than 5 cm correlate well with the local outliers and the minimum distance between the closest stations within 2000-m radius, but seem to be independent of vegetation or season.
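
    The multiquadric method named in this abstract solves one dense linear system at the gage locations and then evaluates a weighted sum of kernels anywhere on the grid. A minimal sketch with synthetic stations; the shape parameter c and the fake stage data are assumptions for illustration only:

```python
import numpy as np

def multiquadric_interpolate(xy, z, xy_new, c=1.0):
    """Interpolate scattered 2-D data with Hardy's multiquadric
    phi(r) = sqrt(r^2 + c^2): solve Phi @ w = z at the data sites,
    then evaluate sum_j w_j * phi(|x - x_j|) at the new points."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    w = np.linalg.solve(np.sqrt(d**2 + c**2), z)
    d_new = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=-1)
    return np.sqrt(d_new**2 + c**2) @ w

rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 10.0, size=(40, 2))              # stand-ins for gage locations
stage = 1.0 + 0.05*sites[:, 0] - 0.02*sites[:, 1]         # synthetic water levels
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
z_hat = multiquadric_interpolate(sites, stage, grid)
```

    Unlike a least-squares surface, the multiquadric interpolant reproduces the observed stages exactly at the gage sites, which is the property EDEN relies on when blending marsh stations with simulated boundary values.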

  5. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  6. Constructing polyatomic potential energy surfaces by interpolating diabatic Hamiltonian matrices with demonstration on green fluorescent protein chromophore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology

    2014-04-28

    Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited state dynamics is aimed with multiple electronic states. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surrounding. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with an intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives on each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. With these, we conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable for studying nonadiabatic behaviors of sizeable molecules.

  7. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction method is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First the selected methods are applied and compared with one another in a two-design variables problem with a known theoretical response function. Next the methods are tested in a four-design variables problem from a reliability-based design application. In general the piecewise polynomial with higher order derivative continuity methods produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.

  8. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. Lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a given survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.

  9. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as physically reasonable and visually pleasing is examined.
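
    A shape-preserving alternative to bicubic splines, in the spirit of this abstract, is monotone piecewise-cubic (Fritsch-Carlson) interpolation: choose the Hermite slopes so the interpolant never overshoots the data. A 1-D sketch; the simple one-sided end slopes are a simplification of the usual boundary rule:

```python
def pchip_slopes(x, y):
    """Shape-preserving slopes (Fritsch-Carlson): weighted harmonic-mean
    rule, with a zero slope wherever the data change direction or are
    flat, so the piecewise cubic never overshoots the data."""
    n = len(x)
    delta = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = delta[0], delta[-1]   # simplified end conditions
    for i in range(1, n - 1):
        if delta[i-1] * delta[i] > 0:
            w1 = 2*(x[i+1] - x[i]) + (x[i] - x[i-1])
            w2 = (x[i+1] - x[i]) + 2*(x[i] - x[i-1])
            m[i] = (w1 + w2) / (w1/delta[i-1] + w2/delta[i])
    return m

def pchip_eval(x, y, m, xq):
    """Evaluate the cubic Hermite interpolant at a point xq."""
    i = max(j for j in range(len(x) - 1) if x[j] <= xq) if xq > x[0] else 0
    h = x[i+1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2*t) * (1 - t)**2
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

# Monotone data with a flat shoulder: a plain cubic spline would wiggle here.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.0, 0.1, 1.0, 1.0]
ms = pchip_slopes(xs, ys)
```

    On the flat segments the slopes are forced to zero, so the interpolant stays constant there instead of producing the bumps the abstract warns about.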

  10. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of each interpolation point in the local interpolating region.

  11. Spatially continuous interpolation of water stage and water depths using the Everglades depth estimation network (EDEN)

    USGS Publications Warehouse

    Pearlstine, Leonard; Higer, Aaron; Palaseanu, Monica; Fujisaki, Ikuko; Mazzotti, Frank

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a 400-m square grid spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades.

  12. NTS radiological assessment project: comparison of delta-surface interpolation with kriging for the Frenchman Lake region of area 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, T.A. Jr.

    The primary objective of this report is to compare the results of delta surface interpolation with kriging on four large sets of radiological data sampled in the Frenchman Lake region at the Nevada Test Site. The results of kriging, described in Barnes, Giacomini, Reiman, and Elliott, are very similar to those using the delta surface interpolant. The other topic studied is in reducing the number of sample points and obtaining results similar to those using all of the data. The positive results here suggest that great savings of time and money can be made. Furthermore, the delta surface interpolant is viewed as a contour map and as a three dimensional surface. These graphical representations help in the analysis of the large sets of radiological data.

  13. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces; often a covariate such as elevation (from a Digital Elevation Model, DEM) is used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular interpolation techniques to generate climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures.
A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas but climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra Nevada Mountains.

  14. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  15. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by way of flat-cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between object and image, panoramic imaging systems play important roles in robot vision, surveillance, and virtual reality applications. An annular image needs to be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm that takes the characteristics of the panoramic image into account. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is shortened by 78% compared with cubic spline interpolation.

  16. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamic solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver and previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.

  17. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including comparisons between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly (by a factor of more than 10) depending on the chosen interpolation method. Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
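
    The six error statistics used in the study are all one-liners over the cross-validation residuals; RMSSE additionally needs the prediction standard errors that kriging reports. A sketch, with function and argument names of my own choosing:

```python
import numpy as np

def error_stats(observed, predicted, stderr=None):
    """Cross-validation error measures: ME, MAE, MAPE, RMSE, Pearson R,
    and (when kriging standard errors are supplied) RMSSE."""
    obs = np.asarray(observed, float)
    pred = np.asarray(predicted, float)
    resid = pred - obs
    stats = {
        "ME":   resid.mean(),                        # mean error (bias)
        "MAE":  np.abs(resid).mean(),                # mean absolute error
        "MAPE": np.abs(resid / obs).mean() * 100.0,  # mean absolute % error
        "RMSE": np.sqrt((resid**2).mean()),          # root mean square error
        "Pearson R": np.corrcoef(obs, pred)[0, 1],
    }
    if stderr is not None:
        # Standardized residuals: RMSSE near 1 means the kriging
        # variance model is consistent with the actual errors.
        stats["RMSSE"] = np.sqrt(((resid / np.asarray(stderr, float))**2).mean())
    return stats

s = error_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.0])
```

    Note that ME can be near zero while MAE and RMSE are large, which is why the abstract argues for consulting several measures before trusting an interpolated surface.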

  18. Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.

    PubMed

    Prochazka, Ivan; Panek, Petr

    2009-07-01

    A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present the experiments and results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series; thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.

  19. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  20. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, that is noisy, and that contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor: fast and easy but shows some artifacts in shaded relief images. (b) Simplical interpolator: uses the plane through the three points surrounding the point where interpolation is required; reasonably fast and accurate. (c) Convolutional: uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting: uses the height data centered in a box about a given point and does a weighted least squares surface fit.
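
    Option (d), the first-order surface fit, can be sketched as a weighted least-squares plane through the samples near the target point, evaluated at that point. The neighbourhood radius and the inverse-distance weights below are illustrative choices of mine, not the paper's:

```python
import numpy as np

def plane_fit_height(x, y, z, xq, yq, radius=1.0):
    """First-order surface fit: weighted least-squares plane through the
    scattered samples within `radius` of (xq, yq), evaluated at (xq, yq)."""
    d2 = (x - xq)**2 + (y - yq)**2
    keep = d2 <= radius**2
    w = 1.0 / (1.0 + d2[keep])          # simple inverse-distance weights (an assumption)
    # Basis [1, dx, dy] centered on the query point, so the fitted height
    # at (xq, yq) is just the constant coefficient.
    B = np.column_stack([np.ones(keep.sum()), x[keep] - xq, y[keep] - yq])
    sw = np.sqrt(w)
    a, *_ = np.linalg.lstsq(B * sw[:, None], z[keep] * sw, rcond=None)
    return a[0]

rng = np.random.default_rng(3)
xs = rng.uniform(0.0, 1.0, 100)
ys = rng.uniform(0.0, 1.0, 100)
zs = 2.0 + 3.0*xs - ys                  # noiseless planar test data
h = plane_fit_height(xs, ys, zs, 0.5, 0.5)
```

    With noisy interferometric heights the same weighted fit averages the noise down, at the cost of smoothing sharp relief, which is the trade-off the paper compares against the nearest-neighbor and convolutional options.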

  1. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
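
    The polyharmonic-spline building block of this scheme (without the partition-of-unity blending or the adaptive node refinement) can be sketched in 2-D: an r^3 kernel augmented with a linear polynomial so the interpolation system is well posed. The node counts and test function below are illustrative assumptions:

```python
import numpy as np

def phs_interpolate(x, f, x_new):
    """Polyharmonic spline interpolation in 2-D with kernel phi(r) = r^3,
    augmented with a linear polynomial term.  Solves the standard
    saddle-point system [[Phi, P], [P^T, 0]] for kernel weights w and
    polynomial coefficients c, then evaluates at x_new."""
    n = x.shape[0]
    r = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    P = np.column_stack([np.ones(n), x])               # basis [1, x1, x2]
    A = np.block([[r**3, P], [P.T, np.zeros((3, 3))]])
    coeffs = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    w, c = coeffs[:n], coeffs[n:]
    r_new = np.linalg.norm(x_new[:, None] - x[None, :], axis=-1)
    return r_new**3 @ w + np.column_stack([np.ones(len(x_new)), x_new]) @ c

rng = np.random.default_rng(2)
nodes = rng.uniform(0.0, 1.0, (30, 2))                 # sample points on the surface
f = 1.0 + 2.0*nodes[:, 0] + 3.0*nodes[:, 1]            # a linear test "energy"
queries = rng.uniform(0.0, 1.0, (5, 2))
vals = phs_interpolate(nodes, f, queries)
```

    The linear augmentation guarantees that linear functions are reproduced exactly; the paper's contribution is wrapping many such local interpolants in a partition of unity and refining nodes where a local error estimate is large.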

  2. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    Radial Basis Functions φ(r): Piecewise Smooth (Conditionally Positive Definite): MN monomial |r|^(2m+1); TPS thin plate spline |r|^(2m) ln|r|; Infinitely Smooth … smooth surfaces using polynomial interpolants, while [27] couples Thin-Plate Spline interpolation (see table 1) with Green's integral formula [29]

  3. Estimation of lunar surface maturity and ferrous oxide from Moon Mineralogy Mapper (M3) data through data interpolation techniques

    NASA Astrophysics Data System (ADS)

    Ajith Kumar, P.; Kumar, Shashi

    2016-04-01

    Surface maturity estimation of the lunar regolith reveals the selenological processes behind the formation of the lunar surface, which may provide vital information on the geological evolution of the Earth, because the lunar surface is considered to be 8-9 times older than that of the Earth. Spectral reflectance data from the Moon Mineralogy Mapper (M3), the hyperspectral sensor of Chandrayaan-1, were coupled with the standard FeO weight percentages from lunar samples returned by the Apollo and Luna missions, through data interpolation techniques, to generate weight-percentage FeO maps of the target lunar locations. Mineral maps were then prepared from the interpolated data and the results analyzed.

  4. A Critical Comparison of Some Methods for Interpolation of Scattered Data

    DTIC Science & Technology

    1979-12-01

    … because faster evaluation of the local interpolants is possible. All things considered, the method of choice here seems to be the Modified Quadratic … topography and other irregular surfaces," J. of Geophysical Research 76 (1971) 1905-1915. [23] HARDY, Rolland L., "Analytical topographic surfaces by …

  5. An adaptive interpolation scheme for molecular potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  6. Development and evaluation of a new 3-D digitization and computer graphic system to study the anatomic tissue and restoration surfaces.

    PubMed

    Dastane, A; Vaidyanathan, T K; Vaidyanathan, J; Mehra, R; Hesby, R

    1996-01-01

    It is necessary to visualize and reconstruct tissue anatomic surfaces accurately for a variety of oral rehabilitation applications, such as surface wear characterization, automated fabrication of dental restorations, and assessing the accuracy of reproduction of impression and die materials. In this investigation, a 3-D digitization and computer-graphic system was developed for surface characterization. The hardware consists of a profiler assembly for digitization in an MTS biomechanical test system with an artificial mouth, an IBM PS/2 computer model 70 for data processing and a Hewlett-Packard laser printer for hardcopy outputs. The software used includes a commercially available Surfer 3-D graphics package, a public domain data-fitting alignment software and an in-house Pascal program for intercommunication plus some other limited tasks. Surfaces were digitized before and after rotation by angular displacement, the digital data were interpolated by Surfer to provide a data grid, and the surfaces were computer-graphically reconstructed. Misaligned surfaces were aligned by the data-fitting alignment software under different choices of parameters. The effect of different interpolation parameters (e.g. grid size, method of interpolation) and extent of rotation on the alignment accuracy was determined. The results indicate that improved alignment accuracy results from optimization of interpolation parameters and minimization of the initial misorientation between the digitized surfaces. The method provides important advantages for surface reconstruction and visualization, such as overlay of sequentially generated surfaces and accurate alignment of pairs of surfaces with small misalignment.

  7. The Effect of Elevation Bias in Interpolated Air Temperature Data Sets on Surface Warming in China During 1951-2015

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin

    2018-02-01

    Although gridded air temperature data sets share much of the same observations, different rates of warming can be detected due to the different approaches employed for considering elevation signatures in the interpolation process. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming-hiatus period in China during 1951-2015. A suspicious cooling trend in the raw interpolated air temperature time series is found in the 1950s, 91% of which can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference to elevation difference as the bulk lapse rate of -5.6°C/km, which tends to be higher (-8.7°C/km) in dry regions but lower (-2.4°C/km) in wet regions. Compared with independent experimental observations, we find that the estimated monthly bulk lapse rates capture the elevation bias well. Significant improvement can be achieved by adjusting the interpolated original temperature time series using the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature in the interpolation of site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
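The bulk-lapse-rate correction described above can be sketched as follows; the station temperature and elevations are hypothetical, with the paper's overall slope of -5.6°C/km used as the default:

```python
def adjust_for_elevation(t_interp, z_interp_km, z_true_km, gamma=-5.6):
    """Remove the artificial elevation signal from an interpolated temperature:
    T_adj = T_interp + gamma * (z_true - z_interp), gamma in degC/km."""
    return t_interp + gamma * (z_true_km - z_interp_km)

# a grid cell whose station-weighted elevation (0.4 km) underestimates the
# cell's true elevation (1.2 km) carries a warm bias of about 4.5 degC
t = adjust_for_elevation(15.0, 0.4, 1.2)
print(round(t, 2))  # 10.52
```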

  8. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
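A sketch of arc-length-based point distribution along a parametric curve, the basic operation behind the grid distribution above; the sampling resolution and the quarter-circle test curve are illustrative choices, not from the report:

```python
import math
from bisect import bisect_left

def arclength_distribute(curve, n_pts, n_samples=2000):
    """Return n_pts parameter values whose points are equally spaced in arc length."""
    ts = [i / n_samples for i in range(n_samples + 1)]
    xs = [curve(t) for t in ts]
    s = [0.0]                                  # cumulative chord-length table
    for a, b in zip(xs, xs[1:]):
        s.append(s[-1] + math.dist(a, b))
    total = s[-1]
    out = []
    for k in range(n_pts):
        target = total * k / (n_pts - 1)
        i = min(bisect_left(s, target), n_samples)
        if i == 0:
            out.append(0.0)
        else:                                  # invert the table by linear interpolation
            f = (target - s[i - 1]) / (s[i] - s[i - 1])
            out.append(ts[i - 1] + f * (ts[i] - ts[i - 1]))
    return out

# quarter circle: equal arc lengths correspond to equal parameter steps
curve = lambda t: (math.cos(t * math.pi / 2), math.sin(t * math.pi / 2))
print([round(t, 3) for t in arclength_distribute(curve, 5)])
# -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

A specified (non-uniform) arc-length distribution is obtained by replacing the uniform `target` values with the desired ones.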

  9. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often fail to faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1 continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying the accuracy and continuity requirements, without being overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar data. Experimental results for the new surface are given.
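One standard way to evaluate a quadratic polynomial over a triangular domain is the barycentric Bernstein-Bézier form; this sketch shows evaluation only (the paper's patch fitting by minimizing objective functions is not reproduced, and the coefficients are illustrative):

```python
from math import factorial

def quadratic_triangle_patch(coeffs, lam):
    """Evaluate sum_{i+j+k=2} b_ijk * 2!/(i! j! k!) * l1^i l2^j l3^k
    at barycentric coordinates lam = (l1, l2, l3), l1+l2+l3 = 1."""
    l1, l2, l3 = lam
    val = 0.0
    for (i, j, k), b in coeffs.items():
        basis = (2 / (factorial(i) * factorial(j) * factorial(k))) * l1**i * l2**j * l3**k
        val += b * basis
    return val

# with all control coefficients equal to 1, the Bernstein basis's
# partition-of-unity property makes the patch identically 1
ones = {(2,0,0): 1, (0,2,0): 1, (0,0,2): 1, (1,1,0): 1, (1,0,1): 1, (0,1,1): 1}
print(round(quadratic_triangle_patch(ones, (0.2, 0.3, 0.5)), 12))  # 1.0
```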

  10. Relationships between response surfaces for tablet characteristics of placebo and API-containing tablets manufactured by direct compression method.

    PubMed

    Hayashi, Yoshihiro; Tsuji, Takahiro; Shirotori, Kaede; Oishi, Takuya; Kosugi, Atsushi; Kumada, Shungo; Hirai, Daijiro; Takayama, Kozo; Onuki, Yoshinori

    2017-10-30

    In this study, we evaluated the correlation between the response surfaces for the tablet characteristics of placebo and active pharmaceutical ingredient (API)-containing tablets. The quantities of lactose, cornstarch, and microcrystalline cellulose were chosen as the formulation factors. Ten tablet formulations were prepared. The tensile strength (TS) and disintegration time (DT) of tablets were measured as tablet characteristics. The response surfaces for TS and DT were estimated using a nonlinear response surface method incorporating multivariate spline interpolation, and were then compared with those of placebo tablets. A correlation was clearly observed for TS and DT of all APIs, although the value of the response surfaces for TS and DT was highly dependent on the type of API used. Based on this knowledge, the response surfaces for TS and DT of API-containing tablets were predicted from only two and four formulations using regression expression and placebo tablet data, respectively. The results from the evaluation of prediction accuracy showed that this method accurately predicted TS and DT, suggesting that it could construct a reliable response surface for TS and DT with a small number of samples. This technique assists in the effective estimation of the relationships between design variables and pharmaceutical responses during pharmaceutical development. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation

    NASA Astrophysics Data System (ADS)

    Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel

    2017-09-01

    Optical freeform surfaces are very popular today in such fields as lighting systems, sensors, photovoltaic concentrators, and others. Such surfaces make it possible to obtain systems of a new quality with fewer optical components, ensuring attractive consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using computer simulation and cubic spline interpolation.

  12. LADAR Range Image Interpolation Exploiting Pulse Width Expansion

    DTIC Science & Technology

    2012-03-22

    normal to each other. The LADAR model needs to include the complete BRDF model covered in Section 2.1.3, which includes speckle reflection as well as...the gradient of a surface. This study estimates the gradient of the surface of an object from a modeled LADAR return pulse that includes accurate...probabilistic noise models. The range and surface gradient estimations are incorporated into a novel interpolator that facilitates an effective three

  13. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).
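The 1D Lagrange-type functions at the heart of such schemes can be sketched directly; the Smolyak sparse-grid combination and the SOP rewriting are beyond this snippet, and the nodes and target function are hypothetical:

```python
def lagrange_interp(xs, fs, x):
    """Evaluate the 1-D Lagrange interpolant through points (xs[j], fs[j]) at x."""
    total = 0.0
    for j in range(len(xs)):
        lj = 1.0                          # cardinal function L_j(x), L_j(xs[m]) = delta_jm
        for m in range(len(xs)):
            if m != j:
                lj *= (x - xs[m]) / (xs[j] - xs[m])
        total += fs[j] * lj
    return total

# a degree-2 interpolant reproduces f(x) = x**2 exactly
print(lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```

A Smolyak interpolant combines tensor products of such 1D rules at mixed resolution levels, which is what keeps the term count modest in higher dimensions.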

  14. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.

  15. Proceedings of the NASA Workshop on Surface Fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1982-01-01

    Surface fitting techniques and their utilization are addressed. Surface representation, approximation, and interpolation are discussed, along with statistical estimation problems associated with surface fitting.

  16. New gridded database of clear-sky solar radiation derived from ground-based observations over Europe

    NASA Astrophysics Data System (ADS)

    Bartok, Blanka; Wild, Martin; Sanchez-Lorenzo, Arturo; Hakuba, Maria Z.

    2017-04-01

    Since aerosols modify the entire energy balance of the climate system through different processes, assessments of aerosol multiannual variability are highly required by the climate modelling community. Because of the scarcity of long-term direct aerosol measurements, the retrieval of aerosol information from other types of observations or satellite measurements is very relevant. One approach frequently used in the literature is the analysis of clear-sky solar radiation, which offers a better overview of changes in aerosol content. In this study, two empirical methods are first elaborated to separate clear-sky situations from observed values of surface solar radiation available at the World Radiation Data Center (WRDC), St. Petersburg. The daily data have been checked for temporal homogeneity by applying the MASH method (Szentimrey, 2003). In the first approach, clear-sky situations are detected based on the clearness index, namely the ratio of the surface solar radiation to the extraterrestrial solar irradiation. In the second approach, the observed values of surface solar radiation are compared to the climatology of clear-sky surface solar radiation calculated by the MAGIC radiation code (Mueller et al., 2009). In both approaches the clear-sky radiation values depend strongly on the applied thresholds. In order to eliminate this methodological error, a verification of the clear-sky detection is envisaged through a comparison with the values obtained by a high-time-resolution clear-sky detection and interpolation algorithm (Long and Ackerman, 2000), making use of the high-quality data from the Baseline Surface Radiation Network (BSRN). As a result, clear-sky data series are obtained for 118 European meteorological stations.
    Next, a first attempt has been made to interpolate the point-wise clear-sky radiation data by applying the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis) method for the spatial interpolation of surface meteorological elements, developed at the Hungarian Meteorological Service (Szentimrey 2007). In this way a new gridded database of clear-sky solar radiation is created, suitable for further investigations of the role of aerosols in the energy budget and for validation of climate model outputs.
    References:
    1. Long CN, Ackerman TP. 2000. Identification of clear skies from broadband pyranometer measurements and calculation of downwelling shortwave cloud effects. J. Geophys. Res., 105(D12), 15609-15626, doi:10.1029/2000JD900077.
    2. Mueller R, Matsoukas C, Gratzki A, Behr H, Hollmann R. 2009. The CM-SAF operational scheme for the satellite based retrieval of solar surface irradiance - a LUT based eigenvector hybrid approach. Remote Sensing of Environment, 113(5), 1012-1024, doi:10.1016/j.rse.2009.01.012.
    3. Szentimrey T. 2014. Multiple Analysis of Series for Homogenization (MASHv3.03), Hungarian Meteorological Service, https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/
    4. Szentimrey T, Bihari Z. 2014. Meteorological Interpolation based on Surface Homogenized Data Basis (MISHv1.03), https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/
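The clearness-index screening used in the first approach can be sketched as below; the threshold of 0.7 and the irradiance values are hypothetical placeholders, not the thresholds used in the study:

```python
def detect_clear_sky(ghi_obs, ghi_extraterrestrial, threshold=0.7):
    """Flag samples whose clearness index K_t = G_obs / G_ext exceeds a threshold."""
    flags = []
    for g, g0 in zip(ghi_obs, ghi_extraterrestrial):
        kt = g / g0                      # ratio of surface to extraterrestrial radiation
        flags.append(kt >= threshold)
    return flags

obs = [250.0, 120.0, 300.0]              # W/m^2, hypothetical daily means
ext = [320.0, 310.0, 330.0]
print(detect_clear_sky(obs, ext))        # [True, False, True]
```

As the abstract notes, the results are sensitive to the chosen threshold, which is why an independent verification against BSRN-based detection is envisaged.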

  17. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
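Interpolators in studies like this are typically ranked by cross-validated error such as RMSE. The following leave-one-out sketch uses a simple IDW interpolator and a hypothetical planar bed, not the Lake Vrana data or the 16 methods compared above:

```python
import math

def idw(pts, vals, q, p=2.0):
    """Inverse-distance-weighted estimate at query point q."""
    num = den = 0.0
    for (x, y), v in zip(pts, vals):
        d = math.hypot(q[0] - x, q[1] - y)
        if d == 0.0:
            return v
        num += v / d ** p
        den += 1.0 / d ** p
    return num / den

def loo_rmse(method, pts, vals):
    """Leave-one-out cross-validation error: predict each point from the others."""
    sq = 0.0
    for i in range(len(pts)):
        est = method(pts[:i] + pts[i + 1:], vals[:i] + vals[i + 1:], pts[i])
        sq += (est - vals[i]) ** 2
    return math.sqrt(sq / len(pts))

# hypothetical depth soundings on a plane bed z = x + 2y
pts = [(x, y) for x in range(3) for y in range(3)]
vals = [x + 2 * y for x, y in pts]
print(round(loo_rmse(idw, pts, vals), 3))
```

Running the same loop over several interpolators and comparing the resulting RMSE values is the essence of the method comparison described in the abstract.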

  18. Response Requirement and Nature of Interpolated Stories in Retroactive Inhibition in Prose.

    ERIC Educational Resources Information Center

    Van Mondfrans, Adrian P.; And Others

    Retroactive inhibition, a loss of memory due to learning other materials between recall and exposure to the original materials, was investigated in relation to prose. Two variables were manipulated in the study: similarity of interpolated stories (dissimilar or similar), and the response requirements (completion-recall or multiple-choice). The 190…

  19. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  20. Development of a Boundary Layer Property Interpolation Tool in Support of Orbiter Return To Flight

    NASA Technical Reports Server (NTRS)

    Greene, Francis A.; Hamilton, H. Harris

    2006-01-01

    A new tool was developed to predict the boundary layer quantities required by several physics-based predictive/analytic methods that assess damaged Orbiter tile. This new tool, the Boundary Layer Property Prediction (BLPROP) tool, supplies boundary layer values used in correlations that determine boundary layer transition onset and surface heating-rate augmentation/attenuation factors inside tile gouges (i.e. cavities). BLPROP interpolates through a database of computed solutions and provides boundary layer and wall data (delta, theta, Re(sub theta)/M(sub e), Re(sub theta), P(sub w), and q(sub w)) based on user input surface location and free stream conditions. Surface locations are limited to the Orbiter's windward surface. Constructed using predictions from an inviscid w/boundary-layer method and benchmark viscous CFD, the computed database covers the hypersonic continuum flight regime based on two reference flight trajectories. First-order one-dimensional Lagrange interpolation accounts for Mach number and angle-of-attack variations, whereas non-dimensional normalization accounts for differences between the reference and input Reynolds number. Employing the same computational methods used to construct the database, solutions at other trajectory points taken from previous STS flights were computed: these results validate the BLPROP algorithm. Percentage differences between interpolated and computed values are presented and are used to establish the level of uncertainty of the new tool.
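First-order one-dimensional Lagrange interpolation applied successively in Mach number and angle of attack amounts to a bilinear table lookup; the stencil values below are hypothetical, not taken from the BLPROP database:

```python
def lagrange1d(x, x0, x1, f0, f1):
    """First-order Lagrange (linear) interpolation between two grid values."""
    return f0 * (x - x1) / (x0 - x1) + f1 * (x - x0) / (x1 - x0)

def interp_mach_aoa(mach, aoa, grid):
    """Interpolate a boundary-layer quantity over a 2x2 (Mach, AoA) stencil.
    grid = ((m0, m1), (a0, a1), ((f00, f01), (f10, f11))) -- all hypothetical."""
    (m0, m1), (a0, a1), f = grid
    g0 = lagrange1d(aoa, a0, a1, f[0][0], f[0][1])   # interpolate in AoA at m0
    g1 = lagrange1d(aoa, a0, a1, f[1][0], f[1][1])   # interpolate in AoA at m1
    return lagrange1d(mach, m0, m1, g0, g1)          # then in Mach

# hypothetical momentum-thickness Reynolds numbers at four database points
grid = ((18.0, 22.0), (35.0, 45.0), ((200.0, 240.0), (260.0, 300.0)))
print(interp_mach_aoa(20.0, 40.0, grid))  # 250.0, the stencil midpoint value
```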

  1. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    Three-dimensional modelling of the Earth is one of the most significant tools in many engineering projects and has many applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTM). DTMs have numerous applications in science, engineering, design, and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several interpolation methods, whose results depend on the environmental conditions and the input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, are optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of the Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for creating elevations. The results show that AI methods have a high potential in the interpolation of elevations, and that artificial network algorithms and IDW optimised with GA can estimate elevations with high precision.
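A sketch of the IDW interpolator whose power parameter a GA could tune against a leave-one-out objective; here a coarse parameter sweep stands in for the genetic search, and the sample data are synthetic:

```python
import math
import random

def idw(pts, vals, q, p):
    """Inverse Distance Weighting with tunable power p."""
    num = den = 0.0
    for (x, y), v in zip(pts, vals):
        d = math.hypot(q[0] - x, q[1] - y)
        if d == 0.0:
            return v
        num += v / d ** p
        den += 1.0 / d ** p
    return num / den

def fitness(p, pts, vals):
    """Leave-one-out squared error of IDW with power p (the GA's objective)."""
    err = 0.0
    for i in range(len(pts)):
        est = idw(pts[:i] + pts[i + 1:], vals[:i] + vals[i + 1:], pts[i], p)
        err += (est - vals[i]) ** 2
    return err

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(40)]
vals = [math.sin(3 * x) + y for x, y in pts]     # synthetic "elevations"
# a GA would evolve p; here a coarse sweep over p = 1.0 .. 6.0 stands in
best_p = min((fitness(p / 2, pts, vals), p / 2) for p in range(2, 13))[1]
print(1.0 <= best_p <= 6.0)  # True
```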

  2. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.

  3. Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation

    PubMed Central

    Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed

    2012-01-01

    This paper presents a novel, real-time defect detection system, based on a best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that the physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186

  4. Spatial Interpolation of Fine Particulate Matter Concentrations Using the Shortest Wind-Field Path Distance

    PubMed Central

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
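The key step of replacing Euclidean distance with a shortest wind-field path distance can be sketched with Dijkstra's algorithm on a small cost grid; the Gaussian-dispersion cost surface itself is not reproduced, and the grid values are hypothetical:

```python
import heapq
import math

def shortest_path_dist(cost, src, dst):
    """Dijkstra over a 2-D cost grid; edge weight = mean of the two cell costs
    (4-neighbour moves). Returns the accumulated cost from src to dst."""
    rows, cols = len(cost), len(cost[0])
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), math.inf):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

# a high-cost "ridge" (e.g. transport against the wind) in the middle column
cost = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
# the path detours around the ridge (cost 6.0) instead of crossing it (cost 10.0)
print(shortest_path_dist(cost, (0, 0), (0, 2)))  # 6.0
```

In the method above, such path distances are substituted for the Euclidean distances inside the IDW weights.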

  5. Spatial interpolation of fine particulate matter concentrations using the shortest wind-field path distance.

    PubMed

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.

  6. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online, and a seamless integration of high-resolution bathymetric and topographic data is desirable. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to determine the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.

  7. An Extended Kriging Method to Interpolate Near-Surface Soil Moisture Data Measured by Wireless Sensor Networks

    PubMed Central

    Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng

    2017-01-01

    In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method to interpolate with the aid of remote sensing images. The underlying idea is to extend the traditional Kriging by introducing spectral variables, and operating on spatial and spectral combined space. The algorithm has been applied to WSN-measured soil moisture data in HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods are applied: Ordinary Kriging (OK), which used WSN data only, Co-kriging and KED, both of which integrated remote sensing data as covariate. Visual inspections indicate that the result from Extended Kriging shows more spatial details than that of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351

  8. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

    Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension to standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in the use of a ray interpolation scheme, based on approximating the wave front as a `locally spherical' surface, and a `first arrival mode', which reduces computation times where only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows for rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient, the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolation of the new ray. Our WFC technique is illustrated using a realistic velocity model, based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles and geometrical spreading factors, all of which are interpolated on to a regular grid. We compare geometrical spreading factors calculated using two methods: using the ray Jacobian and by taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values, which can be spread over a large area, as only a few initial rays are traced in WFC. 
We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.

  9. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a `bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally tests interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  10. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our CNC (Computer Numerical Control) system under development. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. From the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolating cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, together with the method for integrating the interpolator into the CNC system. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
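
    The Taylor-series feedrate update underlying such interpolators can be sketched in a few lines. This is a minimal first-order illustration, not the authors' implementation: a cubic Bézier curve stands in for the NURBS tool path, and all names and parameter values are hypothetical.

```python
import numpy as np

def curve_point(u, P):
    """Evaluate a cubic Bezier curve (stand-in for a NURBS tool path) at u."""
    b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
    return b @ P

def curve_deriv(u, P):
    """First derivative dC/du of the cubic Bezier."""
    b = np.array([-3*(1 - u)**2, 3*(1 - u)**2 - 6*u*(1 - u),
                  6*u*(1 - u) - 3*u**2, 3*u**2])
    return b @ P

def feedrate_interpolate(P, feedrate, cycle):
    """First-order Taylor parameter update: u_{k+1} = u_k + F*T / |C'(u_k)|,
    which advances the tool by roughly (feedrate * cycle) along the curve."""
    u, points = 0.0, [curve_point(0.0, P)]
    while u < 1.0:
        du = feedrate * cycle / np.linalg.norm(curve_deriv(u, P))
        u = min(u + du, 1.0)
        points.append(curve_point(u, P))
    return np.array(points)

# Hypothetical tool-tip control points (mm), 10 mm/s feedrate, 1 ms cycle.
P = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, -5.0], [30.0, 0.0]])
traj = feedrate_interpolate(P, feedrate=10.0, cycle=0.001)
```

    Each interpolating cycle then emits one servo target; higher-order Taylor terms reduce the feedrate fluctuation further.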

  11. VizieR Online Data Catalog: New atmospheric parameters of MILES cool stars (Sharma+, 2016)

    NASA Astrophysics Data System (ADS)

    Sharma, K.; Prugniel, P.; Singh, H. P.

    2015-11-01

    MILES V2 spectral interpolator. The FITS file is an improved version of the MILES interpolator previously presented in PVK. It contains the coefficients of the interpolator, which allow one to compute an interpolated spectrum, given an effective temperature, log of surface gravity and metallicity (Teff, logg, and [Fe/H]). The file consists of three extensions containing the three temperature regimes described in the paper: extension 0 (warm), Teff 4000-9000 K; extension 1 (hot), Teff >7000 K; extension 2 (cold), Teff <4550 K. The three functions are linearly interpolated in the Teff overlapping regions. Each extension contains a 2D image-type array, whose first axis is the wavelength described by a WCS (air wavelength, starting at 3536 Å, step = 0.9 Å). This FITS file can be used by ULySS v1.3 or higher. (5 data files).
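
    The linear blending between temperature regimes in their overlap region can be illustrated with a short sketch. The stand-in interpolators and the 7000-9000 K overlap bounds are assumptions for illustration, not the actual MILES coefficients.

```python
import numpy as np

def blend(f_warm, f_hot, teff, lo=7000.0, hi=9000.0):
    """Linearly blend two spectrum interpolators over their Teff overlap.
    Below lo only f_warm contributes, above hi only f_hot; in between the
    weight shifts linearly from one to the other."""
    w = np.clip((teff - lo) / (hi - lo), 0.0, 1.0)
    return (1.0 - w) * f_warm(teff) + w * f_hot(teff)

# Hypothetical stand-in interpolators returning a 3-pixel "spectrum".
f_warm = lambda t: np.array([1.0, 2.0, 3.0])
f_hot  = lambda t: np.array([3.0, 2.0, 1.0])

spec_warm = blend(f_warm, f_hot, 6500.0)   # pure warm regime
spec_mid  = blend(f_warm, f_hot, 8000.0)   # 50/50 mixture in the overlap
```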

  12. The effect of blurred plot coordinates on interpolating forest biomass: a case study

    Treesearch

    J. W. Coulston

    2004-01-01

    Interpolated surfaces of forest attributes are important analytical tools and have been used in risk assessments, forest inventories, and forest health assessments. The USDA Forest Service Forest Inventory and Analysis program (FIA) annually collects information on forest attributes in a consistent fashion nation-wide. Users of these data typically perform...

  13. An objective isobaric/isentropic technique for upper air analysis

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.

    1981-01-01

    An objective meteorological analysis technique is presented whereby both horizontal and vertical upper-air analyses are performed. The process used to interpolate grid-point values from the upper-air station data is the same for grid points on both an isobaric surface and a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation for a grid-point potential temperature is performed isobarically, whereas wind, mixing-ratio, and pressure-height values are interpolated from data that lie on the isentropic surface passing through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective hand-drawn analyses. The objective products of version A generally have fair correspondence with the subjective analyses and with the station data, and depict the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.

  14. Using Chebyshev polynomial interpolation to improve the computational efficiency of gravity models near an irregularly-shaped asteroid

    NASA Astrophysics Data System (ADS)

    Hu, Shou-Cun; Ji, Jiang-Hui

    2017-12-01

    In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lower the efficiency of calculating their gravitational fields with the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method is applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
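
    A one-dimensional sketch of the per-cell Chebyshev idea, assuming a point-mass acceleration as a cheap stand-in for the polyhedral gravity model; the cell bounds and polynomial degree are arbitrary choices for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Stand-in for the expensive gravity model: point-mass acceleration 1/r^2.
accel = lambda r: 1.0 / r**2

# Fit a degree-8 Chebyshev interpolant over one radial cell, r in [1.0, 2.0].
nodes = C.chebpts1(9)                   # Chebyshev points of the first kind on [-1, 1]
r = 1.5 + 0.5 * nodes                   # map nodes to the cell [1.0, 2.0]
coeffs = C.chebfit(nodes, accel(r), 8)  # interpolating coefficients

def accel_interp(r_query):
    """Evaluate the cell's Chebyshev interpolant (far cheaper than the full model)."""
    x = (r_query - 1.5) / 0.5           # map the query back to [-1, 1]
    return C.chebval(x, coeffs)
```

    In the paper's setting each octree cell would carry its own coefficient set in three spherical coordinates; the evaluation cost per query is then just a low-degree polynomial.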

  15. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam HydroStar 4300 sonar, an Ashtech ProMark 500 GPS base, and a Thales Z-Max® rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
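
    The multiquadric RBF interpolant that performed best in this comparison can be sketched directly in NumPy. The soundings below are synthetic stand-ins for the survey data, and the shape parameter eps is an arbitrary choice.

```python
import numpy as np

def multiquadric_fit(xy, z, eps=1.0):
    """Solve for RBF weights using the multiquadric kernel
    phi(r) = sqrt(r^2 + eps^2) on the scattered soundings."""
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return np.linalg.solve(np.sqrt(r**2 + eps**2), z)

def multiquadric_eval(xy, w, xq, eps=1.0):
    """Evaluate the fitted surface at query points xq."""
    r = np.linalg.norm(xq[:, None, :] - xy[None, :, :], axis=-1)
    return np.sqrt(r**2 + eps**2) @ w

# Synthetic soundings: (x, y) positions and a smooth "lake-bed" depth z.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(40, 2))
z = np.sin(xy[:, 0] * 0.5) + 0.3 * xy[:, 1]
w = multiquadric_fit(xy, z)
z_hat = multiquadric_eval(xy, w, xy)     # the interpolant reproduces the data
```

    Because the multiquadric interpolation matrix is nonsingular for distinct points, the surface passes exactly through every sounding.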

  16. A Residual Kriging method for the reconstruction of 3D high-resolution meteorological fields from airborne and surface observations

    NASA Astrophysics Data System (ADS)

    Laiti, Lavinia; Zardi, Dino; de Franceschi, Massimiliano; Rampanelli, Gabriele

    2013-04-01

    Manned light aircraft and remotely piloted aircraft represent very valuable and flexible measurement platforms for atmospheric research, as they are able to provide high temporal and spatial resolution observations of the atmosphere above the ground surface. In the present study the application of a geostatistical interpolation technique called Residual Kriging (RK) is proposed for the mapping of airborne measurements of scalar quantities over regularly spaced 3D grids. In RK the dominant (vertical) trend component underlying the original data is first extracted to filter out local anomalies, then the residual field is separately interpolated and finally added back to the trend; the determination of the interpolation weights relies on the estimate of the characteristic covariance function of the residuals, through the computation and modelling of their semivariogram function. The RK implementation also allows for the inference of the characteristic spatial scales of variability of the target field and its isotropization, and for an estimate of the interpolation error. The adopted test-bed database consists of a series of flights of an instrumented motorglider exploring the atmosphere of two valleys near the city of Trento (in the southeastern Italian Alps), performed on fair-weather summer days. The RK method is used to reconstruct fully 3D high-resolution fields of potential temperature and mixing ratio for specific vertical slices of the valley atmosphere, also integrating ground-based measurements from the nearest surface weather stations. From the RK-interpolated meteorological fields, fine-scale features of the atmospheric boundary layer developing over the complex valley topography in connection with the occurrence of thermally driven slope and valley winds are detected. The performance of RK mapping is also tested against two other commonly adopted interpolation methods, i.e. the Inverse Distance Weighting and the Delaunay triangulation methods, by comparing the results of a cross-validation procedure.
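
    A much-simplified sketch of the residual-interpolation idea: extract the dominant vertical trend, interpolate the residuals, and add the trend back at the grid points. Inverse-distance weighting replaces the kriging of residuals here for brevity, and all data are synthetic.

```python
import numpy as np

def residual_interp(xyz, t, xyz_q, power=2.0):
    """Residual-interpolation sketch: fit the dominant vertical trend t(z)
    with a linear polynomial, interpolate the residuals with inverse-distance
    weights (a stand-in for the kriging step), and add the trend back."""
    a, b = np.polyfit(xyz[:, 2], t, 1)            # vertical trend t ~ a*z + b
    resid = t - (a * xyz[:, 2] + b)
    d = np.linalg.norm(xyz_q[:, None, :] - xyz[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ resid + a * xyz_q[:, 2] + b

# Synthetic flight samples: potential temperature increasing with height z (m).
rng = np.random.default_rng(1)
xyz = rng.uniform(0, 1000, size=(200, 3))
theta = 290.0 + 0.005 * xyz[:, 2] + rng.normal(0, 0.1, 200)
grid = np.array([[500.0, 500.0, 250.0], [500.0, 500.0, 750.0]])
theta_grid = residual_interp(xyz, theta, grid)
```

    Detrending first keeps the strong vertical stratification from dominating the spatial covariance of the residual field.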

  17. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or the adaptive cubic-spline interpolation filter is then selectively used, according to the refined edge orientation, to remove jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation, using the directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable-filter-based interpolation filter and the TCLS-based restoration filter have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with a fast, FIR filter-based computational structure.

  18. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
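
    The space-time averaging can be sketched with a separable weighting window using the 510 km / 72 h ranges suggested in the study. The Gaussian window shape and the 1D synthetic data are illustrative assumptions, not the authors' exact interpolator.

```python
import numpy as np

def spacetime_average(x, t, v, xq, tq, lx=510.0, lt=72.0):
    """Weighted space-time average of scattered samples at (xq, tq),
    with a separable Gaussian window of scale lx (km) and lt (hours)."""
    w = np.exp(-0.5 * ((x - xq) / lx) ** 2 - 0.5 * ((t - tq) / lt) ** 2)
    return (w * v).sum() / w.sum()

# Irregular satellite samples of a 1D wind component along the equator.
rng = np.random.default_rng(3)
x = rng.uniform(0, 5000.0, 500)    # position, km
t = rng.uniform(0, 240.0, 500)     # time, hours
v = np.sin(x / 800.0) + rng.normal(0, 0.2, 500)
u_hat = spacetime_average(x, t, v, xq=2500.0, tq=120.0)
```

    Choosing lx and lt by matching spatial and temporal variability, as the paper describes, controls how much aliasing from the uneven satellite sampling survives in the averaged field.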

  19. Solution of the surface Euler equations for accurate three-dimensional boundary-layer analysis of aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Harris, J. E.

    1987-01-01

    The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to the conventional interpolation procedures.

  20. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and to annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. 
    In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area. In view of the uncertainties associated with classical techniques, research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.

  1. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK); kriging with external drift (KED), where the spatial pattern of surface velocity is used as a proxy for that of dH/dt; and their spatiotemporal equivalents (ST-OK and ST-KED).

  2. Creating a monthly time series of the potentiometric surface in the Upper Floridan aquifer, Northern Tampa Bay area, Florida, January 2000-December 2009

    USGS Publications Warehouse

    Lee, Terrie M.; Fouad, Geoffrey G.

    2014-01-01

    In Florida’s karst terrain, where groundwater and surface waters interact, a mapping time series of the potentiometric surface in the Upper Floridan aquifer offers a versatile metric for assessing the hydrologic condition of both the aquifer and overlying streams and wetlands. Long-term groundwater monitoring data were used to generate a monthly time series of potentiometric surfaces in the Upper Floridan aquifer over a 573-square-mile area of west-central Florida between January 2000 and December 2009. Recorded groundwater elevations were collated for 260 groundwater monitoring wells in the Northern Tampa Bay area, and a continuous time series of daily observations was created for 197 of the wells by estimating missing daily values through regression relations with other monitoring wells. Kriging was used to interpolate the monthly average potentiometric-surface elevation in the Upper Floridan aquifer over a decade. The mapping time series gives spatial and temporal coherence to groundwater monitoring data collected continuously over the decade by three different organizations, but at various frequencies. Further, the mapping time series describes the potentiometric surface beneath parts of six regionally important stream watersheds and 11 municipal well fields that collectively withdraw about 90 million gallons per day from the Upper Floridan aquifer. Monthly semivariogram models were developed using monthly average groundwater levels at wells. Kriging was used to interpolate the monthly average potentiometric-surface elevations and to quantify the uncertainty in the interpolated elevations. Drawdown of the potentiometric surface within well fields was likely the cause of a characteristic decrease and then increase in the observed semivariance with increasing lag distance. This characteristic made the hole-effect model appropriate for describing the monthly semivariograms and the interpolated surfaces. 
Spatial variance reflected in the monthly semivariograms decreased markedly between 2002 and 2003, timing that coincided with decreases in well-field pumping. Cross-validation results suggest that the kriging interpolation may smooth over the drawdown of the potentiometric surface near production wells. The groundwater monitoring network of 197 wells yielded an average kriging error in the potentiometric-surface elevations of 2 feet or less over approximately 70 percent of the map area. Additional data collection within the existing monitoring network of 260 wells and near selected well fields could reduce the error in individual months. Reducing the kriging error in other areas would require adding new monitoring wells. Potentiometric-surface elevations fluctuated by as much as 30 feet over the study period, and the spatially averaged elevation for the entire surface rose by about 2 feet over the decade. Monthly potentiometric-surface elevations describe the lateral groundwater flow patterns in the aquifer and are usable at a variety of spatial scales to describe vertical groundwater recharge and discharge conditions for overlying surface-water features.
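
    One common parameterization of the hole-effect semivariogram mentioned above is gamma(h) = sill * (1 - sin(h/a)/(h/a)); the exact form used in the study is not given here, so this is an illustrative sketch.

```python
import numpy as np

def hole_effect(h, sill, a):
    """Hole-effect semivariogram gamma(h) = sill * (1 - sin(h/a)/(h/a)).
    Unlike monotone models, the semivariance overshoots the sill and dips
    back, matching a decrease-then-increase of semivariance with lag."""
    x = np.asarray(h, dtype=float) / a
    with np.errstate(invalid="ignore"):       # 0/0 at h = 0 handled below
        g = sill * (1.0 - np.sin(x) / x)
    return np.where(x == 0.0, 0.0, g)

lags = np.array([0.0, 1.0, np.pi, 2 * np.pi])   # lag distances, in units of a
gamma = hole_effect(lags, sill=4.0, a=1.0)
```

    At h = pi*a the model passes exactly through the sill, and the oscillation at larger lags captures the periodic spatial structure induced by the well-field drawdown cones.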

  3. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. 
    Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed a comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results on phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.

  4. Summary on several key techniques in 3D geological modeling.

    PubMed

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.

  5. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected by drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For comparison of the results, we used check points drawn from the same drive-test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
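
    The IDW variant with adjustable power and neighbour count, the two knobs varied in this study, can be sketched as follows; the drive-test samples are synthetic and the path-loss trend is a hypothetical stand-in.

```python
import numpy as np

def idw(xy, v, xq, power=2.0, k=8):
    """Inverse Distance Weighting with a chosen power and number of
    nearest neighbours k."""
    d = np.linalg.norm(xq[:, None, :] - xy[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]            # k nearest samples per query
    dk = np.take_along_axis(d, idx, axis=1)
    vk = v[idx]
    w = 1.0 / np.maximum(dk, 1e-12) ** power      # guard against zero distance
    return (w * vk).sum(axis=1) / w.sum(axis=1)

# Synthetic drive-test data: signal level (dBm) sampled along roads (m).
rng = np.random.default_rng(2)
xy = rng.uniform(0, 5000, size=(300, 2))
rssi = -60.0 - 0.005 * xy[:, 0] + rng.normal(0, 1.0, 300)
grid = np.array([[1000.0, 2500.0], [4000.0, 2500.0]])
pred = idw(xy, rssi, grid, power=2.0, k=10)
```

    Because the prediction is a convex combination of neighbour values, IDW never extrapolates beyond the observed range, one reason its behaviour differs from kriging in sparse areas.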

  6. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown inputs or if the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error of the modal shapes only, rather than of all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported herein.

  7. Morphing of spatial objects in real time with interpolation by functions of radial and orthogonal basis

    NASA Astrophysics Data System (ADS)

    Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai

    2018-05-01

    The article is devoted to the visualization of the morphing of spatial objects described by a set of unordered reference points. A two-stage model construction is proposed to change the object’s form in real time. The first (preliminary) stage is interpolation of the object’s surface by radial basis functions, in which the initial reference points are replaced by new, spatially ordered ones, and the patterns by which the reference points’ coordinates change during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the productivity of the calculations.

  8. Definition and verification of a complex aircraft for aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1986-01-01

    Techniques are reviewed which are of value in CAD/CAM CFD studies of the geometries of new fighter aircraft. In order to refine the computations of the flows to take advantage of the computing power available from supercomputers, it is often necessary to interpolate the geometry of the mesh selected for the numerical analysis of the aircraft shape. Interpolating the geometry permits a higher level of detail in calculations of the flow past specific regions of a design. A microprocessor-based mathematics engine is described for fast image manipulation and rotation to verify that the interpolated geometry will correspond to the design geometry in order to ensure that the flow calculations will remain valid through the interpolation. Applications of the image manipulation system to verify geometrical representations with wire-frame and shaded-surface images are described.

  9. New families of interpolating type IIB backgrounds

    NASA Astrophysics Data System (ADS)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T^2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology-changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N = 2 or N = 1 supersymmetry. In the N = 2 case it can be shown that the solutions are regular.

  10. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. The surface observation network in Latvia currently consists of 22 stations recording daily air temperature, so analyzing very specific and local features in the spatial distribution of temperature across the whole of Latvia requires a high-quality spatial interpolation method. Until now, the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre used inverse distance weighted interpolation for air temperature data, with no additional topographical information taken into account. This made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method that considers auxiliary explanatory parameters was applied and tested. To spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution containing parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. To validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.

  11. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    PubMed

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, (b) an approximation to the potential surface, (c) its gradients, and (d) a Shannon-information-theory-based expression that estimates the local entropy associated with the quantum wavepacket. Together these four criteria enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to (a) compute potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene), where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of the sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.

  12. Summary on Several Key Techniques in 3D Geological Modeling

    PubMed Central

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029

  13. Surface electric fields for North America during historical geomagnetic storms

    USGS Publications Warehouse

    Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.

    2013-01-01

    To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
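The impedance step described above, converting interpolated magnetic-field data into horizontal electric fields, can be sketched with the standard plane-wave (magnetotellurics-style) calculation. This is a generic illustration assuming a uniform half-space of resistivity `rho`, not the study's 25 region-specific surface impedances; the function and variable names are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def surface_e_field(b_horiz, dt, rho=1000.0):
    """Estimate one horizontal E-field component (V/m) from the
    orthogonal horizontal B-field component (T), via the plane-wave
    relation E(w) = Z(w) * B(w) / mu0 with the uniform half-space
    impedance Z(w) = sqrt(i * w * mu0 * rho)."""
    n = len(b_horiz)
    B = np.fft.rfft(b_horiz)                     # B-field spectrum
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)     # angular frequencies
    Z = np.sqrt(1j * w * MU0 * rho)              # half-space impedance (ohm)
    E = Z * B / MU0
    return np.fft.irfft(E, n)
```

Because |Z| grows with both frequency and resistivity, the same magnetic disturbance induces stronger electric fields over high-resistivity ground, consistent with the elevated amplitudes the study finds along the Atlantic and Gulf Coasts.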

  14. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Interference Effects in Schizophrenic Short-Term Memory

    ERIC Educational Resources Information Center

    Bauman, Edward; Kolisnyk, Eugene

    1976-01-01

    Assesses the effects of input and output interference on schizophrenic recall. Input interference is the interference resulting from the interpolation of items between presentation and recall of the probed item. Output interference is the interference resulting from the interpolation of responses between the presentation and recall of the probed…

  16. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

As a key player in space weather, ionospheric variability affects the performance of both communication and navigation systems, so the ionosphere has to be monitored to improve the performance of these systems. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is with dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region. However, TEC data derived from GPS measurements contain measurement noise as well as model and computational errors. It is therefore necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states; in this way the techniques can be compared over many parameters that are controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Networks (MLP-NN), and Radial Basis Function Neural Networks (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are first tried on synthetic TEC surfaces for parameter and coefficient optimization and for determining error bounds. Their interpolation performance is compared on synthetic TEC surfaces over the sampling pattern, the number of samples, the variability of the surface, and the trend type in the TEC surfaces. Examining the performance of the interpolation methods shows that Kriging, RFP, and NN each have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (such as the choice of semivariogram function for the Kriging algorithms and the number of hidden layers and neurons for MLP-NN) mostly depends on the behavior of the ionosphere at the given time instant for the desired region. The sampling pattern and number of samples are other important parameters that may contribute to higher reconstruction errors. For example, for all of the algorithms listed above, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and performance degrades significantly as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using IONOLAB-TEC data (www.ionolab.org). Kriging combined with a Kalman filter and dynamic modeling with NN are also implemented as first trials of TEC and space weather prediction.

  17. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changing magnitude and space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox-transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, using 27-year monthly average precipitation data obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method is more accurate than the non-transformed hierarchical Bayesian method.

  18. An Inverse Interpolation Method Utilizing In-Flight Strain Measurements for Determining Loads and Structural Response of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Shkarayev, S.; Krashantisa, R.; Tessler, A.

    2004-01-01

    An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
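The least-squares core of such an inverse formulation can be sketched generically: the applied load is approximated by a linear combination of basis load cases, each basis case's strain response at the sensor locations comes from a direct finite-element run, and the coefficients are recovered by minimizing the misfit to measured strains. The matrix `A` and function name here are illustrative, not the authors' code:

```python
import numpy as np

def recover_load_coefficients(A, measured_strains):
    """Inverse-interpolation sketch: A[i, j] is the strain computed at
    sensor i by a direct finite-element analysis of unit basis load j.
    Solving the overdetermined linear system A c ~= measured_strains in
    the least-squares sense yields the load-approximation coefficients."""
    c, *_ = np.linalg.lstsq(A, measured_strains, rcond=None)
    return c
```

With more strain sensors than load parameters, the normal equations of this minimization are exactly the "governing system of linear algebraic equations" the abstract refers to.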

  19. Forecasted Flood Depth Grids Providing Early Situational Awareness to FEMA during the 2017 Atlantic Hurricane Season

    NASA Astrophysics Data System (ADS)

    Jones, M.; Longenecker, H. E., III

    2017-12-01

The 2017 hurricane season brought the unprecedented landfall of three Category 4 hurricanes (Harvey, Irma and Maria). FEMA is responsible for coordinating the federal response and recovery efforts for large disasters such as these. FEMA depends on timely and accurate depth grids to estimate hazard exposure, model damage assessments, plan flight paths for imagery acquisition, and prioritize response efforts. Producing riverine or coastal depth grids based on observed flooding requires peak crest water levels at stream gauges, tide gauges, high water marks, and best-available elevation data. Because peak crest data are not available until the apex of a flooding event, and high water marks from a large-scale event may take field teams several weeks to collect, final observed depth grids are not available to FEMA until several days after a flood has begun to subside. Within the last decade NOAA's National Weather Service (NWS) has implemented the Advanced Hydrologic Prediction Service (AHPS), a web-based suite of accurate forecast products that provide hydrograph forecasts at over 3,500 stream gauge locations across the United States. These forecasts have been newly implemented into an automated depth grid script tool that uses predicted instead of observed water levels, giving FEMA access to flood hazard information up to 3 days prior to a flooding event. Water depths are calculated from the AHPS predicted flood stages and are interpolated at 100 m spacing along NHD hydrolines within the basin of interest. A water surface elevation raster is generated from these water depths using an Inverse Distance Weighted interpolation. Then, elevation (USGS NED 30 m) is subtracted from the water surface elevation raster so that the remaining values represent the depth of predicted flooding above the ground surface.
This automated process requires minimal user input and produced forecasted depth grids that were comparable to post-event observed depth grids and remote sensing-derived flood extents for the 2017 hurricane season. These newly available forecasted models were used for pre-event response planning and early estimated hazard exposure counts, allowing FEMA to plan for and stand up operations several days sooner than previously possible.
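The depth-grid step described above (an IDW-interpolated water-surface raster minus the ground elevation, with dry cells clipped to zero) can be sketched as follows. The arrays and function name are hypothetical, and the real tool operates on geospatial rasters rather than plain NumPy arrays:

```python
import numpy as np

def forecast_depth_grid(xy_pts, wse_pts, grid_x, grid_y, dem, power=2.0):
    """Sketch: water-surface elevations (wse_pts) predicted at points
    (xy_pts) along the stream network are spread over a grid by
    inverse-distance-weighted interpolation; the DEM is subtracted and
    negative values (ground above water) are clipped to zero depth."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    # distance from every grid cell to every sample point
    d = np.hypot(gx[..., None] - xy_pts[:, 0], gy[..., None] - xy_pts[:, 1])
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    w = 1.0 / d**power
    wse = (w * wse_pts).sum(axis=-1) / w.sum(axis=-1)
    return np.maximum(wse - dem, 0.0)         # flood depth above ground
```

Subtracting the DEM last is what lets a smooth interpolated water surface produce realistic, terrain-following flood extents.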

  20. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

Contents include: Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization; Toward…; …interpolation and Interpolation by Box Spline Surfaces (Charles K. Chui, Harvey Diamond, Louise A. Raphael); Knot Selection for Least Squares… (West Virginia University, Morgantown, West Virginia; and Louise Raphael, National Science Foundation, Washington, DC).

  1. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

Objective: To improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, the paper focuses on finding several different fitting or interpolation methods to improve the data precision, so as to meet, as far as possible, the accuracy requirements of road survey and design specifications. Method: Given the elevation differences at a limited number of common points, the elevation difference at any other point can be fitted or interpolated; the precise elevation can then be obtained by subtracting the elevation difference from the Google Earth data. Quadratic polynomial surface fitting, cubic polynomial surface fitting, the V4 interpolation method in MATLAB, and a neural network method are used in this paper to process Google Earth elevation data, with internal conformity, external conformity and the cross-correlation coefficient used as evaluation indexes of the data processing effect. Results: The V4 interpolation method has zero residual at the fitting points, the largest external conformity, and the worst precision improvement, so it is ruled out. Both the internal and external conformity of the cubic polynomial surface fitting method are better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but fits better where the elevation differences are larger. Because the neural network is a less controllable fitting model, the cubic polynomial surface fitting method should be the main method, with the neural network as an auxiliary method in cases of larger elevation differences. Conclusions: Cubic polynomial surface fitting can appreciably improve the precision of Google Earth elevation data. After the precision improvement, the error of data in hilly terrain meets the requirements of the specifications, so the data can be used in the feasibility study stage of road survey and design.
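A minimal sketch of the cubic-polynomial-surface step, assuming the elevation differences `dz` observed at the common points are fitted by ordinary least squares over all monomials x^i y^j with i + j ≤ 3 (names are illustrative; the paper's exact formulation may differ):

```python
import numpy as np

def fit_cubic_surface(x, y, dz):
    """Least-squares fit of a cubic polynomial surface
    dz(x, y) = sum_{i+j<=3} c_ij * x^i * y^j
    to elevation differences at common points.  Returns the monomial
    exponent list and the 10 fitted coefficients."""
    terms = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coef, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return terms, coef

def eval_cubic_surface(terms, coef, x, y):
    """Evaluate the fitted correction surface at (x, y)."""
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coef))
```

The fitted surface predicts the elevation correction at any point, which is then subtracted from the raw Google Earth elevation.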

  2. Climate applications for NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Banzon, P. V. F.; Liu, G.; Saha, K.; Wilson, C.; Stachniewicz, J. S.

    2015-12-01

Few sea surface temperature (SST) datasets from satellites have the long temporal span needed for climate studies. The NOAA Daily Optimum Interpolation Sea Surface Temperature (DOISST) on a 1/4° grid, produced at the National Centers for Environmental Information, is based primarily on SSTs from the Advanced Very High Resolution Radiometer (AVHRR), available from 1981 to the present. AVHRR data can contain biases, particularly when aerosols are present. Over the three-decade span, the largest departures of AVHRR SSTs from buoy temperatures occurred during the Mt Pinatubo and El Chichón eruptions. Therefore, in DOISST, AVHRR SSTs are bias-adjusted to match in situ SSTs prior to interpolation. This produces a consistent time series of complete SST fields that is suitable for modelling and for investigating local climate phenomena like El Niño or the Pacific warm blob in a long-term context. Because many biological processes and animal distributions are temperature dependent, there are also many ecological uses of DOISST (e.g., coral bleaching thermal stress, fish and marine mammal distributions), providing insights into resource management in a changing ocean. The advantages and limitations of using DOISST for different applications will be discussed.

  3. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

Disease prevalence can be spatially analysed to provide support for service implementation and health care planning; these analyses often display geographic variation. A key challenge is to communicate the results to decision makers with variable levels of Geographic Information Systems (GIS) knowledge in a way that represents the data and allows for comprehension. The present research combines established GIS methods and software tools into a novel technique for visualising disease admissions, helping to prevent misinterpretation of data and suboptimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; the tool has the advantage of conveying the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both the standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders.
CartIS decreases the number of extreme positive values, absent from the underlying data, that can be found in interpolation surfaces. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation and communication techniques that highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
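The panel-based bilinear representation mentioned above can be illustrated with a minimal sketch. The local coordinates `(u, v)` and the corner ordering are assumptions for illustration, not the report's notation:

```python
def bilinear_panel_pressure(p00, p10, p01, p11, u, v):
    """Bilinear pressure interpolation on a single panel: p00..p11 are
    pressures at the four panel corners, and (u, v) are local panel
    coordinates in [0, 1] x [0, 1]."""
    return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
            + (1 - u) * v * p01 + u * v * p11)
```

Each panel's pressure is then a smooth function of position rather than a single constant, which is what reduces the local-variation problems of a single global polynomial fit.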

  5. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.

  6. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
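Ordinary kriging with a linear semivariogram, the combination the authors favour, can be sketched in textbook form. This is a generic implementation, not Surfer's; `slope` and the function name are illustrative:

```python
import numpy as np

def ordinary_kriging(pts, vals, query, slope=1.0):
    """Ordinary kriging with a linear semivariogram gamma(h) = slope*h.
    For each query point, solves the kriging system
        [[Gamma, 1], [1^T, 0]] [w, mu] = [gamma0, 1]
    where Gamma holds pairwise semivariances between data points and
    gamma0 those between the data points and the query point; the
    Lagrange multiplier mu enforces that the weights sum to one."""
    n = len(pts)
    gamma = lambda h: slope * h
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = gamma(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    G[n, n] = 0.0
    out = np.empty(len(query))
    for k, q in enumerate(query):
        rhs = np.append(gamma(np.linalg.norm(pts - q, axis=-1)), 1.0)
        w = np.linalg.solve(G, rhs)[:n]
        out[k] = w @ vals
    return out
```

A practical advantage noted in the paper is that the same system also yields a kriging variance for each interpolated value, which the other tested methods do not provide.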

  7. Evaluation of rainfall structure on hydrograph simulation: Comparison of radar and interpolated methods, a study case in a tropical catchment

    NASA Astrophysics Data System (ADS)

    Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.

    2017-12-01

    The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses that translate into discharge simulation uncertainties. Distributed hydrological models are based on evolving rainfall fields in the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well maintained rain gauge network provide a very good estimation of the total volume for each rainfall event, however, the spatial structure relies on interpolation strategies introducing considerable uncertainty in the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better spatial structure representation but with higher uncertainty in the surface precipitation intensity and volume depending on the vertical rainfall characteristics and radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of the rainfall spatio-temporal variability, a numerical modeling experiment is proposed including the use of a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data in order to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms, the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the assessment of simulated peak flow. Results show significant rainfall differences between radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores, and to artificially generate rainfall in some areas of the catchment. 
Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on the density of the rain gauge network and its distribution relative to the precipitation events. The results for the 84 storms show a better model skill using radar QPE than the interpolated fields. Results using interpolated fields are highly affected by the dominant rainfall type and the basin scale.

  8. Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure

    NASA Astrophysics Data System (ADS)

    Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. P.

    2017-11-01

    The paper describes the procedure used for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from the results of pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, using the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the regime of landing and caused by the action of the vertical velocity deceleration nozzle jets are negligibly small in comparison with the engine thrust.
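
    The force computation described here reduces to a surface quadrature of the measured pressure over a polar grid of taps. The sketch below substitutes simple trapezoidal quadrature for the paper's interpolation polynomial; the function name, node counts, and the uniform-pressure check are illustrative only, not taken from the paper:

```python
import math

def axial_force(pressure, radius, n_r=50, n_theta=72):
    """Axial force F = integral of p(r, theta) * r dr dtheta over a disc,
    by trapezoidal quadrature in r (theta is periodic, so uniform weights
    suffice). A stand-in for the paper's interpolation-polynomial scheme."""
    dr = radius / n_r
    dtheta = 2.0 * math.pi / n_theta
    total = 0.0
    for i in range(n_r + 1):
        r = i * dr
        w_r = 0.5 if i in (0, n_r) else 1.0   # trapezoid end-point weights
        for j in range(n_theta):
            theta = j * dtheta
            total += w_r * pressure(r, theta) * r * dr * dtheta
    return total

# Sanity check: uniform pressure p0 over a disc of radius R gives p0*pi*R^2.
p0, R = 1.0e4, 0.5
print(axial_force(lambda r, th: p0, R))
```

    The trapezoid rule is exact for the linear integrand p0*r, so the uniform-pressure case recovers p0*pi*R^2 to round-off; moments about an axis follow the same pattern with an extra lever-arm factor in the integrand.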

  9. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early-twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging interpolation technique. This study will present results which quantify the variability and trends and compare them with those of other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).

  10. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation without introducing re-sampling errors, while also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors that arise with the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map has been two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different options such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. 
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.

  11. Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    USGS Publications Warehouse

    Williams, Lester J.; Dixon, Joann F.

    2015-01-01

    Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system were developed to define an updated hydrogeologic framework as part of the U.S. Geological Survey Groundwater Resources Program. The dataset contains structural surfaces depicting the top and base of the aquifer system, its major and minor hydrogeologic units and zones, geophysical marker horizons, and the altitude of the 10,000-milligram-per-liter total dissolved solids boundary that defines the approximate fresh and saline parts of the aquifer system. The thicknesses of selected major and minor units or zones were determined by interpolating points of known thickness or from raster surface subtraction of the structural surfaces. Additional data contained in the dataset include clipping polygons; regional polygon features that represent geologic or hydrogeologic aspects of the aquifers and the minor units or zones; data points used in the interpolation; and polygon and line features that represent faults, boundaries, and other features in the aquifer system.

  12. Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime

    DTIC Science & Technology

    2014-10-31

    A potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method ... between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we ...

  13. A projection method for coupling two-phase VOF and fluid structure interaction simulations

    NASA Astrophysics Data System (ADS)

    Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro

    2018-02-01

    The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.

  14. The ARM Best Estimate 2-dimensional Gridded Surface

    DOE Data Explorer

    Xie,Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.

  15. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation's effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data, and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolations all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  16. Daily air temperature interpolated at high spatial resolution over a large mountainous region

    USGS Publications Warehouse

    Dodson, R.; Marks, D.

    1997-01-01

    Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km2) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
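
    The LLRA procedure described above can be sketched in a few lines: reduce each station temperature to sea level with a constant lapse rate, interpolate the sea-level values with inverse squared distances, and map back to each DEM cell's elevation. All data structures and names below are illustrative, not the study's code:

```python
def llra_interpolate(stations, grid_elev, lapse=-0.0065):
    """Linear lapse rate adjustment (LLRA).
    stations:  list of (x, y, elevation_m, temp_c) tuples
    grid_elev: dict mapping (x, y) cell centres to elevation_m
    lapse:     constant lapse rate in deg C per metre (-6.5 C/km default)."""
    # Step 1: reduce station temperatures to sea level.
    sea_level = [(x, y, t - lapse * z) for x, y, z, t in stations]
    result = {}
    for (gx, gy), gz in grid_elev.items():
        num = den = 0.0
        for x, y, t0 in sea_level:
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            if d2 == 0.0:              # cell coincides with a station
                num, den = t0, 1.0
                break
            num += t0 / d2             # Step 2: inverse-squared-distance weights
            den += 1.0 / d2
        # Step 3: map the interpolated sea-level value to the DEM elevation.
        result[(gx, gy)] = num / den + lapse * gz
    return result

# A single 10 C station at sea level; a cell at 1000 m comes out 6.5 C cooler.
print(llra_interpolate([(0.0, 0.0, 0.0, 10.0)], {(1.0, 1.0): 1000.0}))
```

    The NSA variant differs only in step 1 and 3, where the potential temperature equation replaces the constant lapse rate.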

  17. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
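
    For context, the look-up-table baseline that the neural network is compared against is typically a multilinear interpolation over a dense node grid. A minimal bilinear (2-D) sketch follows; real RTM tables interpolate over many more dimensions, and the table values here are illustrative:

```python
def bilinear(table, x0, y0, dx, dy, x, y):
    """Bilinear look-up-table interpolation on a regular grid: table[i][j]
    is the value at (x0 + i*dx, y0 + j*dy). No extrapolation handling --
    (x, y) must fall strictly inside the table."""
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    i, j = int(fx), int(fy)          # lower-left node of the enclosing cell
    tx, ty = fx - i, fy - j          # fractional position inside the cell
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# f(x, y) = 2x + y sampled on the unit square; bilinear is exact for it.
table = [[0.0, 1.0], [2.0, 3.0]]
print(bilinear(table, 0.0, 0.0, 1.0, 1.0, 0.5, 0.5))  # → 1.5
```

    The error of this scheme grows with the curvature of the tabulated function between nodes, which is exactly why non-linear radiances force a dense node structure and why a trained network can be an attractive replacement.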

  18. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial base is required, and the Radial Basis Function (RBF) used is the Multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.

  19. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.

    2005-02-01

    A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases, the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method provides a substantial advantage in computational speed and accuracy over existing methods, is robust, and relatively easy to implement.

  20. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1984-01-01

    The objective of this investigation is to develop a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies. A three-dimensional multivariate O/I analysis scheme has been developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  1. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    The development of a state of the art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies was investigated. A three dimensional multivariate O/I analysis scheme was developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  2. Probabilistic surface reconstruction from multiple data sets: An example for the Australian Moho

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Salmon, M.; Kennett, B. L. N.; Sambridge, M.

    2012-10-01

    Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): "We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics". Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.

  3. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.

  4. Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.

    PubMed

    Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M

    2012-02-01

    We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure ground segregation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features like the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI) as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. 
The comparison reveals that there are significant advantages for steep dipping events using the 5-D WABI method when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails substantially benefit from this improved performance of the partial CRS stacking approach while the CPU time is comparable to the CPU time consumed by the rank-reduction-based method.

  6. Investigation of dynamic characteristics of a rotor system with surface coatings

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cao, Dengqing; Wang, Deyou

    2017-02-01

    A Jeffcott rotor system with surface coatings, capable of describing the mechanical vibration resulting from unbalance and rub-impact, is formulated in this article. A contact force model proposed recently to describe the impact force between the disc and casing with coatings is employed to perform the dynamic analysis of the rotor system with a rubbing fault. Due to the variation of penetration, the contact force model is correspondingly modified. Meanwhile, the Coulomb friction model is applied to simulate the friction characteristics. The case study of rub-impact with surface coatings is then simulated by the Runge-Kutta method, in which a linear interpolation method is adopted to predict the rubbing instant. Moreover, the dynamic characteristics of the rotor system with surface coatings are analyzed in terms of bifurcation plots, waveforms, whirl orbits, Poincaré maps and spectrum plots, and the effects of the hardness of the surface coatings on the response are investigated as well. Finally, compared with the classical models, the modified contact force model is shown to be more suitable for solving the rub-impact of aero-engines with surface coatings.

  7. A fast and accurate dihedral interpolation loop subdivision scheme

    NASA Astrophysics Data System (ADS)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. To avoid surface shrinkage, we keep the limit condition unchanged. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly, as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach uses local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method on various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
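
    The adaptive criterion described above can be made concrete: compute the dihedral angle between a face normal and each neighbouring face normal, and refine only when it exceeds a threshold. A minimal sketch follows; the 10° threshold and all names are arbitrary examples, not values from the paper:

```python
import math

def unit_normal(tri):
    """Unit normal of a triangle given as three (x, y, z) vertices."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    return nx / n, ny / n, nz / n

def needs_refinement(tri, neighbours, threshold_deg=10.0):
    """Refine a face only if the dihedral angle between its normal and any
    neighbouring face normal exceeds the threshold -- flat regions stay
    coarse, sharp features get subdivided."""
    n0 = unit_normal(tri)
    for tri2 in neighbours:
        n1 = unit_normal(tri2)
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n0, n1))))
        if math.degrees(math.acos(dot)) > threshold_deg:
            return True
    return False

a = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
b = ((1, 0, 0), (1, 1, 0), (0, 1, 0))   # coplanar with a: flat region
c = ((1, 0, 0), (1, 1, 1), (0, 1, 0))   # folded upward: sharp feature
print(needs_refinement(a, [b]), needs_refinement(a, [c]))  # → False True
```

    Raising the threshold makes the scheme coarser (fewer faces pass the test), which is the single knob the abstract describes for trading accuracy against cost.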

  8. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground points in LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate spline (TPS) until no additional ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
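
    The iterative residual test at the heart of the algorithm can be sketched compactly. The version below replaces the thin plate spline with inverse-distance weighting to stay dependency-free and runs a single level of the hierarchy; all names, the seed set, and the threshold are illustrative:

```python
def idw_height(ground, x, y):
    """Surface height at (x, y) interpolated from ground points -- a simple
    stand-in for the TPS surface used in the paper."""
    num = den = 0.0
    for gx, gy, gz in ground:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return gz
        num += gz / d2
        den += 1.0 / d2
    return num / den

def classify_ground(points, seeds, threshold):
    """One level of residual-based filtering: a point joins the ground set
    when its height is within `threshold` of the surface interpolated from
    the current ground points; iterate until no further point is added."""
    ground = list(seeds)
    remaining = [p for p in points if p not in ground]
    changed = True
    while changed:
        changed = False
        for p in list(remaining):
            x, y, z = p
            if abs(z - idw_height(ground, x, y)) <= threshold:
                ground.append(p)
                remaining.remove(p)
                changed = True
    return ground, remaining

seeds = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]                 # known bare earth
pts = [(5.0, 0.0, 0.1), (2.0, 3.0, 0.0), (5.0, 5.0, 5.0)]   # last is vegetation
ground, nonground = classify_ground(pts, seeds, threshold=0.5)
print(nonground)  # → [(5.0, 5.0, 5.0)]
```

    The full MHC scheme wraps this loop in three passes with progressively finer cells and larger thresholds, feeding each level's ground set into the next.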

  9. Quantifying Groundwater Fluctuations in the Southern High Plains with GIS and Geostatistics

    NASA Astrophysics Data System (ADS)

    Whitehead, B.

    2008-12-01

    Groundwater as a dwindling non-renewable natural resource has been an important research theme in agricultural studies coupled with human-environment interaction. This research incorporated contemporary Geographic Information System (GIS) methodologies and a universal kriging interpolator (geostatistics) to develop depth-to-groundwater surfaces for the southern portion of the High Plains, or Ogallala, aquifer. The variations in the interpolated surfaces were used to calculate the volume of water mined from the aquifer from 1980 to 2005. The findings suggest a nearly inverse relationship to the water withdrawal scenarios derived by the United States Geological Survey (USGS) during the Regional Aquifer System Analysis (RASA) performed in the early 1980s. These results advocate further research into regional climate change, groundwater-surface water interaction, and recharge mechanisms in the region, and provide a substantial contribution to the continuing and contentious issue concerning the environmental sustainability of the High Plains.
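
    Once two depth-to-water surfaces have been interpolated, the mined-volume calculation reduces to summing cell-by-cell drawdown times cell area, scaled by a specific yield. A hedged sketch; the 0.15 specific yield and all names are illustrative, not the study's values:

```python
def mined_volume(depth_start, depth_end, cell_area, specific_yield=0.15):
    """Water volume removed between two interpolated depth-to-water surfaces.
    An increase in depth to groundwater means the water table dropped by that
    amount in the cell; specific yield converts the dewatered sediment volume
    to extractable water (0.15 is a generic illustrative value)."""
    volume = 0.0
    for cell, d0 in depth_start.items():
        drawdown = depth_end[cell] - d0   # positive = water table decline
        volume += drawdown * cell_area * specific_yield
    return volume

# Two 1 km^2 cells whose water table fell by 3 m and 5 m respectively:
v = mined_volume({"a": 30.0, "b": 40.0}, {"a": 33.0, "b": 45.0}, 1.0e6)
print(v)  # cubic metres
```

    Cells where the water table rose contribute negative terms, so recharge and decline net out over the study area, which is what makes the surface-difference approach comparable to withdrawal-scenario estimates.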

  10. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    PubMed

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

    One important method for obtaining continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium content at the Qinghai Lake region in China based on measured values. Then, based on soil type, geology type, land use type, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium content at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods, including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the other models tested in this paper. Notably, the EL-SP maps describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
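    The trend-plus-residual construction can be illustrated compactly. The sketch below is NumPy-only: a linear trend surface is fitted to the coordinates, and a bagged 1-nearest-neighbour regressor over environmental covariates stands in for the paper's ensemble learner; all names and data are hypothetical.

```python
import numpy as np

def el_sp_predict(coords, covars, y, q_coords, q_covars, n_models=20, seed=0):
    """Trend (linear in coordinates) plus a residual modelled by a tiny
    bagging ensemble of 1-NN regressors in environmental-covariate space."""
    rng = np.random.default_rng(seed)
    # 1) linear trend surface z = a + b*x + c*y, fitted by least squares
    A = np.column_stack([np.ones(len(coords)), coords])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    # 2) residual at query points: average of bootstrap 1-NN predictions
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))           # bootstrap resample
        d = np.linalg.norm(q_covars[:, None, :] - covars[idx][None, :, :], axis=2)
        preds.append(resid[idx][np.argmin(d, axis=1)])
    r_hat = np.mean(preds, axis=0)
    Aq = np.column_stack([np.ones(len(q_coords)), q_coords])
    return Aq @ coef + r_hat
```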

  11. Development of the Navy’s Next-Generation Nonhydrostatic Modeling System

    DTIC Science & Technology

    2013-09-30

    e.g. surface roughness, land-sea mask, surface albedo) are needed by physical parameterizations. The surface values will be read and interpolated...characteristics (e.g. albedo, surface roughness) is now available to the model during the initialization stage. We have added infrastructure to the...six faces (Fig 3). Figure 3: Topography (top left, in meters), surface roughness (top right, in meters), albedo (bottom left, no units

  12. Multivariate optimum interpolation of surface pressure and winds over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.

    1984-01-01

    The observations of surface pressure are quite sparse over oceanic areas. An effort to improve the analysis of surface pressure over oceans through the development of a multivariate surface analysis scheme which makes use of surface pressure and wind data is discussed. Although the present research used ship winds, future versions of this analysis scheme could utilize winds from additional sources, such as satellite scatterometer data.

  13. Digital elevation modeling via curvature interpolation for lidar data

    USDA-ARS?s Scientific Manuscript database

    Digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-p...

  14. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N{sub 4}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with amore » review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N{sub 4}. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.« less

  15. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident on an area element of the terrestrial surface. Direct knowledge of it plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed on the southern side of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (the Simple Model of the Atmospheric Radiative Transfer of Sunshine; Gueymard, 2001). The model is initialised assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
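    The analysis step of Optimal Interpolation is the standard update xa = xb + K(y - H xb) with gain K = B H^T (H B H^T + R)^-1. The sketch below is a generic implementation of that update, not ARPA Lombardia's system; the Gaussian background covariance and the single-observation setup in the test are illustrative.

```python
import numpy as np

def oi_analysis(xb, obs, H, B, R):
    """Optimal-interpolation analysis update:
    xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (obs - H @ xb)
```

    With a spatially correlated B, a single accurate observation pulls the analysis towards itself locally while leaving distant grid points near the background.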

  16. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize bezier curves/surfaces and splines along with weighted transfinite interpolation technique and by formulating new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two step grid adaptation procedure is developed by optimally blending adaptive weightings with weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.

  17. Software for C1 interpolation

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1977-01-01

    The problem of mathematically defining a smooth surface, passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain, and first partial derivatives at the modal points are estimated. Interpolation in the triangular cells using a method that gives C sup.1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.

  18. Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo

    NASA Astrophysics Data System (ADS)

    Kosempa, M.; Chambers, D. P.

    2016-02-01

    Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids and compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?

  19. High-Throughput Assay Optimization and Statistical Interpolation of Rubella-Specific Neutralizing Antibody Titers

    PubMed Central

    Lambert, Nathaniel D.; Pankratz, V. Shane; Larrabee, Beth R.; Ogee-Nwankwo, Adaeze; Chen, Min-hsin; Icenogle, Joseph P.

    2014-01-01

    Rubella remains a social and economic burden due to the high incidence of congenital rubella syndrome (CRS) in some countries. For this reason, an accurate and efficient high-throughput measure of antibody response to vaccination is an important tool. In order to measure rubella-specific neutralizing antibodies in a large cohort of vaccinated individuals, a high-throughput immunocolorimetric system was developed. Statistical interpolation models were applied to the resulting titers to refine quantitative estimates of neutralizing antibody titers relative to the assayed neutralizing antibody dilutions. This assay, including the statistical methods developed, can be used to assess the neutralizing humoral immune response to rubella virus and may be adaptable for assessing the response to other viral vaccines and infectious agents. PMID:24391140

  20. A case study of aerosol data assimilation with the Community Multi-scale Air Quality Model over the contiguous United States using 3D-Var and optimal interpolation methods

    NASA Astrophysics Data System (ADS)

    Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol

    2017-12-01

    This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both the GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. Assimilating observations with either GSI or OI generally helps reduce the prediction biases and improves the correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with diameter < 2.5 µm) leads to stronger increments in surface PM2.5 than assimilation of MODIS AOD at the 550 nm wavelength. In contrast, we find a stronger OI impact of the MODIS AOD on surface aerosols at 18:00 UTC compared to the surface PM2.5 OI method. GSI produces smoother results and yields overall better correlation coefficients and root mean squared errors (RMSE). It should be noted that the 3D-Var and OI methods used here differ in several major respects besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases but sometimes increases the RMSE. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.

  1. Calibration method of microgrid polarimeters with image interpolation.

    PubMed

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera under uniform illumination so that the response of the sensor is uniform over the entire field of view, without IFOV error. Then a spline interpolation method is implemented to minimize the IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.

  2. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window, with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4) while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, it has the advantages of simple computation, low time consumption, and short data requirements; the calculation on actual balance FRF data is consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
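    The defining structural property, a cosine-sum window whose DFT is supported only on bins 0, ±1 and ±3, is easy to check numerically. The coefficients below are illustrative placeholders, not the published dual-cosine coefficients; they are chosen so that the front-end value w[0] nearly vanishes, which is the kind of small front-end value the abstract links to transient-error suppression.

```python
import numpy as np

def cosine_sum_window(N, a0=0.4, a1=-0.5, a3=0.1):
    """Cosine-sum window a0 + a1*cos(2*pi*n/N) + a3*cos(6*pi*n/N).

    Any window of this form has a DFT supported only on bins 0, +/-1 and
    +/-3.  The coefficients here are illustrative placeholders (picked so
    that w[0] ~ 0, i.e. a small front-end value), not the published ones."""
    n = np.arange(N)
    return a0 + a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(6 * np.pi * n / N)
```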

  3. Effects of selective attention on perceptual filling-in.

    PubMed

    De Weerd, P; Smith, E; Greenberg, P

    2006-03-01

    After a few seconds, a figure steadily presented in peripheral vision becomes perceptually filled in by its background, as if it had "disappeared". We report that directing attention to the color, shape, or location of a figure increased the probability of perceiving filling-in compared to unattended figures, without modifying the time required for filling-in. This effect could be augmented by boosting attention. Furthermore, the frequency distribution of filling-in response times for attended figures could be predicted by multiplying the frequencies of response times for unattended figures by a constant. We propose that, after failure of figure-ground segregation, the neural interpolation processes that produce perceptual filling-in are enhanced in attended figure regions. As filling-in processes are involved in surface perception, the present study demonstrates that even very early visual processes are subject to modulation by cognitive factors.

  4. Nearly arc-length tool path generation and tool radius compensation algorithm research in FTS turning

    NASA Astrophysics Data System (ADS)

    Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao

    2014-08-01

    In the generation of non-rotationally symmetric microstructure surfaces by turning with a Fast Tool Servo (FTS), non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of using the traditional interpolation rule of equal angles, and which adds tool radius compensation. All the interpolation points are equidistant in the radial direction because of the constant feed speed of the X slider; the high-frequency tool radius compensation components lie in both the X and Z directions, which makes it difficult for the X slider to follow the input orders because of its large mass. Newton's iterative method is used to calculate the coordinates of the neighbouring contour tangent point, with the X position of the interpolation point as the initial value; in this way a new Z coordinate is obtained, and the high-frequency motion component in the X direction is decomposed into the Z direction. Taking as a test a typical microstructure with a 4 μm PV value, mixed from two sine waves of 70 μm wavelength, the maximum profile error at an angle of fifteen degrees is less than 0.01 μm when turning with a diamond tool with a large radius of 80 μm. A sinusoidal grid was machined successfully on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points generated by filtering out the first five harmonics.
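    The tangent-point computation can be sketched with the standard normal-offset construction: for a profile z = f(x) and tool radius r, the contact point x* is the root of g(x) = x - r f'(x)/sqrt(1 + f'(x)^2) - x_cmd, solved by Newton's method with the interpolation point's X position as the initial value. The profile and radius below are hypothetical, and this textbook offset geometry is an illustration, not the authors' exact formulation.

```python
import numpy as np

def contact_point(df, d2f, x_cmd, r, tol=1e-10, max_it=50):
    """Newton iteration for the cutter-contact point x*: the x-coordinate of
    the surface point whose normal offset by the tool radius r lands on the
    commanded slide position x_cmd."""
    x = x_cmd                                  # interpolation-point X as initial value
    for _ in range(max_it):
        s = np.sqrt(1.0 + df(x) ** 2)
        g = x - r * df(x) / s - x_cmd          # offset-x minus commanded x
        dg = 1.0 - r * d2f(x) / s ** 3         # d/dx of the offset expression
        x_new = x - g / dg
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```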

  5. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of them can tolerate irregularity. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, a curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components characterize their original signal with high accuracy but are at least half smaller in size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data becomes incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result and can easily be extended to higher dimensions.
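    The POCS iteration itself is compact. The sketch below substitutes a plain 2-D Fourier transform for the curvelet transform and works in the time-space domain rather than the PFK domain, with a linearly decaying threshold; it illustrates the projection idea, not the authors' implementation.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=100):
    """POCS reconstruction of missing traces: threshold in the Fourier domain
    (standing in for the curvelet transform), then re-insert the observed
    samples; the threshold decays linearly over the iterations."""
    x = data * mask
    tmax = np.abs(np.fft.fft2(x)).max()
    for k in range(n_iter):
        t = tmax * (1.0 - k / n_iter)          # linearly decaying threshold
        F = np.fft.fft2(x)
        F[np.abs(F) < t] = 0.0                 # keep only strong coefficients
        x = np.real(np.fft.ifft2(F))
        x = data * mask + x * (1.0 - mask)     # project onto the observed data
    return x
```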

  6. Distributed snow modeling suitable for use with operational data for the American River watershed.

    NASA Astrophysics Data System (ADS)

    Shamir, E.; Georgakakos, K. P.

    2004-12-01

    The mountainous terrain of the American River watershed (~4300 km2) on the western slope of the northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls snow accumulation and ablation processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snowpack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi-distributed manner (modeling units of about 100-1000 km2) and therefore lumps distinct spatial variability of snow processes. In this study we attempt to account for the spatial variability of precipitation, temperature, and radiation by constructing a distributed snow accumulation and melting model suitable for use with commonly available sparse data. An adaptation of the NWS Snow17 energy and mass balance model that is used operationally at the NWS River Forecast Centers is implemented on 1 km2 grid cells with distributed input and model parameters. The input to the model (i.e., precipitation and surface temperature) is interpolated from observed point data. The surface temperature was interpolated over the basin using adiabatic lapse rates and topographic information, whereas the precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the snow seasons of 1999-2000. Validation of the Snow Water Equivalent (SWE) prediction is done by comparison to observations from 12 snow sensors. The Snow Cover Area (SCA) prediction was evaluated by comparison to remotely sensed 500 m daily snow cover derived from MODIS. The results show that the distribution of snow over the area is well captured and that the quantities compared to the snow gauges are well estimated at high elevations.
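    The lapse-rate step of the temperature distribution can be sketched directly; the fixed 6.5 °C/km rate and the station values in the example are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def distribute_temperature(station_t, station_z, grid_z, lapse=-0.0065):
    """Distribute station temperatures over a DEM with a fixed adiabatic lapse
    rate (degC per metre): reduce each station to sea level, average, then
    lift back up to each grid cell's elevation."""
    t0 = np.mean(station_t - lapse * station_z)   # sea-level-equivalent mean
    return t0 + lapse * grid_z
```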

  7. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads affect the safety of road users, as accident rates increase with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located so that maintenance can be scheduled. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.

  8. Quantifying Libya-4 Surface Reflectance Heterogeneity With WorldView-1, 2 and EO-1 Hyperion

    NASA Technical Reports Server (NTRS)

    Neigh, Christopher S. R.; McCorkel, Joel; Middleton, Elizabeth M.

    2015-01-01

    The land surface imaging (LSI) virtual constellation approach promotes the concept of increasing Earth observations from multiple but disparate satellites. We evaluated this through spectral and spatial domains, by comparing surface reflectance from 30-m Hyperion and 2-m resolution WorldView-2 (WV-2) data in the Libya-4 pseudoinvariant calibration site. We convolved and resampled Hyperion to WV-2 bands using both cubic convolution and nearest neighbor (NN) interpolation. Additionally, WV-2 and WV-1 same-date imagery were processed as a cross-track stereo pair to generate a digital terrain model to evaluate the effects from large (>70 m) linear dunes. Agreement was moderate to low on dune peaks between WV-2 and Hyperion (R2 < 0.4) but higher in areas of lower elevation and slope (R2 > 0.6). Our results provide a satellite sensor intercomparison protocol for an LSI virtual constellation at high spatial resolution, which should start with geolocation of pixels, followed by NN interpolation to avoid tall dunes that enhance surface reflectance differences across this internationally utilized site.

  9. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors earlier developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data without any commercially provided CAM system. The STL format uses a triangular representation of curved surface geometry. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  10. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

    Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even when they match monotone or convex data. Most methods of investigating this problem involve quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned there. Schemes for choosing derivatives were examined. Along the way, fitting the given data points with a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
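    The failure of standard interpolants on monotone data, and one standard shape-preserving fix, can be seen directly with SciPy. PCHIP's derivative choice is one concrete derivative-selection scheme of the kind discussed here (not the scheme investigated in this work), and the data values are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp rise (illustrative values)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xf = np.linspace(0.0, 4.0, 401)
spline = CubicSpline(x, y)(xf)        # standard C2 spline: overshoots the data
pchip = PchipInterpolator(x, y)(xf)   # derivative choice preserves monotonicity

def is_monotone(v):
    return bool(np.all(np.diff(v) >= -1e-12))
```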

  11. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
    CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.
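    The smoothness measure described above, the sum of squared jumps in the third derivative of a cubic-spline interpolant, can be computed directly from SciPy's spline coefficients, since a C2 cubic spline's third derivative is piecewise constant and only its jumps at interior knots carry roughness information. This is a sketch of the measure only, not of the constrained-fitting optimization.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jumps(x, y):
    """CFACS-style smoothness measure: sum of squared jumps of the third
    derivative of the interpolating cubic spline at the interior knots."""
    cs = CubicSpline(x, y)
    d3 = 6.0 * cs.c[0]            # third derivative on each interval
    return float(np.sum(np.diff(d3) ** 2))
```

    On data sampled from a single cubic the measure is essentially zero, while even small noise produces large third-derivative jumps, which is what makes it a useful roughness penalty.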

  12. Collection, processing and error analysis of Terrestrial Laser Scanning data from fluvial gravel surfaces

    NASA Astrophysics Data System (ADS)

    Hodge, R.; Brasington, J.; Richards, K.

    2009-04-01

    The ability to collect 3D elevation data at mm resolution from in-situ natural surfaces, such as fluvial and coastal sediments, rock surfaces, soils and dunes, is beneficial for a range of geomorphological and geological research. From these data the properties of the surface can be measured, and Digital Terrain Models (DTMs) can be constructed. Terrestrial Laser Scanning (TLS) can quickly collect such 3D data with mm precision and mm spacing. This paper presents a methodology for the collection and processing of such TLS data, and considers how the errors in these data can be quantified. TLS has been used to collect elevation data from fluvial gravel surfaces. Data were collected from areas of approximately 1 m2, with median grain sizes ranging from 18 to 63 mm. Errors are inherent in such data as a result of the precision of the TLS and the interaction of factors including laser footprint, surface topography, surface reflectivity and scanning geometry. The methodology for the collection and processing of TLS data from complex surfaces like these fluvial sediments aims to minimise the occurrence of, and to remove, such errors. The methodology incorporates taking scans from multiple scanner locations, averaging repeat scans, and applying a series of filters to remove erroneous points. Analysis of 2.5D DTMs interpolated from the processed data has identified geomorphic properties of the gravel surfaces, including the distribution of surface elevations, preferential grain orientation and grain imbrication. However, validation of the data and interpolated DTMs is limited by the availability of techniques capable of collecting independent elevation data of comparable quality. Instead, two alternative approaches to data validation are presented. The first consists of careful internal validation to optimise filter parameter values during data processing, combined with a series of laboratory experiments.
In the experiments, TLS data were collected from a sphere and planes with different reflectivities to measure the accuracy and precision of TLS data of these geometrically simple objects. Whilst this first approach allows the maximum precision of TLS data from complex surfaces to be estimated, it cannot quantify the distribution of errors within the TLS data and across the interpolated DTMs. The second approach enables this by simulating the collection of TLS data from complex surfaces of a known geometry. This simulated scanning has been verified through systematic comparison with laboratory TLS data. Two types of surface geometry have been investigated: simulated regular arrays of uniform spheres used to analyse the effect of sphere size; and irregular beds of spheres with the same grain size distribution as the fluvial gravels, which provide a comparable complex geometry to the field sediment surfaces. A series of simulated scans of these surfaces has enabled the magnitude and spatial distribution of errors in the interpolated DTMs to be quantified, as well as demonstrating the utility of the different processing stages in removing errors from TLS data. As well as demonstrating the application of simulated scanning as a technique to quantify errors, these results can be used to estimate errors in comparable TLS data.

  13. BOREAS Derived Surface Meteorological Data

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Twine, Tracy; Rinker, Donald; Knapp, David

    2000-01-01

    In 1995, the BOREAS science teams identified the need for a continuous surface meteorological and radiation data set to support flux and surface process modeling efforts. This data set contains actual, substituted, and interpolated 15-minute meteorological and radiation data compiled from several surface measurements sites over the BOREAS SSA and NSA. Temporally, the data cover 01-Jan-1994 to 31-Dec-1996. The data are stored in tabular ASCII files, and are classified as AFM-Staff data.

  14. Use of geostatistics for remediation planning to transcend urban political boundaries.

    PubMed

    Milillo, Tammy M; Sinha, Gaurav; Gardella, Joseph A

    2012-11-01

    Soil remediation plans are often dictated by areas of jurisdiction or property lines instead of scientific information. This study exemplifies how geostatistically interpolated surfaces can substantially improve remediation planning. Ordinary kriging, ordinary co-kriging, and inverse distance weighting spatial interpolation methods were compared for analyzing surface and sub-surface soil sample data originally collected by the US EPA and researchers at the University at Buffalo in Hickory Woods, an industrial-residential neighborhood in Buffalo, NY, where both lead and arsenic contamination is present. Past clean-up efforts estimated contamination levels from point samples, but parcel and agency jurisdiction boundaries were used to define remediation sites, rather than geostatistical models estimating the spatial behavior of the contaminants in the soil. Residents were understandably dissatisfied with the arbitrariness of the remediation plan. In this study we show how geostatistical mapping and participatory assessment can make soil remediation scientifically defensible, socially acceptable, and economically feasible. Copyright © 2012 Elsevier Ltd. All rights reserved.
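
    The inverse distance weighting estimator compared against kriging in this study can be sketched in a few lines. A minimal sketch; the coordinates and soil-lead concentrations below are hypothetical, not the Hickory Woods data:

```python
import math

def idw(points, values, target, power=2):
    """Inverse distance weighted (IDW) estimate at `target` from scattered samples."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:                 # exact hit: return the sample value itself
            return v
        w = d ** -power            # influence decays with distance
        num += w * v
        den += w
    return num / den

# Hypothetical soil-lead concentrations (ppm) at four sampled parcels
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
vals = [400.0, 250.0, 310.0, 500.0]
est = idw(pts, vals, (0.5, 0.5))   # equidistant target -> the plain mean, 365.0
```

    Kriging replaces these fixed 1/d**power weights with weights derived from a fitted variogram, which is also what lets it report an estimation variance alongside each prediction.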

  15. A Thermo-Optic Propagation Modeling Capability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.

  16. Fast Shepard interpolation on graphics processing units: potential energy surfaces and dynamics for H + CH4 → H2 + CH3.

    PubMed

    Welsch, Ralph; Manthe, Uwe

    2013-04-28

    A strategy for the fast evaluation of Shepard interpolated potential energy surfaces (PESs) utilizing graphics processing units (GPUs) is presented. Speed ups of several orders of magnitude are gained for the title reaction on the ZFWCZ PES [Y. Zhou, B. Fu, C. Wang, M. A. Collins, and D. H. Zhang, J. Chem. Phys. 134, 064323 (2011)]. Thermal rate constants are calculated employing the quantum transition state concept and the multi-layer multi-configurational time-dependent Hartree approach. Results for the ZFWCZ PES are compared to rate constants obtained for other ab initio PESs and problems are discussed. A revised PES is presented. Thermal rate constants obtained for the revised PES indicate that an accurate description of the anharmonicity around the transition state is crucial.
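
    Zeroth-order Shepard interpolation, the scheme whose evaluation is accelerated here, is a distance-weighted average of stored energies. A minimal 1D sketch with illustrative made-up energies, not the ZFWCZ PES:

```python
import math

def shepard(r_data, v_data, r, p=4):
    """Zeroth-order Shepard interpolation: a 1/d**p weighted average of energies."""
    num = den = 0.0
    for ri, vi in zip(r_data, v_data):
        d = abs(r - ri)
        if d == 0:                 # on a data point: reproduce it exactly
            return vi
        w = d ** -p
        num += w * vi
        den += w
    return num / den

# Illustrative energies (eV) along a 1D reaction coordinate (made-up numbers)
r_data = [0.8, 1.0, 1.2, 1.6, 2.0]
v_data = [0.9, 0.0, 0.3, 0.8, 1.0]
v = shepard(r_data, v_data, 1.1)   # dominated by the two nearest points
```

    The GPU speed-up described in the paper comes from evaluating the independent weight terms of exactly this kind of sum in parallel; production PESs interpolate local Taylor expansions rather than bare energies.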

  17. The cancellous bone multiscale morphology-elasticity relationship.

    PubMed

    Agić, Ante; Nikolić, Vasilije; Mijović, Budimir

    2006-06-01

    The effective-property relations of cancellous bone are analysed at multiple scales across two aspects: the properties of a representative volume element at the micro scale, and a statistical measure of trabecular trajectory orientation at the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fit in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.

  18. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    NASA Astrophysics Data System (ADS)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine model sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolution. We compare posterior surface fluxes using the data product GLOBALVIEW-CH4 against the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, which have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data record field measurements of methane mixing ratios from 102 sites, and contain sampling frequency irregularities and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower and only fixed surface sites, are performed and compared. Also, we investigate the sensitivity of the error reduction associated with the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix.
Using a higher resolution grid decreased the model-data error at most sites, thereby increasing the information at that site. These different inversions—event-level and interpolated data, higher and lower resolutions—are compared using an ensemble of descriptive and comparative statistics. Analyzing the sensitivity of the inverse model leads to more accurate estimates of the methane source category uncertainty.
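
    The core Bayesian update in such an inversion can be illustrated for a single scalar flux and a single observation. All numbers below are illustrative, not taken from the study:

```python
def scalar_bayes_update(x_prior, var_prior, y_obs, var_obs, k):
    """Posterior mean and variance for y = k*x + noise, with a Gaussian prior on x."""
    gain = var_prior * k / (k * k * var_prior + var_obs)   # Kalman-style gain
    x_post = x_prior + gain * (y_obs - k * x_prior)        # data pulls the prior
    var_post = (1.0 - gain * k) * var_prior                # uncertainty shrinks
    return x_post, var_post

# Illustrative only: prior CH4 flux 500 +/- 100 (Tg/yr); the CTM maps flux to a
# mixing-ratio anomaly with sensitivity k; the observation pulls the flux upward.
x, var = scalar_bayes_update(x_prior=500.0, var_prior=100.0 ** 2,
                             y_obs=5.6, var_obs=0.5 ** 2, k=0.01)
```

    In the real inversion x is a vector of regional fluxes and k becomes the response (Jacobian) matrix computed with GEOS-Chem, but the structure of the update is the same; this is why the CTM resolution used to build the response matrix feeds directly into the posterior error reduction.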

  19. Multilevel Green's function interpolation method for scattering from composite metallic and dielectric objects.

    PubMed

    Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou

    2008-10-01

    A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).

  20. Spatial Interpolation of Reference Evapotranspiration in India: Comparison of IDW and Kriging Methods

    NASA Astrophysics Data System (ADS)

    Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.

    2017-12-01

    In the present study, to understand the spatial distribution characteristics of the ETo over India, spatial interpolation was performed on the means of 32 years (1971-2002) of monthly data from 131 India Meteorological Department stations uniformly distributed over the country by two methods, namely, inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better while developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all the cases, and hence is recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked if direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced comparable results against ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally better than direct kriging. Point ETo values were extended to areal ETo values by IDW, and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.
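
    The station-wise validation described here is essentially leave-one-out cross-validation: each station is predicted from all the others and the errors are pooled. A minimal sketch using IDW as the interpolator and hypothetical station ETo values:

```python
import math

def idw(points, values, target, power=2):
    """Inverse distance weighted estimate at `target`."""
    num = den = 0.0
    for p, v in zip(points, values):
        d = math.dist(p, target)
        if d == 0:
            return v
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(points, values, power=2):
    """Leave-one-out cross-validation: predict each station from all the others."""
    sq = 0.0
    for i in range(len(points)):
        pred = idw(points[:i] + points[i + 1:],
                   values[:i] + values[i + 1:], points[i], power)
        sq += (pred - values[i]) ** 2
    return math.sqrt(sq / len(points))

# Hypothetical monthly mean ETo (mm/day) at five stations
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
eto = [4.1, 4.3, 3.9, 4.4, 4.2]
score = loo_rmse(stations, eto)
```

    The same loop works for kriging: only the call inside `loo_rmse` changes, which is what makes the two methods directly comparable station by station.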

  1. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    NASA Astrophysics Data System (ADS)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines including soil sciences for analysing, predicting and mapping distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with a great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin has led to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has worsened conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorous and boron. Soil samples collected in the field were used for interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remainder for validation of the predicted results. The relationship between validation points and their corresponding estimated values at the same locations was examined by linear regression analysis.
Eight prediction maps generated from two different interpolation methods for soil organic matter, phosphorus, lime and boron parameters were examined based on R2 and RMSE values. The outcomes indicate that RBF performance in predicting lime, organic matter and boron put forth better results than LPI. However, LPI shows better results for predicting phosphorus.
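
    A radial basis function interpolant of the kind compared here is obtained by solving a small linear system for the kernel weights, so that the surface passes through every sample. A self-contained 1D sketch with a Gaussian kernel and made-up soil samples, not the Tuz Lake data:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Solve for weights so the interpolant passes through every sample."""
    phi = lambda d: math.exp(-(eps * d) ** 2)   # Gaussian kernel
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x, eps=1.0):
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

# Hypothetical soil organic matter (%) sampled along a transect (km)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.1, 2.8, 2.5, 1.9]
w = rbf_fit(xs, ys)
```

    LPI differs in that it fits a low-order polynomial within a moving neighbourhood instead of solving one global system, which is one reason the two methods can rank differently across soil properties.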

  2. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout the Peninsular Malaysia, but the implementation for monitoring weather hazard system such as flash flood is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element to the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends on a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results at different phases (pre, onset, and post) that were evaluated showed that tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. 
The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, and can in turn be used to help predict future floods.
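
    The three agreement measures used in this evaluation (MAE, RMSE, and R2) are straightforward to compute. A minimal sketch with hypothetical observed and interpolated PWV values:

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error (penalizes large misses more than MAE)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical PWV (mm): station observations vs. interpolated estimates
obs = [52.0, 55.5, 58.0, 61.2, 63.0]
pred = [51.4, 56.0, 57.1, 61.8, 62.5]
```

    Reporting all three together, as done here, separates typical error size (MAE), sensitivity to outliers (RMSE), and overall correlation (R2).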

  3. Estimated Depth to Ground Water and Configuration of the Water Table in the Portland, Oregon Area

    USGS Publications Warehouse

    Snyder, Daniel T.

    2008-01-01

    Reliable information on the configuration of the water table in the Portland metropolitan area is needed to address concerns about various water-resource issues, especially with regard to potential effects from stormwater injection systems such as UIC (underground injection control) systems that are either existing or planned. To help address these concerns, this report presents the estimated depth-to-water and water-table elevation maps for the Portland area, along with estimates of the relative uncertainty of the maps and seasonal water-table fluctuations. The method of analysis used to determine the water-table configuration in the Portland area relied on water-level data from shallow wells and surface-water features that are representative of the water table. However, the largest source of available well data is water-level measurements in reports filed by well constructors at the time of new well installation, but these data frequently were not representative of static water-level conditions. Depth-to-water measurements reported in well-construction records generally were shallower than measurements by the U.S. Geological Survey (USGS) in the same or nearby wells, although many depth-to-water measurements were substantially deeper than USGS measurements. Magnitudes of differences in depth-to-water measurements reported in well records and those measured by the USGS in the same or nearby wells ranged from -119 to 156 feet with a mean of the absolute value of the differences of 36 feet. One possible cause for the differences is that water levels in many wells reported in well records were not at equilibrium at the time of measurement. As a result, the analysis of the water-table configuration relied on water levels measured during the current study or used in previous USGS investigations in the Portland area. 
Because of the scarcity of well data in some areas, the locations of select surface-water features including major rivers, streams, lakes, wetlands, and springs representative of where the water table is at land surface were used to augment the analysis. Ground-water and surface-water data were combined for use in interpolation of the water-table configuration. Interpolation of the two representations typically used to define water-table position - depth to the water table below land surface and elevation of the water table above a datum - can produce substantially different results and may represent the end members of a spectrum of possible interpolations largely determined by the quantity of recharge and the hydraulic properties of the aquifer. Datasets of depth-to-water and water-table elevation for the current study were interpolated independently based on kriging as the method of interpolation with parameters determined through the use of semivariograms developed individually for each dataset. Resulting interpolations were then combined to create a single, averaged representation of the water-table configuration. Kriging analysis also was used to develop a map of relative uncertainty associated with the values of the water-table position. Accuracy of the depth-to-water and water-table elevation maps is dependent on various factors and assumptions pertaining to the data, the method of interpolation, and the hydrogeologic conditions of the surficial aquifers in the study area. Although the water-table configuration maps generally are representative of the conditions in the study area, the actual position of the water-table may differ from the estimated position at site-specific locations, and short-term, seasonal, and long-term variations in the differences also can be expected. The relative uncertainty map addresses some but not all possible errors associated with the analysis of the water-table configuration and does not depict all sources of uncertainty. 
Depth to water greater than 300 feet in the Portland area is limited to parts of the Tualatin Mountains, the foothills of the Cascade Range, and muc

  4. An adaptive multi-moment FVM approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Hu, Changhong

    2018-04-01

    In this study, a multi-moment finite volume method (FVM) based on block-structured adaptive Cartesian mesh is proposed for simulating incompressible flows. A conservative interpolation scheme following the idea of the constrained interpolation profile (CIP) method is proposed for the prolongation operation of the newly created mesh. A sharp immersed boundary (IB) method is used to model the immersed rigid body. A moving least squares (MLS) interpolation approach is applied for reconstruction of the velocity field around the solid surface. An efficient method for discretization of Laplacian operators on adaptive meshes is proposed. Numerical simulations on several test cases are carried out for validation of the proposed method. For the case of viscous flow past an impulsively started cylinder (Re = 3000 , 9500), the computed surface vorticity coincides with the result of the body-fitted method. For the case of a fast pitching NACA 0015 airfoil at moderate Reynolds numbers (Re = 10000 , 45000), the predicted drag coefficient (CD) and lift coefficient (CL) agree well with other numerical or experimental results. For 2D and 3D simulations of viscous flow past a pitching plate with prescribed motions (Re = 5000 , 40000), the predicted CD, CL and CM (moment coefficient) are in good agreement with those obtained by other numerical methods.

  5. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
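
    The first approximation model described here is an ordinary least-squares polynomial fit. A 1D sketch via the normal equations, with data chosen to lie exactly on a parabola so the recovered coefficients are easy to check:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def quad_fit(xs, ys):
    """Least-squares quadratic y ~ c0 + c1*x + c2*x**2 via the normal
    equations, solved with Cramer's rule on the 3x3 system."""
    n = len(xs)
    s = lambda k: sum(x ** k for x in xs)
    t = lambda k: sum(y * x ** k for x, y in zip(xs, ys))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    b = [t(0), t(1), t(2)]
    D = det3(A)
    coeffs = []
    for j in range(3):               # Cramer's rule: swap in the rhs column
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        coeffs.append(det3(M) / D)
    return coeffs                    # [c0, c1, c2]

# Data constructed to lie exactly on y = 1 + 2x + 3x**2
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [9.0, 2.0, 1.0, 6.0, 17.0]
c = quad_fit(xs, ys)
```

    A kriging interpolant would instead pass exactly through every sample, which is what gives it the flexibility to follow multiple local extrema at the cost of a denser linear solve.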

  6. Scaling a Human Body Finite Element Model with Radial Basis Function Interpolation

    DTIC Science & Technology

    Human body models are currently used to evaluate the body's response to a variety of threats to the Soldier. The ability to adjust the size of human body models is currently limited because of the complex shape changes that are required. Here, a radial basis function interpolation method is used to morph the shape of an existing finite element mesh. Tools are developed and integrated into the Blender computer graphics software to assist with
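
    Radial basis function morphing interpolates a few landmark displacements to all mesh nodes. A deliberately tiny 1D sketch with a linear kernel and hypothetical landmarks; the actual method operates on full 3D meshes with many landmarks:

```python
def rbf_morph_1d(landmarks, disps, nodes):
    """Morph 1D mesh nodes by interpolating two landmark displacements with a
    linear RBF phi(d) = |d|; the 2x2 weight system is solved in closed form."""
    (xa, xb), (da, db) = landmarks, disps
    r = abs(xb - xa)
    wa, wb = db / r, da / r      # from A = [[0, r], [r, 0]], A @ [wa, wb] = [da, db]
    def u(x):                    # interpolated displacement field
        return wa * abs(x - xa) + wb * abs(x - xb)
    return [x + u(x) for x in nodes]

# Hypothetical 1D "body": hold the landmark at 0 fixed, move the one at 10 to 12
nodes = [0.0, 2.5, 5.0, 7.5, 10.0]
morphed = rbf_morph_1d((0.0, 10.0), (0.0, 2.0), nodes)
```

    The appeal for mesh morphing is that the displacement field is smooth and defined everywhere, so interior nodes follow the landmarks without remeshing.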

  7. Surface filling-in and contour interpolation contribute independently to Kanizsa figure formation.

    PubMed

    Chen, Siyi; Glasauer, Stefan; Müller, Hermann J; Conci, Markus

    2018-04-30

    To explore mechanisms of object integration, the present experiments examined how completion of illusory contours and surfaces modulates the sensitivity of localizing a target probe. Observers had to judge whether a briefly presented dot probe was located inside or outside the region demarcated by inducer elements that grouped to form variants of an illusory, Kanizsa-type figure. From the resulting psychometric functions, we determined observers' discrimination thresholds as a sensitivity measure. Experiment 1 showed that sensitivity was systematically modulated by the amount of surface and contour completion afforded by a given configuration. Experiments 2 and 3 presented stimulus variants that induced an (occluded) object without clearly defined bounding contours, which gave rise to a relative sensitivity increase for surface variations on their own. Experiments 4 and 5 were performed to rule out that these performance modulations were simply attributable to variable distances between critical local inducers or to costs in processing an interrupted contour. Collectively, the findings provide evidence for a dissociation between surface and contour processing, supporting a model of object integration in which completion is instantiated by feedforward processing that independently renders surface filling-in and contour interpolation and a feedback loop that integrates these outputs into a complete whole. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. [Spatial pattern of land surface dead combustible fuel load in Huzhong forest area in Great Xing'an Mountains].

    PubMed

    Liu, Zhi-Hua; Chang, Yu; Chen, Hong-Wei; Zhou, Rui; Jing, Guo-Zhi; Zhang, Hong-Xin; Zhang, Chang-Meng

    2008-03-01

    By using geo-statistics and based on a time-lag classification standard, a comparative study was made of the land surface dead combustible fuels in Huzhong forest area in the Great Xing'an Mountains. The results indicated that the first level land surface dead combustible fuel, i.e., 1 h time-lag dead fuel, presented stronger spatial auto-correlation, with an average of 762.35 g x m(-2), contributing 55.54% of the total load. Its determining factors were species composition and stand age. The second and third levels of land surface dead combustible fuel, i.e., 10 h and 100 h time-lag dead fuels, had a sum of 610.26 g x m(-2), and presented weaker spatial auto-correlation than the 1 h time-lag dead fuel. Their determining factor was the disturbance history of the forest stand. The complexity and heterogeneity of the factors determining the quality and quantity of forest land surface dead combustible fuels were the main reasons for the relatively inaccurate interpolation. However, field survey data coupled with geo-statistics could easily and accurately interpolate the spatial pattern of forest land surface dead combustible fuel loads, and indirectly provide a practical basis for forest management.

  9. Multivariate optimum interpolation of surface pressure and surface wind over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.; Baker, W. E.; Nestler, M. S.

    1984-01-01

    The present multivariate analysis method for surface pressure and winds incorporates ship wind observations into the analysis of surface pressure. For the specific case of 0000 GMT on February 3, 1979, the additional data resulted in a global rms difference of 0.6 mb; individual maxima as large as 5 mb occurred over the North Atlantic and East Pacific Oceans. These differences are noted to be smaller than the analysis increments to the first-guess fields.

  10. Heat waves measured with MODIS land surface temperature data predict changes in avian community structure

    Treesearch

    Thomas P. Albright; Anna M. Pidgeon; Chadwick D. Rittenhouse; Murray K. Clayton; Curtis H. Flather; Patrick D. Culbert; Volker C. Radeloff

    2011-01-01

    Heat waves are expected to become more frequent and severe as climate changes, with unknown consequences for biodiversity. We sought to identify ecologically-relevant broad-scale indicators of heat waves based on MODIS land surface temperature (LST) and interpolated air temperature data and assess their associations with avian community structure. Specifically, we...

  11. Diseases Burden of Chronic Obstructive Pulmonary Disease (COPD) Attributable to Ground-Level Ozone in Thailand: Estimates Based on Surface Monitoring Measurements Data.

    PubMed

    Pinichka, Chayut; Bundhamcharoen, Kanitta; Shibuya, Kenji

    2015-05-14

    Ambient ozone (O3) pollution has increased globally since preindustrial times. At present, O3 is one of the major air pollution concerns in Thailand, and is associated with health impacts such as chronic obstructive pulmonary disease (COPD). The objective of our study is to estimate the burden of disease attributable to O3 in 2009 in Thailand based on empirical evidence. We estimated disability-adjusted life years (DALYs) attributable to O3 using the comparative risk assessment framework of the Global Burden of Diseases (GBD) study. We quantified the population attributable fraction (PAF), integrated from Geographic Information Systems (GIS)-based spatial interpolation, the population distribution of exposure, and the exposure-response coefficient, to spatially characterize exposure to ambient O3 pollution on a national scale. The exposure distribution was derived from a GIS-based spatial interpolation O3 exposure model using Pollution Control Department Thailand (PCD) surface air pollution monitoring network sources. Relative risk (RR) and the population attributable fraction (PAF) were determined using health impact function estimates for O3. The PAF (%) of COPD attributable to O3 by region was approximately: Northern=2.1, Northeastern=7.1, Central=9.6, Eastern=1.75, Western=1.47 and Southern=1.74. The total COPD burden attributable to O3 for Thailand in 2009 was 61,577 DALYs, approximately 0.6% of the total DALYs in Thailand (male: 48,480 DALYs; female: 13,097 DALYs). This study provides the first empirical evidence on the health burden (DALYs) attributable to O3 pollution in Thailand. Varying across regions, the disease burden attributable to O3 was 0.6% of the total national burden in 2009. Better empirical data on specific local sites, e.g. urban and rural areas, alternative exposure assessment, e.g. land use regression (LUR), and a local concentration-response coefficient are required for future studies in Thailand.
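
    The attributable-burden arithmetic behind such estimates follows the standard population attributable fraction. A sketch using Levin's formula with illustrative inputs, not the study's region-specific values:

```python
def paf(p_exposed, rr):
    """Levin's population attributable fraction for a single exposure level."""
    excess = p_exposed * (rr - 1.0)
    return excess / (excess + 1.0)

def attributable_dalys(p_exposed, rr, total_dalys):
    """Burden attributable to the exposure: PAF times the total disease burden."""
    return paf(p_exposed, rr) * total_dalys

# Illustrative numbers only: 40% of the population exposed, relative risk 1.15,
# applied to a hypothetical 100,000 DALYs of COPD burden
frac = paf(0.4, 1.15)
burden = attributable_dalys(0.4, 1.15, 100000.0)
```

    In the study the exposure distribution feeding this calculation comes from the GIS-interpolated O3 surface rather than a single exposure fraction, but the PAF logic is the same.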

  12. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 x 2 electronically diabatic Hamiltonian matrix with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at the first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH+H(2)-->H(2)O+H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
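
    First-order Shepard interpolation of the kind used here averages local Taylor expansions rather than bare energies, and is therefore exact for any potential the expansions capture, e.g. a linear one. A 1D sketch of that idea, not the full MCSI diabatic machinery:

```python
import math

def shepard_taylor1(xs, vs, gs, x, p=4):
    """Shepard interpolation of first-order Taylor expansions (1D): each data
    point contributes V_i + g_i*(x - x_i), weighted by 1/d**p."""
    num = den = 0.0
    for xi, vi, gi in zip(xs, vs, gs):
        d = abs(x - xi)
        if d == 0:
            return vi
        w = d ** -p
        num += w * (vi + gi * (x - xi))   # local first-order expansion
        den += w
    return num / den

# Values and gradients sampled from the linear potential V(x) = 2x + 1,
# which first-order Shepard reproduces exactly everywhere
xs = [-1.0, 0.0, 1.0]
vs = [2.0 * xi + 1.0 for xi in xs]
gs = [2.0, 2.0, 2.0]
v = shepard_taylor1(xs, vs, gs, 0.3)
```

    Truncating the expansions at first order is exactly what removes the need for electronic structure Hessians; the price is reduced accuracy in strongly curved regions, which the paper quantifies against reference energies.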

  13. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced, but their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel methods, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods discussed in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage on the whole, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and thus less efficient. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations show a certain superiority; considering processing efficiency, however, the asymmetrical interpolations are better.

  14. Groundwater Flow in Mecklenburg-Western Pomerania - Geohydraulic Modeling with Detrended Kriging

    NASA Astrophysics Data System (ADS)

    Hilgert, Toralf; Hennig, Heiko

    2017-03-01

    Groundwater heads were mapped for the entire State of Mecklenburg-Western Pomerania by applying a Detrended Kriging method based on a numerical geohydraulic model. The general groundwater flow system (trend surface) was represented by a two-dimensional horizontal flow model. Thus deviations of observed groundwater heads from simulated groundwater heads are no longer subject to a regional trend and can be interpolated by means of Ordinary Kriging. Subsequently, the groundwater heads were obtained from the sum of the simulated trend surface and interpolated residuals. Furthermore, the described procedure allowed a plausibility check of observed groundwater heads by comparing them to results of the hydraulic model. If significant deviations were seen, the observation wells could be allocated to different aquifers. The final results are two hydraulically established groundwater head distributions - one for the regional main aquifer and one for the upper aquifer which may differ locally from the main aquifer.
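
    The detrending recipe described above — subtract a modeled trend from the observations, interpolate the residuals with Ordinary Kriging, then add the trend back — can be sketched as follows. The exponential variogram, its parameters, and all heads and coordinates are illustrative assumptions, not values from the study.

```python
import numpy as np

def ordinary_kriging(xy_obs, resid, xy_new, sill=1.0, rng=500.0):
    """Ordinary Kriging of (detrended) residuals with an assumed
    exponential variogram gamma(h) = sill * (1 - exp(-h / rng))."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / rng))
    n = len(xy_obs)
    H = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(H); A[n, n] = 0.0
    out = np.empty(len(xy_new))
    for k, x in enumerate(xy_new):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_obs - x, axis=1))
        w = np.linalg.solve(A, b)    # kriging weights + Lagrange multiplier
        out[k] = w[:n] @ resid
    return out

# detrended kriging: head = simulated trend + kriged residual (toy numbers)
obs_xy = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
obs_head = np.array([12.3, 10.1, 11.7])
trend_at_obs = np.array([12.0, 10.5, 11.5])          # from the flow model
resid = obs_head - trend_at_obs
trend_at_new = np.array([11.2])                      # model trend at target
head = trend_at_new + ordinary_kriging(obs_xy, resid, np.array([[400.0, 400.0]]))
print(head)
```

    Because Ordinary Kriging honors the data exactly, the reconstructed head coincides with the observed head at each observation well.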

  15. Spatial and spectral interpolation of ground-motion intensity measure observations

    USGS Publications Warehouse

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
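
    The conditioning step at the heart of this approach is the standard conditional multivariate normal: with the joint vector partitioned into observed and unobserved components, the conditional mean is mu2 + S21 S11^(-1) (y - mu1) and the conditional covariance S22 - S21 S11^(-1) S12. A minimal sketch with a hypothetical three-site correlation matrix:

```python
import numpy as np

def mvn_condition(mu, cov, idx_obs, y_obs):
    """Conditional mean and covariance of a multivariate normal, given
    that the components listed in idx_obs are observed to equal y_obs."""
    n = len(mu)
    idx_un = [i for i in range(n) if i not in idx_obs]
    S11 = cov[np.ix_(idx_obs, idx_obs)]
    S21 = cov[np.ix_(idx_un, idx_obs)]
    S22 = cov[np.ix_(idx_un, idx_un)]
    K = S21 @ np.linalg.inv(S11)             # regression ("kriging") weights
    mu_c = mu[idx_un] + K @ (y_obs - mu[idx_obs])
    cov_c = S22 - K @ S21.T
    return mu_c, cov_c

# hypothetical 3-site example: log-IM observed at sites 0, 1; site 2 estimated
mu = np.array([0.0, 0.0, 0.0])               # prior (model-predicted) means
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])            # assumed spatial correlation
m, c = mvn_condition(mu, cov, [0, 1], np.array([0.8, 0.4]))
print(m, c)
```

    The remaining variance c quantifies the reduced uncertainty at the unobserved site after conditioning on nearby observations.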

  16. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  17. Cones in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Hantzsche, W.; Wendt, H.

    1947-01-01

    In the case of cones in axially symmetric flow at supersonic velocity, adiabatic compression takes place between the shock wave and the surface of the cone. Interpolation curves between shock polars and the surface are therefore necessary for the complete understanding of this type of flow. They are given in the present report by graphical-numerical integration of the differential equation for all cone angles and airspeeds.

  18. Impacts of urban and industrial development on Arctic land surface temperature in Lower Yenisei River Region.

    NASA Astrophysics Data System (ADS)

    Li, Z.; Shiklomanov, N. I.

    2015-12-01

    Urbanization and industrial development have significant impacts on arctic climate, which in turn controls settlement patterns and socio-economic processes. In this study we analyzed the anthropogenic influences on regional land surface temperature in the Lower Yenisei River Region of the Russian Arctic. The study area covers two consecutive Landsat scenes and includes three major cities: Norilsk, Igarka and Dudingka. The Norilsk industrial region is the largest producer of nickel and palladium in the world, and Igarka and Dudingka are important ports for shipping. We constructed a spatio-temporal interpolated temperature model by including 1 km MODIS LST, field-measured climate data, Modern Era Retrospective-analysis for Research and Applications (MERRA) data, a DEM, Landsat NDVI and Landsat Land Cover. These spatial data have varying resolutions and coverage in both time and space. We analyzed their relationships and created a monthly spatio-temporal interpolated surface temperature model at 1 km resolution for 1980 to 2010. The temperature model was then used to examine the characteristic seasonal LST signatures related to several representative assemblages of Arctic urban and industrial infrastructure, in order to quantify anthropogenic influence on regional surface temperature.

  19. A Computational Approach for Probabilistic Analysis of LS-DYNA Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2010-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed in the 1960s during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. Because of the computational cost, these tools are often used to evaluate specific conditions and rarely used for statistical analysis. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. For this problem, response surface models are used to predict the system time responses to a water landing as a function of capsule speed, direction, attitude, water speed, and water direction. Furthermore, these models can also be used to ascertain the adequacy of the design in terms of probability measures. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
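
    The abstract does not specify the surrogate form, but a common response-surface choice for problems like this is a quadratic polynomial fit to the simulation runs by least squares. A generic sketch with made-up coded inputs (the factor names and data are hypothetical, not from the LS-DYNA study):

```python
import numpy as np

def quad_response_surface(X, y):
    """Least-squares quadratic response surface in d factors:
    intercept + linear terms + all pairwise/quadratic terms."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    def predict(x):
        x = np.atleast_2d(np.asarray(x, float))
        c = [np.ones(len(x))] + [x[:, i] for i in range(d)] \
            + [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(c) @ beta
    return predict

# hypothetical: a peak response vs. two coded impact factors
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))         # coded simulation inputs
y = 5 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 3 * X[:, 1] ** 2
f = quad_response_surface(X, y)
print(f([[0.2, -0.3]]))
```

    Once fitted, the surrogate can be evaluated millions of times for Monte Carlo probability estimates at negligible cost compared to one LS-DYNA run.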

  20. Analysis of rainfall distribution in Kelantan river basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Ros, Faizah; Tosaka, Hiroyuki

    2018-03-01

    Using rain gauge data on their own as input carries great uncertainty regarding runoff estimation, especially when the area is large and the rainfall is measured and recorded at irregularly spaced gauging stations. Hence, spatial interpolation is the key to obtaining a continuous and orderly rainfall distribution at unknown points, to serve as input to the rainfall-runoff processes for distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river. Thus, a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using inverse-distance weighting (IDW), inverse-distance and elevation weighting (IDEW) methods and the average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of these interpolated datasets was examined using cross-validation assessment.
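
    IDW is straightforward; IDEW formulations vary, so the blended weight below (mixing a horizontal-distance term with an elevation-difference term via an assumed factor alpha) is only one plausible sketch, not the paper's exact scheme.

```python
import numpy as np

def idw(xy, z, x0, p=2.0):
    """Inverse-distance weighting: weights ~ 1/d^p."""
    d = np.linalg.norm(xy - x0, axis=1)
    if np.any(d == 0):                 # target coincides with a station
        return z[np.argmin(d)]
    w = d ** -p
    return np.sum(w * z) / np.sum(w)

def idew(xy, elev, z, x0, e0, p=2.0, alpha=0.5):
    """IDEW sketch: blend a horizontal-distance weight with an
    elevation-difference weight (alpha is an assumed mixing factor)."""
    d = np.linalg.norm(xy - x0, axis=1)
    de = np.abs(elev - e0)
    w = alpha * d ** -p + (1 - alpha) * (de + 1e-9) ** -p
    return np.sum(w * z) / np.sum(w)

# toy example: three stations, one grid point
xy = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
z = np.array([5.0, 3.0, 4.0])          # daily rainfall, mm
print(idw(xy, z, np.array([1.0, 1.0])))
```

    With alpha = 1 the IDEW weight reduces to plain IDW, which gives a simple consistency check when tuning the mixing factor.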

  1. Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen

    2017-04-01

    Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to be considered if sustainable agricultural and rural development is to be promoted. One example is the lack of information of pesticide drift soil deposition, which can support pesticide risk assessment for soil, surface water, bystanders and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data of drift deposition of a pesticide surrogate, Uranine tracer, were obtained within one of the highest potato producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques i.e. Thiessen, inverse distance squared weighting, co-kriging, pair-copulas and drift curves depending on distance and wind speed were tested and optimized. Results of the best performing spatial interpolation methods, suitable curves to assess mean relative drift and implications on risk assessment studies will be presented.

  2. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  3. The Surface Pressure Response of a NACA 0015 Airfoil Immersed in Grid Turbulence. Volume 1; Characteristics of the Turbulence

    NASA Technical Reports Server (NTRS)

    Bereketab, Semere; Wang, Hong-Wei; Mish, Patrick; Devenport, William J.

    2000-01-01

    Two grids have been developed for the Virginia Tech 6 ft x 6 ft Stability wind tunnel for the purpose of generating homogeneous isotropic turbulent flows for the study of unsteady airfoil response. The first, a square bi-planar grid with a 12" mesh size and an open area ratio of 69.4%, was mounted in the wind tunnel contraction. The second grid, a metal weave with a 1.2 in. mesh size and an open area ratio of 68.2%, was mounted in the tunnel test section. Detailed statistical and spectral measurements of the turbulence generated by the two grids are presented for wind tunnel free stream speeds of 10, 20, 30 and 40 m/s. These measurements show the flows to be closely homogeneous and isotropic. Both grids produce flows with a turbulence intensity of about 4% at the location planned for the airfoil leading edge. Turbulence produced by the large grid has an integral scale of some 3.2 inches here. Turbulence produced by the small grid is an order of magnitude smaller. For wavenumbers below the upper limit of the inertial subrange, the spectra and correlations measured with both grids at all speeds can be represented using the von Karman interpolation formula with a single velocity and length scale. The spectra may be accurately represented over the entire wavenumber range by a modification of the von Karman interpolation formula that includes the effects of dissipation. These models are most accurate at the higher speeds (30 and 40 m/s).
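
    The one-dimensional longitudinal von Karman spectrum referred to above is commonly written with a 1.339 normalization factor so that L is the integral length scale and the spectrum integrates to the velocity variance. A sketch of that common form (the sample sigma2 and L values are illustrative, not measured ones):

```python
import numpy as np

def von_karman_f11(k1, sigma2, L):
    """One-dimensional longitudinal von Karman spectrum,
    F11(k1) = (sigma2 * L / pi) / (1 + (1.339 L k1)^2)^(5/6).
    The 1.339 factor makes L the integral length scale, so the
    spectrum integrates over all k1 to the variance sigma2."""
    a = 1.339 * L * np.asarray(k1, dtype=float)
    return (sigma2 * L / np.pi) / (1.0 + a ** 2) ** (5.0 / 6.0)

k = np.logspace(-2, 3, 6)             # wavenumbers, rad/m
print(von_karman_f11(k, 1.0, 0.08))   # sigma2 = 1, L = 0.08 m (~3.2 in)
```

    At high wavenumber the formula rolls off as k^(-5/3), matching the inertial subrange; the dissipation-range modification mentioned in the abstract multiplies in an additional exponential cutoff.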

  4. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate the accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using the fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to make the convolution with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method can get the same accuracy with a shorter computing time.
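
    A hypothetical discrete rendering of the convolution-plus-superposition idea (not the paper's TIC implementation, and with made-up basis response and waveform): convolve the basis impulse response with the current of one half-cycle, then superpose earlier bipolar half-cycles with alternating sign.

```python
import numpy as np

def full_waveform_response(h, i_wave, dt, half_period, n_half=4):
    """Sketch: y = response to one positive half-cycle (discrete
    convolution of basis response h with current i_wave), then add
    contributions of earlier half-cycles with alternating sign."""
    y = np.convolve(h, i_wave)[: len(h)] * dt   # one half-cycle response
    total = np.zeros_like(y)
    for m in range(n_half):                     # m half-periods earlier
        shift = m * half_period
        if shift >= len(y):
            break
        total[: len(y) - shift] += (-1) ** m * y[shift:]
    return total

t = np.arange(200) * 1e-5                        # 10 us sampling (assumed)
h = np.exp(-t / 4e-4)                            # assumed basis response
i_wave = np.minimum(np.arange(200) / 50.0, 1.0)  # ramp-on current half-cycle
resp = full_waveform_response(h, i_wave, 1e-5, half_period=100)
print(resp[:3])
```

    In the paper the basis response is reconstructed on a logarithmic time grid with the hamlogsinc function before this convolution step; here a plain linear grid stands in for it.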

  5. Latent structure analysis of the process variables and pharmaceutical responses of an orally disintegrating tablet.

    PubMed

    Hayashi, Yoshihiro; Oshima, Etsuko; Maeda, Jin; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2012-01-01

    A multivariate statistical technique was applied to the design of an orally disintegrating tablet and to clarify the causal correlation among variables of the manufacturing process and pharmaceutical responses. Orally disintegrating tablets (ODTs) composed mainly of mannitol were prepared via the wet-granulation method using crystal transition from the δ to the β form of mannitol. Process parameters (water amounts (X(1)), kneading time (X(2)), compression force (X(3)), and amounts of magnesium stearate (X(4))) were optimized using a nonlinear response surface method (RSM) incorporating a thin plate spline interpolation (RSM-S). The results of a verification study revealed that the experimental responses, such as tensile strength and disintegration time, coincided well with the predictions. A latent structure analysis of the pharmaceutical formulations of the tablet performed using a Bayesian network led to the clear visualization of a causal connection among variables of the manufacturing process and tablet characteristics. The quantity of β-mannitol in the granules (Q(β)) was affected by X(2) and influenced all granule properties. The specific surface area of the granules was affected by X(1) and Q(β) and had an effect on all tablet characteristics. Moreover, the causal relationships among the variables were clarified by inferring conditional probability distributions. These techniques provide a better understanding of the complicated latent structure among variables of the manufacturing process and tablet characteristics.
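
    The thin plate spline interpolation underlying RSM-S can be sketched for two factors: radial terms phi(r) = r^2 log r plus an affine part, solved as one linear system. The coded factor values and responses below are hypothetical, chosen only to show that the surface passes through the data.

```python
import numpy as np

def tps_fit(X, y):
    """Thin plate spline interpolator in two factors:
    s(x) = sum_i w_i * phi(|x - X_i|) + a0 + a1*x1 + a2*x2,
    with phi(r) = r^2 log r and the usual side conditions."""
    n = len(X)
    r = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r > 0, r ** 2 * np.log(r), 0.0)
    P = np.hstack([np.ones((n, 1)), X])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(3)]))
    w, a = coef[:n], coef[n:]
    def predict(x):
        x = np.asarray(x, float)
        rr = np.linalg.norm(X - x, axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            phi = np.where(rr > 0, rr ** 2 * np.log(rr), 0.0)
        return w @ phi + a[0] + a[1:] @ x
    return predict

# hypothetical coded process factors (e.g. water amount, kneading time)
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
y = np.array([1.0, 2.0, 1.5, 3.0, 1.8])        # measured response
f = tps_fit(X, y)
print(f([1.0, 1.0]))   # reproduces the data value 3.0
```

    Unlike a polynomial response surface, this interpolant honors every experimental point exactly, which is what makes the nonlinear RSM attractive for optimization.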

  6. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    PubMed

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.

  7. INTERPOLATING VANCOUVER'S DAILY AMBIENT PM 10 FIELD

    EPA Science Inventory

    In this article we develop a spatial predictive distribution for the ambient space- time response field of daily ambient PM10 in Vancouver, Canada. Observed responses have a consistent temporal pattern from one monitoring site to the next. We exploit this feature of the field b...

  8. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to comprehensively characterize the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure, and then the integrals of the equivalent source strengths are solved by an iterative process and further used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method, where the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains, and that it obtains more accurate results than the method based on pressure measurements.

  9. WRF-Fire: coupled weather-wildland fire modeling with the weather research and forecasting model

    Treesearch

    Janice L. Coen; Marques Cameron; John Michalakes; Edward G. Patton; Philip J. Riggan; Kara M. Yedinak

    2012-01-01

    A wildland fire behavior module (WRF-Fire) was integrated into the Weather Research and Forecasting (WRF) public domain numerical weather prediction model. The fire module is a surface fire behavior model that is two-way coupled with the atmospheric model. Near-surface winds from the atmospheric model are interpolated to a finer fire grid and used, with fuel properties...

  10. Elliptic surface grid generation in three-dimensional space

    NASA Technical Reports Server (NTRS)

    Kania, Lee

    1992-01-01

    A methodology for surface grid generation in three dimensional space is described. The method solves a Poisson equation for each coordinate on arbitrary surfaces using successive line over-relaxation. The complete surface curvature terms were discretized and retained within the nonhomogeneous term in order to preserve surface definition; there is no need for conventional surface splines. Control functions were formulated to permit control of grid orthogonality and spacing. A method for interpolation of control functions into the domain was devised which permits their specification not only at the surface boundaries but within the interior as well. An interactive surface generation code which makes use of this methodology is currently under development.

  11. A Modified Kriging Method to Interpolate the Soil Moisture Measured by Wireless Sensor Network with the Aid of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.

    2015-12-01

    In recent years, wireless sensor networks (WSN) have emerged to collect Earth observation data at relatively low cost and labor load, but their observations are still point data. To learn the spatial distribution of a land surface parameter, interpolating the point data is necessary. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km × 5 km study area located in the middle reaches of the Heihe River, western China. As SM is related to many factors such as topography, soil type and vegetation, even the WSN observation grid is not dense enough to reflect the SM distribution pattern. Our idea is to revise the traditional Kriging algorithm, introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. Thus, the new Extended-Kriging algorithm operates on the combined spatial and spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from a SM map derived from airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the variation of land SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy. Although a reasonable accuracy is achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in WSN measurements may also need to be controlled in our further work.

  12. Mapping Atmospheric Moisture Climatologies across the Conterminous United States

    PubMed Central

    Daly, Christopher; Smith, Joseph I.; Olson, Keith V.

    2015-01-01

    Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026

  13. Apparatus and method for measuring the thickness of a coating

    DOEpatents

    Carlson, Nancy M.; Johnson, John A.; Tow, David M.; Walter, John B.

    2002-01-01

    An apparatus and method for measuring the thickness of a coating adhered to a substrate. An electromagnetic acoustic transducer is used to induce surface waves into the coating. The surface waves have a selected frequency and a fixed wavelength. Interpolation is used to determine the frequency of surface waves that propagate through the coating with the least attenuation. The phase velocity of the surface waves having this frequency is then calculated. The phase velocity is compared to known phase velocity/thickness tables to determine the thickness of the coating.
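
    The final step described above is just phase velocity = frequency × wavelength, followed by a table lookup. A sketch with hypothetical numbers (the wavelength, best-transmission frequency, and calibration table are invented for illustration):

```python
import numpy as np

# phase velocity from the best-transmission frequency (all values assumed)
wavelength = 0.5e-3                 # fixed wavelength set by the transducer, m
f_best = 5.2e6                      # frequency of least attenuation, Hz
v_phase = f_best * wavelength       # phase velocity = f * lambda

# assumed calibration table: phase velocity (m/s) vs. coating thickness (m)
v_tab = np.array([2500.0, 2600.0, 2700.0, 2800.0])
t_tab = np.array([400e-6, 300e-6, 200e-6, 100e-6])
thickness = np.interp(v_phase, v_tab, t_tab)   # linear table interpolation
print(v_phase, thickness)
```

    np.interp requires the table abscissa (here the velocities) to be increasing; a measured velocity between table entries is interpolated linearly, mirroring the patent's comparison against known velocity/thickness tables.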

  14. The ARM Best Estimate Station-based Surface (ARMBESTNS) Data set

    DOE Data Explorer

    Qi,Tang; Xie,Shaocheng

    2015-08-06

    The ARM Best Estimate Station-based Surface (ARMBESTNS) data set merges together key surface measurements from the Southern Great Plains (SGP) sites. It is a twin data product of the ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set. Unlike the 2DGRID data set, the STNS data are reported at the original site locations and show the original information, except for the interpolation over time. Therefore, users have the flexibility to process the data with the approach more suitable for their applications.

  15. Ge growth on vicinal si(001) surfaces: island's shape and pair interaction versus miscut angle.

    PubMed

    Persichetti, L; Sgarlata, A; Fanfoni, M; Balzarotti, A

    2011-10-01

    A complete description of Ge growth on vicinal Si(001) surfaces is provided. The distinctive mechanisms of the epitaxial growth process on vicinal surfaces are clarified from the very early stages of Ge deposition to the nucleation of 3D islands. By interpolating high-resolution scanning tunneling microscopy measurements with continuum elasticity modeling, we assess the dependence of island's shape and elastic interaction on the substrate misorientation. Our results confirm that vicinal surfaces offer an additional degree of control over the shape and symmetry of self-assembled nanostructures.

  16. Using optimal interpolation to assimilate surface measurements and satellite AOD for ozone and PM2.5: A case study for July 2011.

    PubMed

    Tang, Youhua; Chai, Tianfeng; Pan, Li; Lee, Pius; Tong, Daniel; Kim, Hyun-Cheol; Chen, Weiwei

    2015-10-01

    We employed an optimal interpolation (OI) method to assimilate AIRNow ozone/PM2.5 and MODIS (Moderate Resolution Imaging Spectroradiometer) aerosol optical depth (AOD) data into the Community Multi-scale Air Quality (CMAQ) model to improve the ozone and total aerosol concentration for the CMAQ simulation over the contiguous United States (CONUS). AIRNow data assimilation was applied to the boundary layer, and MODIS AOD data were used to adjust total column aerosol. Four OI cases were designed to examine the effects of uncertainty setting and assimilation time; two of these cases used uncertainties that varied in time and location, or "dynamic uncertainties." More frequent assimilation and higher model uncertainties pushed the modeled results closer to the observation. Our comparison over a 24-hr period showed that ozone and PM2.5 mean biases could be reduced from 2.54 ppbV to 1.06 ppbV and from -7.14 µg/m³ to -0.11 µg/m³, respectively, over CONUS, while their correlations were also improved. Comparison to DISCOVER-AQ 2011 aircraft measurement showed that surface ozone assimilation applied to the CMAQ simulation improves regional low-altitude (below 2 km) ozone simulation. This paper described an application of using optimal interpolation method to improve the model's ozone and PM2.5 estimation using surface measurement and satellite AOD. It highlights the usage of the operational AIRNow data set, which is available in near real time, and the MODIS AOD. With a similar method, we can also use other satellite products, such as the latest VIIRS products, to improve PM2.5 prediction.
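
    The OI analysis step is the standard update xa = xb + K (y - H xb) with gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A minimal sketch with a hypothetical three-cell background and one surface observation (all covariances and values invented for illustration):

```python
import numpy as np

def oi_update(xb, B, y, H, R):
    """Optimal interpolation analysis:
    xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^(-1)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# 3 grid cells, 1 surface observation at cell 0 (hypothetical numbers)
xb = np.array([40.0, 42.0, 45.0])            # background ozone, ppbV
B = np.array([[4.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 4.0]])              # background error covariance
H = np.array([[1.0, 0.0, 0.0]])              # observation operator
R = np.array([[1.0]])                        # observation error variance
y = np.array([36.0])
print(oi_update(xb, B, y, H, R))
```

    The off-diagonal terms of B spread the single observation's correction to neighboring cells, which is how the "dynamic uncertainties" in the study control how far and how strongly observations pull the model.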

  17. A new solution-adaptive grid generation method for transonic airfoil flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.; Holst, T. L.

    1981-01-01

    The clustering algorithm is controlled by a second-order, ordinary differential equation which uses the airfoil surface density gradient as a forcing function. The solution to this differential equation produces a surface grid distribution which is automatically clustered in regions with large gradients. The interior grid points are established from this surface distribution by using an interpolation scheme which is fast and retains the desirable properties of the original grid generated from the standard elliptic equation approach.
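    A simplified 1-D analogue of this solution-adaptive clustering can be sketched by equidistributing a gradient-based weight, rather than solving the paper's actual second-order ODE; the effect (grid points clustered where gradients are large) is the same in spirit. Function and parameter names are illustrative:

```python
import math

def cluster_grid(f, a, b, n, samples=1000):
    """Place n+1 grid points on [a, b], clustered where |f'| is large.

    Equidistributes the weight w(x) = 1 + |f'(x)| (central differences),
    so grid spacing shrinks where the gradient of f is steep. A sketch
    of the clustering idea, not the paper's ODE-based algorithm.
    """
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    w = [1.0 + abs((f(min(x + h, b)) - f(max(x - h, a))) / (2 * h))
         for x in xs]
    cum = [0.0]                                # trapezoid-rule integral of w
    for i in range(samples):
        cum.append(cum[-1] + 0.5 * (w[i] + w[i + 1]) * h)
    grid, j = [], 0
    for k in range(n + 1):                     # invert the cumulative map
        target = cum[-1] * k / n
        while j < samples - 1 and cum[j + 1] < target:
            j += 1
        t = (target - cum[j]) / (cum[j + 1] - cum[j])
        grid.append(xs[j] + t * h)
    return grid

g = cluster_grid(math.tanh, -3.0, 3.0, 10)     # steep gradient near x = 0
```

For tanh the resulting spacing is roughly halved near the origin, where the "surface gradient" is largest, relative to the ends of the interval.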

  18. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

    In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, and humidity, vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary for characterizing and predicting the hydrologic response to them. This is always true, but it is most critical during large storms, when the input to the stream system from rain and snowmelt creates the potential for flooding. Beyond such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement for the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing so to support an accurate determination of precipitation phase for assessing catchment hydrologic response to the storm.
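    The elevational detrending idea can be sketched in three steps: fit a value-elevation lapse rate by least squares, interpolate the residuals spatially, and re-add the trend at the target elevation. In this sketch inverse-distance weighting stands in for the kriging used in the study, and the station tuples are synthetic:

```python
def detrended_interp(stations, target_xy, target_elev, power=2.0):
    """Elevationally detrended spatial interpolation (sketch).

    `stations` is a list of (x, y, elev, value) tuples. A linear
    value-vs-elevation trend is fitted by least squares and removed;
    the residuals are interpolated to `target_xy` (inverse-distance
    weighting stands in for kriging here); the trend is then re-added
    at the target elevation.
    """
    n = len(stations)
    zs = [s[2] for s in stations]
    vs = [s[3] for s in stations]
    zbar, vbar = sum(zs) / n, sum(vs) / n
    num = sum((z - zbar) * (v - vbar) for z, v in zip(zs, vs))
    den = sum((z - zbar) ** 2 for z in zs) or 1.0
    slope = num / den                      # lapse rate (value per m elevation)
    intercept = vbar - slope * zbar
    resid = [v - (intercept + slope * z) for z, v in zip(zs, vs)]
    wsum = rsum = 0.0                      # IDW of the residual field
    for (x, y, _, _), r in zip(stations, resid):
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0.0:
            wsum, rsum = 1.0, r
            break
        wgt = d2 ** (-power / 2.0)
        wsum += wgt
        rsum += wgt * r
    return intercept + slope * target_elev + rsum / wsum

stations = [(0, 0, 1000, 10.0), (1, 0, 1500, 14.0),
            (0, 1, 2000, 18.0), (1, 1, 1200, 11.6)]
print(detrended_interp(stations, (0.5, 0.5), 1600))
```

Because these synthetic stations follow a perfect 0.008-per-meter lapse rate, the residuals vanish and the estimate at 1600 m is the trend value, 14.8.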

  19. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
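    The simplest of the four techniques, MBR-Interpolation, can be illustrated by linearly interpolating the corners of two reported bounding rectangles in time. This is a sketch assuming a linear motion model, not the authors' exact formulation:

```python
def mbr_interpolate(t0, mbr0, t1, mbr1, t):
    """Linearly interpolate a minimum bounding rectangle (MBR) in time.

    Each MBR is (xmin, ymin, xmax, ymax); the corners are interpolated
    at time t, with t0 <= t <= t1. The paper's CP-, FI-, and
    Areal-Interpolation variants operate on full polygon boundaries
    instead of rectangles.
    """
    alpha = (t - t0) / (t1 - t0)
    return tuple((1 - alpha) * a + alpha * b for a, b in zip(mbr0, mbr1))

# trajectory reported at t=0 and t=10; enrich it with an MBR at t=4
mid = mbr_interpolate(0.0, (0.0, 0.0, 2.0, 2.0),
                      10.0, (10.0, 5.0, 14.0, 9.0), 4.0)
print(mid)
```

Interpolating corner-wise preserves rectangle validity (xmin ≤ xmax, ymin ≤ ymax) automatically, which is why MBR enrichment is the cheapest baseline among the four methods.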

  20. Evolution of Western Mediterranean Sea Surface Temperature between 1985 and 2005: a complementary study in situ, satellite and modelling approaches

    NASA Astrophysics Data System (ADS)

    Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.

    2009-04-01

    In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are to underline the capability of each tool to capture characteristic phenomena, to compare and assess the quality of their outputs, and to infer an interannual trend from the results. Diva (Data Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous time data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. A heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolation with Empirical Orthogonal Functions, Alvera-Azcárate …

  1. New Mass-Conserving Bedrock Topography for Pine Island Glacier Impacts Simulated Decadal Rates of Mass Loss

    NASA Astrophysics Data System (ADS)

    Nias, I. J.; Cornford, S. L.; Payne, A. J.

    2018-04-01

    High-resolution ice flow modeling requires bedrock elevation and ice thickness data, consistent with one another and with modeled physics. Previous studies have shown that gridded ice thickness products that rely on standard interpolation techniques (such as Bedmap2) can be inconsistent with the conservation of mass, given observed velocity, surface elevation change, and surface mass balance, for example, near the grounding line of Pine Island Glacier, West Antarctica. Using the BISICLES ice flow model, we compare results of simulations using both Bedmap2 bedrock and thickness data, and a new interpolation method that respects mass conservation. We find that simulations using the new geometry result in a higher sea level contribution than with Bedmap2 and reveal decadal-scale trends in the ice stream dynamics. We test the impact of several sliding laws and find that accurately representing the bedrock and initial ice thickness is at least as important as the choice of sliding law.
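    The mass-conservation constraint behind such an interpolation can be illustrated with a 1-D flowline residual check: a thickness field is consistent with observed velocity, thinning, and surface mass balance when dH/dt + d(uH)/dx - smb ≈ 0. A sketch with synthetic steady-state values (not the BISICLES model or its data):

```python
def mass_conservation_residual(H, u, smb, dHdt, dx):
    """Residual of 1-D ice-stream mass conservation: dH/dt + d(uH)/dx - smb.

    H: ice thickness, u: depth-averaged velocity, smb: surface mass
    balance, dHdt: observed thickness change rate, dx: grid spacing.
    A thickness field consistent with the observations gives residuals
    near zero; thickness from standard interpolation need not. Central
    differences on interior points only (a sketch).
    """
    flux = [ui * Hi for ui, Hi in zip(u, H)]
    return [dHdt[i] + (flux[i + 1] - flux[i - 1]) / (2 * dx) - smb[i]
            for i in range(1, len(H) - 1)]

# steady state (dH/dt = 0, smb = 0) with uniform flux u*H = 90000 m^2/yr
H = [1000.0, 900.0, 800.0, 750.0]
u = [90000.0 / h for h in H]
res = mass_conservation_residual(H, u, [0.0] * 4, [0.0] * 4, 5000.0)
print(res)  # [0.0, 0.0]
```

A mass-conserving interpolation effectively inverts this relation: it chooses H so that the residual vanishes given the observed u, dH/dt, and smb.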

  2. Estimating Small-area Populations by Age and Sex Using Spatial Interpolation and Statistical Inference Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L

    The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
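    The binomial step can be sketched as follows: given an interpolated total population for a small area and an interpolated age-sex proportion, the subgroup count is modeled as Binomial(N, p). Here a normal-approximation confidence interval is used, and the numbers are illustrative:

```python
import math

def agesex_estimate(area_pop, proportion, z=1.96):
    """Small-area subgroup estimate with a binomial confidence interval.

    Models the subgroup count as Binomial(area_pop, proportion), where
    area_pop comes from the population surface and proportion from the
    age-sex proportion surface; returns the mean count and a
    normal-approximation interval (a sketch of the binomial model).
    """
    mean = area_pop * proportion
    sd = math.sqrt(area_pop * proportion * (1.0 - proportion))
    return mean, (mean - z * sd, mean + z * sd)

est, (lo, hi) = agesex_estimate(2500, 0.12)  # e.g. females aged 20-24
print(est)  # 300.0
```

The interval width scales with sqrt(N p (1 - p)), so small areas and rare age-sex groups get proportionally wider confidence bounds, which is the motivation for reporting intervals rather than point counts.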

  3. Assessment of Potential Location of High Arsenic Contamination Using Fuzzy Overlay and Spatial Anisotropy Approach in Iron Mine Surrounding Area

    PubMed Central

    Wirojanagud, Wanpen; Srisatit, Thares

    2014-01-01

    A fuzzy overlay approach on three raster maps, including land slope, soil type, and distance to stream, can be used to identify the locations with the highest potential for arsenic contamination in soils. High arsenic contamination was verified by collecting samples, analyzing their arsenic content, and interpolating the resulting surface with a spatial anisotropic method. A total of 51 soil samples were collected at the potential contamination locations identified by the fuzzy overlay approach. At each location, soil samples were taken at a depth of 0.00-1.00 m from the ground surface level. The interpolated surface of the analysed arsenic content using the spatial anisotropic method verified the potential arsenic contamination locations obtained from the fuzzy overlay outputs. The outputs of the spatial surface anisotropic method and the fuzzy overlay mapping conformed significantly in space. Three contaminated areas with arsenic concentrations of 7.19 ± 2.86, 6.60 ± 3.04, and 4.90 ± 2.67 mg/kg exceeded the arsenic content of 3.9 mg/kg, the maximum concentration level (MCL) for agricultural soils as designated by the Office of the National Environment Board of Thailand. It is concluded that fuzzy overlay mapping could be employed for identifying potential contamination areas, with verification by the surface anisotropic approach, including intensive sampling and analysis of the substances of interest. PMID:25110751
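    Fuzzy overlay combines per-layer membership values cell by cell with operators such as fuzzy AND (min), OR (max), product, algebraic sum, and gamma. A sketch for a single raster cell; the membership values are illustrative, and the abstract does not say which operator the study used:

```python
def fuzzy_overlay(memberships, method="gamma", gamma=0.9):
    """Combine fuzzy-membership values for one raster cell (sketch).

    `memberships` holds per-layer values in [0, 1] (e.g. from slope,
    soil-type, and distance-to-stream rasters). GAMMA blends the fuzzy
    algebraic sum and product: sum**gamma * product**(1 - gamma).
    """
    prod = 1.0
    for m in memberships:
        prod *= m
    fsum = 1.0
    for m in memberships:
        fsum *= (1.0 - m)
    fsum = 1.0 - fsum                      # fuzzy algebraic sum
    if method == "and":
        return min(memberships)
    if method == "or":
        return max(memberships)
    if method == "product":
        return prod
    if method == "sum":
        return fsum
    return (fsum ** gamma) * (prod ** (1.0 - gamma))

cell = [0.8, 0.6, 0.9]   # slope, soil, distance-to-stream memberships
print(fuzzy_overlay(cell, "and"))  # 0.6
```

AND is conservative (a location must score well on every layer), OR is permissive, and gamma trades off between the two, which is why it is a common default for suitability mapping.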

  4. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
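    Once the walking triangle search has located the containing triangle, a value can be interpolated from its vertices; a minimal sketch uses barycentric coordinates, whose signs are also what the walking search uses to decide which neighbor to step into. (Full natural neighbor interpolation and the kriged error surfaces are beyond a few lines; this shows only the final in-triangle step, with illustrative values.)

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1.0 - wb - wc, wb, wc

def triangle_interp(p, verts, vals):
    """Linear interpolation inside the containing triangle (sketch).

    Returns None if p lies outside; in a walking triangle search, a
    negative coordinate indicates the edge across which to walk next.
    """
    w = barycentric(p, *verts)
    if min(w) < -1e-12:
        return None
    return sum(wi * vi for wi, vi in zip(w, vals))

v = triangle_interp((0.25, 0.25), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                    [10.0, 20.0, 30.0])
print(v)  # 17.5
```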

  5. Suitability of satellite derived and gridded sea surface temperature data sets for calibrating high-resolution marine proxy records

    NASA Astrophysics Data System (ADS)

    Ouellette, G., Jr.; DeLong, K. L.

    2016-02-01

    High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration with SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and Kaplan SST data sets (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses a different method of interpolation to produce globally and temporally complete data products, except for HadSST, which is not interpolated but quality controlled. This study compares SST for a single site from these gridded data products, a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), in situ SST data, and coral Sr/Ca variability at our study site in Haiti to assess differences between these SST records, with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3°C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST records are to be used to calibrate proxy records, collecting modern, living samples is desirable.
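    The calibration step itself is ordinary least-squares regression of the proxy against SST. A sketch with synthetic Sr/Ca values (coral Sr/Ca typically decreases with warming at very roughly 0.06 mmol/mol per °C, but the numbers below are illustrative, not a published calibration):

```python
def calibrate(proxy, sst):
    """Ordinary least-squares calibration of a proxy against SST.

    Fits proxy = a + b * SST and returns (a, b), so a new proxy
    measurement can be inverted to SST via (proxy - a) / b.
    """
    n = len(sst)
    xb = sum(sst) / n
    yb = sum(proxy) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(sst, proxy)) / \
        sum((x - xb) ** 2 for x in sst)
    return yb - b * xb, b

sst = [26.0, 27.0, 28.0, 29.0]      # in situ SST (deg C), synthetic
srca = [9.14, 9.08, 9.02, 8.96]     # coral Sr/Ca (mmol/mol), synthetic
a, b = calibrate(srca, sst)
print(round(b, 6))  # -0.06
```

The abstract's point can be read directly off this fit: if the gridded SST series used as the regressor under-represents seasonal amplitude by 1-3°C, the fitted slope b, and hence every reconstructed temperature, is biased.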

  6. Interpolation of scattered temperature data measurements onto a worldwide regular grid using radial basis functions with applications to global warming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.

    1994-05-01

    Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, the Arabian, Gobi, and Australian deserts; likewise, the density is quite sparse in cold regions such as Antarctica, Northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
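    The scattered data problem can be sketched with a multiquadric radial basis function interpolant: solve the dense system Φw = f at the stations, then evaluate the expansion anywhere on the regular grid. Pure-Python Gaussian elimination keeps the sketch self-contained; it is fine for toy sizes, while the 5,500-station problem needs proper linear algebra. Station coordinates and temperatures are illustrative:

```python
import math

def rbf_fit(points, values, c=1.0):
    """Fit multiquadric RBF weights to scattered data.

    Solves Phi w = f with Phi[i][j] = sqrt(||x_i - x_j||^2 + c^2)
    (the multiquadric kernel), via Gaussian elimination with partial
    pivoting on the augmented matrix.
    """
    n = len(points)
    A = [[math.sqrt(sum((pi - qj) ** 2 for pi, qj in zip(p, q)) + c * c)
          for q in points] + [values[i]]
         for i, p in enumerate(points)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    w = [0.0] * n                              # back substitution
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(points, w, x, c=1.0):
    return sum(wi * math.sqrt(sum((p - xi) ** 2 for p, xi in zip(pt, x)) + c * c)
               for wi, pt in zip(w, points))

# scattered "station" temperatures, evaluated at an unsampled grid point
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
temps = [10.0, 12.0, 11.0, 13.0]
w = rbf_fit(pts, temps)
```

By construction the interpolant reproduces the station values exactly; the multiquadric system is nonsingular for distinct points, which is one reason this kernel is popular for scattered data.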

  7. Gonioreflectometric properties of metal surfaces

    NASA Astrophysics Data System (ADS)

    Jaanson, P.; Manoocheri, F.; Mäntynen, H.; Gergely, M.; Widlowski, J.-L.; Ikonen, E.

    2014-12-01

    Angularly resolved measurements of light scattered from surfaces can provide useful information in various fields of research and industry, such as computer graphics and satellite-based Earth observation. In practice, empirical or physics-based models are needed to interpolate the measurement results, because a thorough characterization of the surfaces under all relevant conditions may not be feasible. In this work, plain and anodized metal samples were prepared and measured optically for the bidirectional reflectance distribution function (BRDF) and mechanically for surface roughness. Two models for the BRDF (the Torrance-Sparrow model and a polarimetric BRDF model) were fitted to the measured values. A better fit was obtained for plain metal surfaces than for anodized surfaces.

  8. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  9. Surface reconstruction and deformation monitoring of stratospheric airship based on laser scanning technology

    NASA Astrophysics Data System (ADS)

    Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei

    2018-04-01

    Due to the uncertainty of a stratospheric airship's shape and the safety problems this uncertainty causes, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. The scheme was then compared with the original √3-subdivision scheme. The results show that our subdivision scheme reduces surface shrinkage and the number of narrow triangles while preserving sharp features, so surface reconstruction and surface deformation monitoring of the airship can be conducted precisely with it.

  10. Interpolation Environment of Tensor Mathematics at the Corpuscular Stage of Computational Experiments in Hydromechanics

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia

    2018-02-01

    Stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models for calculations separation in accordance with physical processes. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space. Here, coordination of continuity conditions and energy conservation is carried out. Then, at the subsequent corpuscular stage of the computational experiment, kinematic parameters of mass centers and surface stresses at the boundaries of the grid cells are used in modeling of free unsteady motions of volume cells that are considered as independent particles. These particles can be subject to vortex and discontinuous interactions, when restructuring of free boundaries and internal rheological states takes place. Transition from one stage to another is provided by interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws for modeling the mechanics of continuous media, and provides control of the rheological state and conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.

  11. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  12. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework.
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
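    The decomposition idea can be sketched with plain linear interpolation standing in for the registration-based 1-D control grid interpolator: a 2-D resize becomes independent 1-D row passes followed by 1-D column passes. Names and values are illustrative:

```python
def interp1d(row, new_len):
    """Linear 1-D interpolation of a sample row onto new_len samples."""
    if new_len == 1:
        return [row[0]]
    out, scale = [], (len(row) - 1) / (new_len - 1)
    for i in range(new_len):
        x = i * scale
        j = min(int(x), len(row) - 2)
        t = x - j
        out.append((1 - t) * row[j] + t * row[j + 1])
    return out

def resize2d(img, new_h, new_w):
    """Resize a 2-D image by decomposition into 1-D passes (sketch).

    Rows are interpolated first, then columns, mirroring the DMCGI idea
    of splitting a multidimensional problem into independent 1-D steps
    (here with plain linear interpolation as the 1-D kernel).
    """
    rows = [interp1d(r, new_w) for r in img]
    cols = [interp1d([rows[y][x] for y in range(len(rows))], new_h)
            for x in range(new_w)]
    return [[cols[x][y] for x in range(new_w)] for y in range(new_h)]

img = [[0.0, 10.0], [20.0, 30.0]]
big = resize2d(img, 3, 3)
print(big[1][1])  # 15.0
```

With a linear kernel the two 1-D passes reproduce bilinear interpolation exactly; swapping in a smarter 1-D interpolator upgrades the whole pipeline without changing its separable structure, which is the efficiency argument made above.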

  13. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  14. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    NASA Technical Reports Server (NTRS)

    Allred, Joel

    2012-01-01

    Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool, which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit developed by the CCMC.

  15. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  16. A General Surface Representation Module Designed for Geodesy

    DTIC Science & Technology

    1980-06-01

    one considers as a reasonable interpolation function, one of the often accepted compromises is the choice q = 2 (Schumaker, 1976; Bybee and Bedross...Fast Fourier Transform: Englewood Cliffs, New Jersey. Bybee, J.E. and G.M. Bedross (1978): The IPIN computer network control software. In: Proceedings

  17. Application of Geographic Information System Methods to Identify Areas Yielding Water that will be Replaced by Water from the Colorado River in the Vidal and Chemehuevi Areas, California, and the Mohave Mesa Area, Arizona

    USGS Publications Warehouse

    Spangler, Lawrence E.; Angeroth, Cory E.; Walton, Sarah J.

    2008-01-01

    Relations between the elevation of the static water level in wells and the elevation of the accounting surface within the Colorado River aquifer in the vicinity of Vidal, California, the Chemehuevi Indian Reservation, California, and on Mohave Mesa, Arizona, were used to determine which wells outside the flood plain of the Colorado River are presumed to yield water that will be replaced by water from the Colorado River. Wells that have a static water-level elevation equal to or below the elevation of the accounting surface are presumed to yield water that will be replaced by water from the Colorado River. Geographic Information System (GIS) interpolation tools were used to produce maps of areas where water levels are above, below, and near (within ± 0.84 foot) the accounting surface. Calculated water-level elevations and interpolated accounting-surface elevations were determined for 33 wells in the vicinity of Vidal, 16 wells in the Chemehuevi area, and 35 wells on Mohave Mesa. Water-level measurements generally were taken in the last 10 years with steel and electrical tapes accurate to within hundredths of a foot. A Differential Global Positioning System (DGPS) was used to determine land-surface elevations to within an operational accuracy of ± 0.43 foot, resulting in calculated water-level elevations having a 95-percent confidence interval of ± 0.84 foot. In the Vidal area, differences in elevation between the accounting surface and measured water levels range from -2.7 feet below to as much as 17.6 feet above the accounting surface. Relative differences between the elevation of the water level and the elevation of the accounting surface decrease from west to east and from north to south. In the Chemehuevi area, differences in elevation range from -3.7 feet below to as much as 8.7 feet above the accounting surface, which is established at 449.6 feet in the vicinity of Lake Havasu.
In all of the Mohave Mesa area, the water-level elevation is near or below the elevation of the accounting surface. Differences in elevation between water levels and the accounting surface range from -0.2 to -11.3 feet, with most values exceeding -7.0 feet. In general, the ArcGIS Triangulated Irregular Network (TIN) Contour and Natural Neighbor tools reasonably represent areas where the elevation of water levels in wells is above, below, and near (within ±0.84 foot) the elevation of the accounting surface in the Vidal and Chemehuevi study areas and accurately delineate areas around outlying wells and where anomalies exist. The TIN Contour tool provides a strict linear interpolation while the Natural Neighbor tool provides a smoothed interpolation. Using the default options in ArcGIS, the Inverse Distance Weighted (IDW) and Spline tools also reasonably represent areas above, below, and near the accounting surface in the Vidal and Chemehuevi areas. However, the spatial extent of and boundaries between areas above, below, and near the accounting surface vary among the GIS methods, which results largely from the fundamentally different mathematical approaches used by these tools. The limited number and spatial distribution of wells in comparison to the size of the areas, and the locations and relative differences in elevation between water levels and the accounting surface of wells with anomalous water levels, also influence the contouring by each of these methods. Qualitatively, the Natural Neighbor tool appears to provide the best representation of the difference between water-level and accounting-surface elevations in the study areas, on the basis of available well data.
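The TIN-style linear interpolation and the IDW approach compared above can be sketched in a few lines; this is an illustrative stand-in only (synthetic well locations and differences, with SciPy's `griddata` playing the role of the TIN Contour tool and a hand-rolled IDW playing the role of the ArcGIS IDW tool):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical well locations and synthetic water-level minus
# accounting-surface differences (feet); not the report's data.
rng = np.random.default_rng(0)
wells = rng.uniform(0.0, 10.0, size=(33, 2))
diffs = wells[:, 0] - wells[:, 1]

grid_x, grid_y = np.meshgrid(np.linspace(0, 10, 50),
                             np.linspace(0, 10, 50))

# Strict linear interpolation on a triangulation (TIN-like behaviour).
lin = griddata(wells, diffs, (grid_x, grid_y), method='linear')

def idw(points, values, query, power=2.0):
    """Inverse Distance Weighted estimate at a single query point."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                    # exact hit on a data point
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

est = idw(wells, diffs, np.array([5.0, 5.0]))
```

The two estimators differ exactly as the abstract notes: the triangulated surface is piecewise linear and exact at the wells, while IDW smooths between all wells with distance-decaying weights.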

  18. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

Seepage failure is the most common failure mode in dike engineering. Because seepage in a longitudinally extended structure such as a dike is random, strongly concealed, and initially small in magnitude, a distributed fiber temperature sensing system (DTS) with an improved optical-fiber layout scheme is used to locate the initial interpolation points of the saturation line. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltrated surface of the full dike cross-section is then generated. Combined with the linear optical-fiber seepage monitoring method, BLICM is applied to an engineering case, demonstrating a real-time, full-section seepage monitoring technique for dikes based on the combined method.
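Barycentric Lagrange interpolation, the core of BLICM, is available directly in SciPy. A minimal sketch, with invented saturation-line sample points (not data from the paper), recovering the phreatic surface between fibre-located points:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Hypothetical saturation-line samples: distance along the dike
# cross-section (m) versus phreatic-surface elevation (m) at the
# fibre-located initial interpolation points.
x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
z = np.array([12.0, 10.8, 9.9, 9.3, 9.0])

# Barycentric Lagrange interpolation through the sampled points.
interp = BarycentricInterpolator(x, z)
z_mid = float(interp(7.5))   # elevation between two sensing points
```

The barycentric form evaluates the Lagrange interpolant stably in O(n) per query once the weights are precomputed, which is what makes it attractive for repeated full-section evaluation.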

  19. An Unconditionally Stable Fully Conservative Semi-Lagrangian Method (PREPRINT)

    DTIC Science & Technology

    2010-08-07

Alessandrini. A Hamiltonian interface SPH formulation for multi-fluid and free surface flows. J. Comput. Phys., 228(22):8380–8393, 2009. [11] J.T... and J. Welch. Numerical Calculation of Time-Dependent Viscous Incompressible Flow of Fluid with Free Surface. Phys. Fluids, 8:2182–2189, 1965. [14... flow is divergence free, one would generally expect these lines to be commensurate; however, due to numerical errors in interpolation there is some

  20. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
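The three interpolator schemes can be compared directly with `scipy.ndimage`, which implements nearest-neighbour, tri-linear and cubic B-spline resampling as spline orders 0, 1 and 3. The volume below is a smooth synthetic stand-in for a registered scan, not micro-CT data:

```python
import numpy as np
from scipy import ndimage

# Smooth synthetic volume standing in for a registered micro-CT scan.
ax = np.arange(32)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
f = lambda x, y, z: np.sin(0.3 * x) * np.cos(0.25 * y) * np.sin(0.2 * z)
vol = f(X, Y, Z)

# Shift by a sub-voxel offset with each interpolator scheme and compare
# against the analytically shifted volume (ndimage.shift evaluates the
# input at coordinates i - s, i.e. f(X - s, Y - s, Z - s)).
s = 0.4
true = f(X - s, Y - s, Z - s)
core = (slice(4, -4),) * 3             # crop away boundary effects

def err(order):
    shifted = ndimage.shift(vol, (s, s, s), order=order)
    return float(np.mean(np.abs(shifted[core] - true[core])))

err_nn, err_lin, err_bsp = err(0), err(1), err(3)   # NN, tri-linear, BSP
```

On smooth data the errors order as in the study: the B-spline error is smallest, consistent with the 1.4% figure reported for BSPs.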

  1. Structural-Thermal-Optical Program (STOP)

    NASA Technical Reports Server (NTRS)

    Lee, H. P.

    1972-01-01

    A structural thermal optical computer program is developed which uses a finite element approach and applies the Ritz method for solving heat transfer problems. Temperatures are represented at the vertices of each element and the displacements which yield deformations at any point of the heated surface are interpolated through grid points.

  2. On the Application of a Response Surface Technique to Analyze Roll-over Stability of Capsules with Airbags Using LS-Dyna

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2008-01-01

    As NASA moves towards developing technologies needed to implement its new Exploration program, studies conducted for Apollo in the 1960's to understand the rollover stability of capsules landing are being revisited. Although rigid body kinematics analyses of the roll-over behavior of capsules on impact provided critical insight to the Apollo problem, extensive ground test programs were also used. For the new Orion spacecraft being developed to implement today's Exploration program, new air-bag designs have improved sufficiently for NASA to consider their use to mitigate landing loads to ensure crew safety and to enable re-usability of the capsule. Simple kinematics models provide only limited understanding of the behavior of these air bag systems, and more sophisticated tools must be used. In particular, NASA and its contractors are using the LS-Dyna nonlinear simulation code for impact response predictions of the full Orion vehicle with air bags by leveraging the extensive air bag prediction work previously done by the automotive industry. However, even in today's computational environment, these analyses are still high-dimensional, time consuming, and computationally intensive. To alleviate the computational burden, this paper presents an approach that uses deterministic sampling techniques and an adaptive response surface method to not only use existing LS-Dyna solutions but also to interpolate from LS-Dyna solutions to predict the stability boundaries for a capsule on airbags. Results for the stability boundary in terms of impact velocities, capsule attitude, impact plane orientation, and impact surface friction are discussed.

  3. Stellar mass and age determinations . I. Grids of stellar models from Z = 0.006 to 0.04 and M = 0.5 to 3.5 M⊙

    NASA Astrophysics Data System (ADS)

    Mowlavi, N.; Eggenberger, P.; Meynet, G.; Ekström, S.; Georgy, C.; Maeder, A.; Charbonnel, C.; Eyer, L.

    2012-05-01

Aims: We present dense grids of stellar models suitable for comparison with observable quantities measured with great precision, such as those derived from binary systems or planet-hosting stars. Methods: We computed new Geneva models without rotation at metallicities Z = 0.006, 0.01, 0.014, 0.02, 0.03, and 0.04 (i.e. [Fe/H] from -0.33 to +0.54) and with mass in small steps from 0.5 to 3.5 M⊙. Great care was taken in the procedure for interpolating between tracks in order to compute isochrones. Results: Several properties of our grids are presented as a function of stellar mass and metallicity. Those include surface properties in the Hertzsprung-Russell diagram, internal properties including mean stellar density, sizes of the convective cores, and global asteroseismic properties. Conclusions: We checked our interpolation procedure and compared interpolated tracks with computed tracks. The deviations are less than 1% in radius and effective temperatures for most of the cases considered. We also checked that the present isochrones provide good fits to four pairs of observed detached binaries and to the observed sequences of the open clusters NGC 3532 and M 67. Including atomic diffusion in our models with M < 1.1 M⊙ leads to variations in the surface abundances that should be taken into account when comparing with observational data of stars with measured metallicities. For that purpose, iso-Zsurf lines are computed. These can be requested for download from a dedicated web page, together with tracks at masses and metallicities within the limits covered by the grids. The validity of the relations linking Z and [Fe/H] is also re-assessed in light of the surface abundance variations in low-mass stars. Table D.1 for the basic tracks is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/541/A41, and on our web site http://obswww.unige.ch/Recherche/evol/-Database-. 
Tables for interpolated tracks, iso-Zsurf lines and isochrones can be computed, on demand, from our web site.Appendices are available in electronic form at http://www.aanda.org

  4. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. 
Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. 
This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
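The tapering-plus-iterative-solver idea can be sketched as follows. Everything here is an illustrative assumption rather than the authors' implementation: the data are synthetic, the covariance model (exponential) and Wendland-type taper are chosen for simplicity, a tiny nugget is added for numerical stability, and SciPy's MINRES stands in for SYMMLQ (both handle the symmetric indefinite ordinary kriging system; SciPy does not ship SYMMLQ):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import minres

# Synthetic scattered data.
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(200, 2))
vals = np.sin(pts[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)

def cov(h, rho=30.0):
    return np.exp(-h / rho)                  # exponential covariance model

def taper(h, theta=25.0):
    t = np.clip(1.0 - h / theta, 0.0, None)  # compactly supported taper:
    return t**2 * (1.0 + 2.0 * h / theta)    # zero beyond range theta

# Tapered covariance block (sparse beyond the taper range) plus the
# ordinary kriging unbiasedness constraint -> symmetric indefinite system.
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
n = len(pts)
K = cov(h) * taper(h) + 1e-8 * np.eye(n)     # small nugget for stability

A = np.zeros((n + 1, n + 1))
A[:n, :n] = K
A[:n, n] = 1.0                               # Lagrange multiplier column
A[n, :n] = 1.0                               # sum of weights = 1
A_sp = csr_matrix(A)

q = np.array([50.0, 50.0])                   # query point
hq = np.linalg.norm(pts - q, axis=1)
b = np.append(cov(hq) * taper(hq), 1.0)

sol, info = minres(A_sp, b)                  # iterative symmetric solver
weights, estimate = sol[:n], float(sol[:n] @ vals)
```

The sparsity induced by the taper is what makes each matrix-vector product in the iterative solve cheap, which is the point of the paper's approach.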

  5. The algorithmic details of polynomials application in the problems of heat and mass transfer control on the hypersonic aircraft permeable surfaces

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

Mathematical modeling problems for the effective control of heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The analysis of the constructive and gasdynamical restrictions on the control (the blowing) is carried out for porous and perforated surfaces. Function classes that allow the controls to be realized while taking into account the arising types of restrictions are suggested. Estimates of the computational complexity of applying the W. G. Horner scheme in the case of using the C. Hermite interpolation polynomial are given.
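Horner's scheme referenced above evaluates a degree-n polynomial with n multiplications and n additions instead of the O(n^2) work of naive term-by-term evaluation. A minimal sketch with generic coefficients (not the paper's Hermite polynomials):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by W. G. Horner's scheme.

    coeffs are ordered from the highest-degree term down, so
    p(x) = (((c0)x + c1)x + c2)x + ... costs one multiply and one
    add per coefficient.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
value = horner([2.0, -6.0, 2.0, -1.0], 3.0)   # -> 5.0
```

Beyond the operation count, the nested form is also numerically better behaved than summing powers of x directly.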

  6. Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm

    NASA Astrophysics Data System (ADS)

    Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko

    2013-04-01

Real time forecasting of river flow is an essential tool in operational water management. Such real time modelling systems require well calibrated models which can make use of spatially distributed rainfall observations. Weather radars provide spatial data; however, since radar measurements are sensitive to a large range of error sources, a discrepancy can often be observed between radar observations and ground-based measurements, which are mostly considered as ground truth. Through merging ground observations with the radar product, often referred to as data merging, one may force the radar observations to better correspond to the ground-based measurements, without losing the spatial information. In this paper, radar images and ground-based measurements of rainfall are merged based on interpolated gauge-adjustment factors (Moore et al., 1998; Cole and Moore, 2008) or scaling factors. Scaling factors C(xα) are calculated at each position xα where a gauge measurement Ig(xα) is available: C(xα) = (Ig(xα) + ε) / (Ir(xα) + ε) (1) where Ir(xα) is the radar-based observation in the pixel overlapping the rain gauge and ε is a constant ensuring that the scaling factor can be calculated when Ir(xα) is zero. These scaling factors are interpolated on the radar grid, resulting in a unique scaling factor for each pixel. Multiquadric surface fitting is used as an interpolation algorithm (Hardy, 1971): C*(x0) = a^T v + a0 (2) where C*(x0) is the prediction at location x0, the vector a (N×1, with N the number of ground-based measurements used) and the constant a0 are the parameters describing the surface, and v is an N×1 vector containing the (Euclidean) distance between each point xα used in the interpolation and the point x0. The parameters describing the surface are derived by forcing the surface to be an exact interpolator and imposing that the sum of the parameters in a be zero. Often, however, the surface is allowed to pass near the observations (i.e. the observed scaling factors C(xα)) by introducing an offset parameter K, which results in slightly different equations for calculating a and a0. The described technique is currently being used by the Flemish Environmental Agency in an online forecasting system of river discharges within Flanders (Belgium). However, rescaling the radar data using the described algorithm does not always give rise to an improved weather radar product. One of the main reasons is probably that the parameters K and ε are implemented as constants. It can be expected that, among others, depending on the characteristics of the rainfall, different values for the parameters should be used. Adaptation of the parameter values is achieved by an online calibration of K and ε at each time step (every 15 minutes), using validated rain gauge measurements as ground truth. Results demonstrate that rescaling radar images using optimized values for K and ε at each time step leads to a significant improvement of the rainfall estimation, which in turn will result in higher quality discharge predictions. Moreover, it is shown that calibrated values for K and ε can be obtained in near-real time. References Cole, S. J., and Moore, R. J. (2008). Hydrological modelling using raingauge- and radar-based estimators of areal rainfall. Journal of Hydrology, 358(3-4), 159-181. Hardy, R.L. (1971). Multiquadric equations of topography and other irregular surfaces, Journal of Geophysical Research, 76(8): 1905-1915. Moore, R. J., Watson, B. C., Jones, D. A. and Black, K. B. (1989). London weather radar local calibration study. Technical report, Institute of Hydrology.
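A minimal sketch of Eqs. (1)-(2) in their exact-interpolation form (the offset parameter K is omitted here). The gauge positions and rainfall values are invented for illustration:

```python
import numpy as np

# Hypothetical gauge/radar data.
rng = np.random.default_rng(3)
gauges = rng.uniform(0.0, 50.0, size=(10, 2))   # gauge positions (km)
I_g = rng.uniform(0.5, 5.0, 10)                 # gauge rainfall (mm)
I_r = I_g * rng.uniform(0.6, 1.4, 10)           # co-located radar rainfall
eps = 0.1

C = (I_g + eps) / (I_r + eps)                   # Eq. (1): scaling factors

# Eq. (2): C*(x0) = a^T v + a0 with v the distances to the gauges,
# forced to interpolate exactly, with the side condition sum(a) = 0.
D = np.linalg.norm(gauges[:, None, :] - gauges[None, :, :], axis=2)
N = len(gauges)
A = np.zeros((N + 1, N + 1))
A[:N, :N] = D
A[:N, N] = 1.0                                  # a0 column
A[N, :N] = 1.0                                  # sum(a) = 0 constraint
rhs = np.append(C, 0.0)
sol = np.linalg.solve(A, rhs)
a, a0 = sol[:N], sol[N]

def scale_at(x0):
    """Interpolated scaling factor at an arbitrary radar-grid point."""
    v = np.linalg.norm(gauges - x0, axis=1)
    return float(a @ v + a0)
```

Evaluating `scale_at` at every radar pixel yields the pixel-wise scaling field described in the abstract; by construction the surface passes exactly through the observed factors at the gauges.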

  7. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, the conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. 
We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new forward model during each iteration of the inversion.

  8. Objective Assessment of Groundwater Resources for the Isle of Wight, UK

    NASA Astrophysics Data System (ADS)

    Simpson, M.; Butler, A. P.; McIntyre, N.

    2012-12-01

    Water resources are of crucial importance to the UK, and are essential to agriculture and industry as well as for domestic usage. A combination of factors - population growth, climate change and increasing regulatory restriction - will alter water availability over the next fifty years, potentially leading to shortages. Groundwater systems, representing 60% of available water in the southern UK, are typically conceptualised through geological interpretation, resulting in skilful and widely used systems models. Where these models are not successful, alternative approaches to groundwater resource assessments are required. A process for objectively modelling groundwater systems from borehole level observation data is presented, along with a methodology for incorporating climate change and population growth forecasts into an assessment of future water availability. The objective assessment described is a three stage process. Firstly, the observation data can be associated into groups, in order to best represent the water table response and thus identify units with consistent storage coefficient and transport process, analogous to varying aquifer media. Here this is achieved through the use of cluster analysis, applying various distance metrics to observation variances and incorporating spatial displacement. Secondly, the resulting groups are used to generate interpolated surfaces of mean water table level. A series of interpolative methods, including stochastic approaches, are applied and cross-validated to generate the most credible groundwater surface. Thirdly, the presence and influence of spatial and temporal anomalies such as groundwater abstraction points are identified through examination of observations furthest from the final interpolated groundwater surface. 
This groundwater systems model can then be incorporated into a groundwater/surface model relating rainfall to water table level and river flow, which in turn is used in conjunction with state-of-the-art climate projections and population change forecasts to generate projections of change for future scenarios. A case study illustrating the use of this methodology is presented. The Isle of Wight, a small island off the south coast of England with a population of 140,000, is exceptionally water stressed by UK standards and currently dependent on mainland transfers. These transfers are under review due to environmental pressure on the neighbouring county of Hampshire. Alongside this, a high population growth rate and the southerly location make the Isle of Wight the most likely part of the UK to be affected by reduced water availability. Lower Greensands and Chalk aquifer systems provide most of the island's non-imported water, yet these remain poorly understood. The principal reason for this is the exceptionally complex geology, a result of the island's location at the convergence of two asymmetric anticlinal structures and, in the case of the Lower Greensand group, multiple layers of alternating aquiferous and non-aquiferous material. A statistical assessment of borehole data may provide evidence for groundwater processes where direct lithological analysis is not appropriate. Through development of a credible groundwater systems model and sensible projections of future water availability, critical information can be provided to influence future water management policy.
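Stage two above, applying several interpolative methods and cross-validating them to pick the most credible groundwater surface, can be sketched with leave-one-out cross-validation over two radial-basis kernels. The borehole positions and water-table levels are synthetic, and the choice of kernels is an assumption for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic borehole positions and mean water-table levels.
rng = np.random.default_rng(5)
xy = rng.uniform(0.0, 10.0, size=(40, 2))
levels = np.sin(xy[:, 0]) + 0.3 * xy[:, 1]

def loo_rmse(kernel):
    """Leave-one-out RMSE of an RBF surface with the given kernel."""
    errs = []
    for i in range(len(xy)):
        mask = np.arange(len(xy)) != i
        f = RBFInterpolator(xy[mask], levels[mask], kernel=kernel)
        errs.append(f(xy[i:i + 1])[0] - levels[i])
    return float(np.sqrt(np.mean(np.square(errs))))

scores = {k: loo_rmse(k) for k in ('linear', 'thin_plate_spline')}
best = min(scores, key=scores.get)   # most credible surface by LOO error
```

The same loop generalises to any family of candidate interpolators; the surface with the lowest held-out error is the one retained for stage three.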

  9. Evaluating Surface Flux Results from CERES-FLASHFlux

    NASA Technical Reports Server (NTRS)

    Wilber, Anne C.; Stackhouse, Paul W., Jr.; Kratz, David P.; Gupta, Shashi K.; Sawaengphokhai, Parnchai K.

    2015-01-01

The Fast Longwave and Shortwave Radiative Flux (FLASHFlux) data product was developed to provide a rapid release version of the Clouds and Earth's Radiant Energy System (CERES) results, which could be made available to the research and applications communities within one week of the satellite observations by exchanging some accuracy for speed of processing. Unlike standard CERES products, FLASHFlux does not maintain a long-term consistent record. Therefore the latest algorithm changes and input data can be incorporated into processing. FLASHFlux Versions 3A (January 2013) and 3B (August 2014) were released, which include the latest meteorological product from the Global Modeling and Assimilation Office (GMAO), GEOS FP-IT (5.9.1), the latest spectral response functions and gains for the CERES instruments, and an aerosol climatology based on the latest MATCH data. Version 3B included a slightly updated calibration and some changes to the surface albedo over snow/ice. Typically FLASHFlux does not reprocess earlier versions when a new version is released. The combined record of Time Interpolated Space Averaged (TISA) surface flux results from Versions 3A and 3B for July 2012 to October 2015 has been compared to the ground-based measurements. The FLASHFlux results are also compared to two other CERES gridded products, SYN1deg and EBAF surface fluxes.

  10. Spatial response surface modelling in the presence of data paucity for the evaluation of potential human health risk due to the contamination of potable water resources.

    PubMed

    Liu, Shen; McGree, James; Hayes, John F; Goonetilleke, Ashantha

    2016-10-01

Potential human health risk from waterborne diseases arising from unsatisfactory performance of on-site wastewater treatment systems is driven by landscape factors such as topography, soil characteristics, depth to water table, drainage characteristics and the presence of surface water bodies. These factors are present as random variables which are spatially distributed across a region. A methodological framework is presented that can be applied to model and evaluate the influence of various factors on waterborne disease potential. This framework is informed by spatial data and expert knowledge. For prediction at unsampled sites, interpolation methods were used to derive a spatially smoothed surface of disease potential which takes into account the uncertainty due to spatial variation at any pre-determined level of significance. This surface was constructed by accounting for the influence of multiple variables which appear to contribute to disease potential. The framework developed in this work strengthens the understanding of the characteristics of disease potential and provides predictions of this potential across a region. The study outcomes presented constitute an innovative approach to environmental monitoring and management in the face of data paucity. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
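The maximum a posteriori sequence estimation at the heart of this method can be illustrated with a generic Viterbi decoder over a small state space. The two "interpolation function" states (e.g. two edge directions) and all probabilities below are invented for illustration, not taken from the paper:

```python
import numpy as np

def viterbi(log_trans, log_emit):
    """MAP state sequence. log_trans: (S, S) transition log-probs
    (from, to); log_emit: (T, S) per-position state log-likelihoods."""
    T, S = log_emit.shape
    score = log_emit[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans            # (from, to)
        back[t] = np.argmax(cand, axis=0)            # best predecessor
        score = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score))]                   # backtrack
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two candidate interpolation functions, five missing-pixel positions.
lt = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))      # sticky transitions
le = np.log(np.array([[0.9, 0.1], [0.8, 0.2],
                      [0.5, 0.5], [0.2, 0.8], [0.1, 0.9]]))
seq = viterbi(lt, le)
```

The sticky transition matrix plays the role of the Markov prior over interpolation functions: it favours keeping the same directional interpolator across neighbouring missing pixels instead of making an independent hard decision at each one.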

  12. Monitoring global snow cover

    NASA Technical Reports Server (NTRS)

    Armstrong, Richard; Hardman, Molly

    1991-01-01

    A snow model that supports the daily, operational analysis of global snow depth and age has been developed. It provides improved spatial interpolation of surface reports by incorporating digital elevation data, and by the application of regionalized variables (kriging) through the use of a global snow depth climatology. Where surface observations are inadequate, the model applies satellite remote sensing. Techniques for extrapolation into data-void mountain areas and a procedure to compute snow melt are also contained in the model.

  13. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

    NASA Technical Reports Server (NTRS)

    Allen, P. A.; Wells, D. N.

    2013-01-01

No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
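The interpolation idea, tabulated solutions on a regular grid in (a/c, a/B, E/ys, n) queried at an arbitrary geometry/material point, can be sketched with a regular-grid interpolator. The tabulated "J" values below are a synthetic surface, not the report's data:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Parameter grid spanning the report's ranges.
a_c  = np.linspace(0.2, 1.0, 5)       # crack shape a/c
a_B  = np.linspace(0.2, 0.8, 4)       # crack depth a/B
E_ys = np.linspace(100.0, 1000.0, 4)  # modulus-to-yield ratio
n_hd = np.linspace(3.0, 20.0, 4)      # hardening exponent

# Synthetic stand-in for the tabulated J solutions at each grid node.
grid = np.meshgrid(a_c, a_B, E_ys, n_hd, indexing='ij')
J_tab = grid[0] * grid[1] * np.log(grid[2]) / grid[3]

# Multilinear interpolation between the tabulated solutions.
J_interp = RegularGridInterpolator((a_c, a_B, E_ys, n_hd), J_tab)
J_query = float(J_interp([[0.5, 0.45, 350.0, 10.0]])[0])
```

Each query costs a handful of arithmetic operations, which is what replaces the unique nonlinear finite element analysis per crack.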

  14. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many similar challenges to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.

  15. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

    As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-d, it has some known problems.
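
    As a generic illustration of why logarithmically spaced EOS grids favor interpolation in transformed coordinates (this is not Kerley's rational-function scheme), linear interpolation in log-log space reproduces power-law behavior exactly even on a sparse grid:

```python
import numpy as np

# A sparse, logarithmically spaced density grid and a made-up pressure curve
# (P proportional to rho^3 here); real EOS tables are tabulated on similar
# grids but are not pure power laws.
rho = np.logspace(-2, 2, 9)
P = 7.0 * rho**3

def loglog_interp(x, xp, fp):
    """Linear interpolation in log-log space; exact for pure power laws."""
    return np.exp(np.interp(np.log(x), np.log(xp), np.log(fp)))
```
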

  16. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) treat interpolation factors of 2, 4, and 8 as interchangeable, as the differences between them were nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus on best practices in pressure array data processing.
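
    The first recommendation (filter, then interpolate) can be sketched as follows; the 3x3 mean filter and the synthetic 8x8 pressure map are simplifications of the low-pass filtering and sensor arrays used in the study.

```python
import numpy as np

def mean_filter3(p):
    """3x3 moving-average low-pass filter (borders handled by edge padding)."""
    padded = np.pad(p, 1, mode="edge")
    out = np.zeros_like(p, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + p.shape[0], 1 + dj:1 + dj + p.shape[1]]
    return out / 9.0

def bilinear_upsample(p, factor):
    """Bilinear interpolation to `factor` times the sampling density."""
    rows = np.linspace(0, p.shape[0] - 1, factor * p.shape[0])
    cols = np.linspace(0, p.shape[1] - 1, factor * p.shape[1])
    # Interpolate along columns for every original row, then along rows.
    tmp = np.array([np.interp(cols, np.arange(p.shape[1]), row) for row in p])
    return np.array([np.interp(rows, np.arange(p.shape[0]), tmp[:, j])
                     for j in range(tmp.shape[1])]).T

pressure = np.random.default_rng(1).uniform(0, 200, (8, 8))  # synthetic map, mmHg
smooth_first = bilinear_upsample(mean_filter3(pressure), 2)   # filter, then interpolate
```
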

  17. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
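
    The proposed bilinear interpolation of head can be sketched as follows; the cell-center coordinates and head values are hypothetical, not taken from the report.

```python
import numpy as np

def bilinear_head(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """Bilinear interpolation of head at (x, y) from heads at the four
    surrounding large-scale cell centers: hIJ is the head at (xI, yJ)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty) * h00 + tx * (1 - ty) * h10
            + (1 - tx) * ty * h01 + tx * ty * h11)
```

    At a cell center the interpolated head reproduces that cell's value, and at the midpoint of four cells it returns their average.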

  18. White Sands Missile Range Main Cantonment and NASA Area Faults, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, Greg

    This is a zipped ArcGIS shapefile containing faults mapped for the Tularosa Basin geothermal play fairway analysis project. The faults were interpolated from gravity and seismic (NASA area) data, and from geomorphic features on aerial photography. Field work was also done for validation of faults which had surface expressions.

  19. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015.

    PubMed

    Abatzoglou, John T; Dobrowski, Solomon Z; Parks, Sean A; Hegewisch, Katherine C

    2018-01-09

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.

  20. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    NASA Astrophysics Data System (ADS)

    Abatzoglou, John T.; Dobrowski, Solomon Z.; Parks, Sean A.; Hegewisch, Katherine C.

    2018-01-01

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.
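
    A minimal sketch of climatically aided interpolation, using synthetic fields: a high-resolution monthly normal supplies spatial detail, while a coarse monthly field supplies the time-varying anomaly (spread here by simple block replication rather than the smoother interpolation used for TerraClimate).

```python
import numpy as np

rng = np.random.default_rng(2)

# High-resolution monthly climatological normal (WorldClim-like), 8x8, and a
# coarse-resolution monthly field, 2x2; all values are synthetic.
normal_hi = 10.0 + rng.standard_normal((8, 8))
coarse_month = np.array([[12.0, 11.0],
                         [13.0, 12.5]])

# Coarse-cell means of the high-res normal (each coarse cell covers 4x4 pixels).
normal_coarse = normal_hi.reshape(2, 4, 2, 4).mean(axis=(1, 3))

# Anomaly = coarse monthly field minus coarse normal, spread to high resolution.
anomaly_hi = np.kron(coarse_month - normal_coarse, np.ones((4, 4)))

# Climatically aided estimate: normal plus anomaly.
month_hi = normal_hi + anomaly_hi
```

    By construction the high-resolution estimate averages back to the coarse monthly field over each coarse cell while retaining the normal's fine-scale pattern.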

  1. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third-order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach, the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
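
    For reference, a third-order Hermite interpolation polynomial matches function values and first derivatives at both endpoints of an interval. The sketch below shows the generic basis-function form on [0, 1], not the model's specific surface-potential/inversion-charge relation.

```python
import numpy as np

def hermite3(t, f0, f1, d0, d1):
    """Third-order Hermite polynomial on [0, 1]: matches the values f0, f1
    and the derivatives d0, d1 at the two endpoints."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * d0 + h01 * f1 + h11 * d1
```
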

  2. Milky Way Mass Models and MOND

    NASA Astrophysics Data System (ADS)

    McGaugh, Stacy S.

    2008-08-01

    Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ Rd ≲ 2.5 kpc. The favored value of Rd depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function, as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
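
    A minimal sketch of how an interpolation function enters the prediction: for the "simple" function μ(x) = x/(1 + x), the MOND relation μ(g/a0)·g = gN inverts in closed form. The value of a0 below is the commonly quoted acceleration scale, and the code is a generic illustration, not the Tuorla-Heidelberg fit.

```python
import numpy as np

A0 = 1.2e-10  # MOND acceleration scale, m/s^2 (commonly quoted value)

def mond_accel_simple(g_newton):
    """Invert mu(g/a0) * g = g_N for the 'simple' interpolation function
    mu(x) = x / (1 + x); the inversion is a quadratic with one positive root:
    g = (g_N + sqrt(g_N^2 + 4 g_N a0)) / 2."""
    return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4.0 * g_newton * A0))
```

    In the high-acceleration limit the result approaches the Newtonian value; in the deep-MOND limit it approaches sqrt(gN·a0), as expected.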

  3. Interpolated Sounding and Gridded Sounding Value-Added Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toto, T.; Jensen, M.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
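
    The core time-interpolation step can be sketched as follows, with synthetic linear-lapse-rate profiles standing in for real sonde data:

```python
import numpy as np

# Two consecutive soundings on a common fixed height grid (synthetic
# temperatures), launched at t = 0 min and t = 720 min.
heights = np.linspace(0, 30_000, 332)              # m, 332 fixed levels
t_sondes = np.array([0.0, 720.0])                  # launch times, minutes
temp = np.vstack([288.0 - 0.0065 * heights,        # profile at t = 0
                  290.0 - 0.0065 * heights])       # profile at t = 720

# Linearly interpolate each height level in time onto a 1-minute grid.
t_grid = np.arange(0.0, 721.0, 1.0)
temp_grid = np.array([np.interp(t_grid, t_sondes, temp[:, k])
                      for k in range(heights.size)]).T   # (time, height)
```
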

  4. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  5. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).
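
    The spirit of Regression Kriging with a satellite predictor can be sketched as a two-step estimate: a regression trend on satellite-derived GHI plus spatially interpolated residuals. Inverse-distance weighting stands in below for the kriging of residuals, and all station data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stations: coordinates (km), satellite-estimated GHI, observed GHI.
xy = rng.uniform(0, 100, (30, 2))
ghi_sat = rng.uniform(200, 900, 30)                  # W/m^2, from imagery
ghi_obs = 0.9 * ghi_sat + 25.0 + rng.normal(0, 5, 30)

# Step 1: regress station observations on the satellite estimate (the trend).
slope, intercept = np.polyfit(ghi_sat, ghi_obs, 1)
residuals = ghi_obs - (slope * ghi_sat + intercept)

# Step 2: spread the residuals spatially (inverse-distance weighting stands in
# for the kriging step here) and add them back onto the regression trend.
def estimate(target_xy, target_sat):
    d = np.linalg.norm(xy - target_xy, axis=1)
    w = 1.0 / (d**2 + 1e-9)
    return slope * target_sat + intercept + np.sum(w * residuals) / np.sum(w)
```

    At a station location the residual term is dominated by that station's own residual, so the estimate honors the observation there.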

  6. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with airborne laser bathymetry data in order to populate data points in all areas, but particularly those of data holes. The second step generates an elevation surface by spatial interpolation of the merged data points. We conducted a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the ocean floor with this method. Four interpolation techniques, Kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, this enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
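
    Hole filling by spatial interpolation can be sketched with inverse distance weighting (one of the four techniques evaluated); the bathymetry grid below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic bathymetry grid (depths in m) with a hole (NaNs) in the middle.
y, x = np.mgrid[0:20, 0:20]
depth = -5.0 - 0.1 * x - 0.05 * y + 0.1 * rng.standard_normal((20, 20))
depth[8:12, 8:12] = np.nan

def idw_fill(grid, power=2.0):
    """Fill NaN cells by inverse-distance weighting of all valid cells."""
    filled = grid.copy()
    valid = ~np.isnan(grid)
    vy, vx = np.nonzero(valid)
    vz = grid[valid]
    for gy, gx in zip(*np.nonzero(~valid)):
        d2 = (vy - gy) ** 2 + (vx - gx) ** 2
        w = 1.0 / d2 ** (power / 2)
        filled[gy, gx] = np.sum(w * vz) / np.sum(w)
    return filled

depth_filled = idw_fill(depth)
```

    IDW returns a convex combination of valid depths, so filled cells never fall outside the observed range.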

  7. Moho map of South America from receiver functions and surface waves

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; van der Lee, Suzan; FrançA, George Sand; AssumpçãO, Marcelo; Feng, Mei

    2010-11-01

    We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.

  8. Extracting Hydrologic Understanding from the Unique Space-time Sampling of the Surface Water and Ocean Topography (SWOT) Mission

    NASA Astrophysics Data System (ADS)

    Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.

    2017-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or perhaps even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.

  9. Optimum interpolation analysis of basin-scale ¹³⁷Cs transport in surface seawater in the North Pacific Ocean.

    PubMed

    Inomata, Y; Aoyama, M; Tsumune, D; Motoi, T; Nakano, H

    2012-12-01

    ¹³⁷Cs is one of the conservative tracers applied to the study of oceanic circulation processes on decadal time scales. To investigate the spatial distribution and temporal variation of ¹³⁷Cs concentrations in surface seawater in the North Pacific Ocean after 1957, an optimum interpolation (OI) technique was applied. The analysis revealed the basin-scale circulation of ¹³⁷Cs in surface seawater in the North Pacific Ocean: ¹³⁷Cs deposited in the western North Pacific Ocean from global fallout (late 1950s and early 1960s) and from local fallout (transported from the Bikini and Enewetak Atolls during the late 1950s) was further transported eastward with the Kuroshio and North Pacific Currents within several years of deposition and accumulated in the eastern North Pacific Ocean until 1967. Subsequently, ¹³⁷Cs concentrations in the eastern North Pacific Ocean decreased due to southward transport. Less radioactively contaminated seawater was also transported northward, upstream of the North Equatorial Current in the western North Pacific Ocean, in the 1970s, indicating seawater re-circulation in the North Pacific Gyre.
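
    The optimum interpolation analysis step can be sketched in its standard form x_a = x_b + K(y − Hx_b); the grid, covariances, and single observation below are toy values, not the study's configuration.

```python
import numpy as np

# Toy optimum interpolation (OI) on a 3-point grid with one observation.
x_b = np.array([5.0, 6.0, 7.0])         # background concentration, Bq/m^3
B = np.array([[1.0, 0.5, 0.2],          # background error covariance
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
H = np.array([[0.0, 1.0, 0.0]])         # observation operator (samples point 2)
R = np.array([[0.25]])                  # observation error covariance
y = np.array([6.8])                     # observed value at point 2

# OI/BLUE analysis: x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + (K @ (y - H @ x_b)).ravel()
```

    The analysis pulls the observed point toward the observation and spreads part of the innovation to neighboring points through the background covariance.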

  10. Calculating the surface tension of binary solutions of simple fluids of comparable size

    NASA Astrophysics Data System (ADS)

    Zaitseva, E. S.; Tovbin, Yu. K.

    2017-11-01

    A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. The LGM is applied as an interpolation tool: the parameters of the model are corrected using experimental surface tension data. It is found that the average error in describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.

  11. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability in a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The simplified grid pattern reduced the acquisition time of a full 1.0-mm grid H matrix by factors of about 15.2 and 62.2 for the 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by a further factor of 8.
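
    The parameter-interpolation idea can be sketched as follows: PRFs measured at scanned voxels are reduced to Gaussian coefficients, and coefficients at missing voxels are obtained by distance weighting. The two measured parameter sets and the 1-D voxel positions below are hypothetical simplifications of DW-GIMGPE.

```python
import numpy as np

# Measured PRFs at two scanned voxel positions, parameterized as 2D Gaussians
# by centroid (cx, cy), widths (sx, sy) and amplitude a (synthetic numbers).
prf_params = {                       # voxel position -> (cx, cy, sx, sy, a)
    0.0: np.array([10.0, 12.0, 1.5, 1.6, 100.0]),
    4.0: np.array([18.0, 12.5, 1.9, 1.8, 80.0]),
}

def interp_prf_params(v):
    """Distance-weighted interpolation of Gaussian PRF coefficients for a
    voxel at position v between the measured grid voxels."""
    keys = np.array(sorted(prf_params))
    d = np.abs(keys - v)
    if np.any(d == 0):                       # measured voxel: return as-is
        return prf_params[keys[np.argmin(d)]]
    w = 1.0 / d
    w /= w.sum()
    return sum(wi * prf_params[k] for wi, k in zip(w, keys))

def gaussian_prf(xx, yy, p):
    """Evaluate the parameterized 2D Gaussian PRF on detector coordinates."""
    cx, cy, sx, sy, a = p
    return a * np.exp(-((xx - cx)**2 / (2 * sx**2) + (yy - cy)**2 / (2 * sy**2)))
```
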

  12. Computational theory of line drawing interpretation

    NASA Technical Reports Server (NTRS)

    Witkin, A. P.

    1981-01-01

    The recovery of the three-dimensional structure of visible surfaces depicted in an image was studied, emphasizing the role of geometric cues present in line drawings. Three key components are line classification, line interpretation, and surface interpolation. A model for three-dimensional line interpretation and surface orientation was refined, and a theory for the recovery of surface shape from surface marking geometry was developed. A new approach to the classification of edges was developed and implemented; signatures were deduced for each of several edge types, expressed in terms of correlational properties of the image intensities in the vicinity of the edge. A computer program was developed that evaluates image edges against these prototype signatures.

  13. The Role of Amodal Surface Completion in Stereoscopic Transparency

    PubMed Central

    Anderson, Barton L.; Schmid, Alexandra C.

    2012-01-01

    Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments demonstrating that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color than when the images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829

  14. Effects of deterministic surface distortions on reflector antenna performance

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    Systematic distortions of reflector antenna surfaces can cause antenna radiation patterns to be undesirably different from those of perfectly smooth reflector surfaces. In this paper, a simulation model for systematic distortions is described which permits an efficient computation of the effects of distortions in the reflector pattern. The model uses a vector diffraction physical optics analysis for the determination of both the co-polar and cross-polar fields. An interpolation scheme is also presented for the description of reflector surfaces which are prescribed by discrete points. Representative numerical results are presented for reflectors with sinusoidally and thermally distorted surfaces. Finally, comparisons are made between the measured and calculated patterns of a slowly-varying distorted offset parabolic reflector.

  15. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are one of the fast and accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity data (the strength of the backscattered signals measured by the LiDAR systems) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surface. These effects may lead to an uneven distribution of the point cloud or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated for generating LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
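
    The classify-then-rasterize workflow can be sketched as follows, with random synthetic returns standing in for classified LiDAR points; empty raster cells are filled from the class of the nearest classified point.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic classified LiDAR returns: (x, y) positions and class labels 0-4
# (standing in for buildings, roads, trees, grass, bare soil).
pts_xy = rng.uniform(0, 10, (60, 2))
pts_cls = rng.integers(0, 5, 60)

# Rasterize classified points onto a 10x10 grid; empty cells stay -1 (gap).
grid = -np.ones((10, 10), dtype=int)
for (px, py), c in zip(pts_xy, pts_cls):
    grid[int(py), int(px)] = c          # last point in a cell wins (sketch only)

# Fill each gap with the class of the nearest classified point (brute force).
gy, gx = np.nonzero(grid == -1)
for r, c in zip(gy, gx):
    d2 = (pts_xy[:, 0] - (c + 0.5))**2 + (pts_xy[:, 1] - (r + 0.5))**2
    grid[r, c] = pts_cls[int(np.argmin(d2))]
```
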

  16. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

    Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. Formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to specified parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach to quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ₀c₀², is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case, which has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.
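
    The sensitivity to the interpolation algorithm can be illustrated by comparing linear interpolation with a cubic Catmull-Rom scheme (standing in here for the study's spline algorithm) on a smooth signal sampled at coarse time steps:

```python
import numpy as np

# A smooth source signature sampled at coarse emission times; values are then
# needed at arbitrary intermediate (retarded) times.
t = np.arange(0.0, 10.0, 0.5)
p = np.sin(t)

def catmull_rom(tq, t, p):
    """Catmull-Rom cubic interpolation on a uniform grid (interior points)."""
    h = t[1] - t[0]
    i = np.clip(((tq - t[0]) / h).astype(int), 1, len(t) - 3)
    s = (tq - t[i]) / h
    p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
    return 0.5 * (2 * p1 + (-p0 + p2) * s
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * s**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * s**3)

tq = np.linspace(1.0, 8.0, 500)
err_lin = np.max(np.abs(np.interp(tq, t, p) - np.sin(tq)))
err_cub = np.max(np.abs(catmull_rom(tq, t, p) - np.sin(tq)))
```

    On this smooth signal the cubic scheme's maximum error is well below the linear one, consistent with the finding that spline-type interpolation minimizes the error.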

  17. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving the image effect in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted. Through wavelet reconstruction, the target interpolation image is obtained. In this article, we perform 3D interpolation of real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
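
    The Sobel edge-detection building block can be sketched as follows (a generic gradient-magnitude operator, not the full wavelet-domain matching interpolation):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator (borders edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            window = padded[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)

# A vertical step edge: the response should peak along the step.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```
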

  18. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and visualization analysis is carried out on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development, and rapid development. Eleven clustering groups interact, converging on the theory of spatial interpolation, its practical applications and case studies, and the accuracy and efficiency of spatial interpolation methods. Finding the optimal spatial interpolation method is the frontier and hot spot of current research. Spatial interpolation research has formed a theoretical basis and a research-system framework, is strongly interdisciplinary, and is widely used in many fields.

  19. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, optimizations such as coalesced memory access and shared memory are adopted in this study. To eliminate edge distortion in the interpolated images, a natural suture algorithm is used in overlapping regions, with a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While maintaining high interpolation precision, the 2D and 3D medical image GRBF interpolation achieves substantial acceleration in each basic computing step. Experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is a useful reference for applications of image interpolation.
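    The serial form of the GRBF interpolation being accelerated here can be sketched as a minimal 1-D version. The kernel width `eps` and the dense direct solve are illustrative assumptions, not the paper's CUDA code:

    ```python
    from math import exp

    def grbf_interpolate(xs, ys, x_query, eps=1.0):
        """1-D Gaussian radial basis function interpolation.

        Solves A w = ys with A[i][j] = exp(-(eps*(xs[i]-xs[j]))**2), then
        evaluates s(x) = sum_j w[j]*exp(-(eps*(x-xs[j]))**2). Plain Gaussian
        elimination with partial pivoting; fine for small node counts."""
        n = len(xs)
        A = [[exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)]
             for i in range(n)]
        b = list(ys)
        # forward elimination with partial pivoting
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            b[c], b[p] = b[p], b[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n):
                    A[r][k] -= f * A[c][k]
                b[r] -= f * b[c]
        # back substitution
        w = [0.0] * n
        for r in range(n - 1, -1, -1):
            s = sum(A[r][k] * w[k] for k in range(r + 1, n))
            w[r] = (b[r] - s) / A[r][r]
        return sum(w[j] * exp(-(eps * (x_query - xs[j])) ** 2) for j in range(n))
    ```

    The interpolation property (the surface passes exactly through the data) is what makes GRBF attractive; the O(n^3) solve and O(n) evaluation per pixel are what motivate the GPU acceleration.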

  20. Geostatistical interpolation of individual average monthly temperature supported by MODIS MOD11C3 product

    NASA Astrophysics Data System (ADS)

    Perčec Tadić, M.

    2010-09-01

    The increasing availability of satellite products of high spatial and temporal resolution, together with developing user support, encourages climatologists to use these data in research and practice. Since climatologists are mainly interested in monthly or even annual averages or aggregates, this high temporal resolution, and hence large amount of data, can be challenging for less experienced users. Even when an attempt is made to aggregate, e.g., the 15′ (temporal) MODIS LST (land surface temperature) to a daily temperature average, the development of the algorithm is not straightforward and should be done by experts. The recent development of many temporally aggregated products on daily, multi-day or even monthly scales substantially decreases the amount of satellite data that needs to be processed and raises the possibility of developing various climatological applications. Here we present an attempt to incorporate the MODIS MOD11C3 product (Wan, 2009), the monthly CMG (climate modelling grid, 0.05-degree latitude/longitude) LST, as a predictor in the geostatistical interpolation of climatological data in Croatia. While in previous applications, e.g., in the Climate Atlas of Croatia (Zaninović et al. 2008), static predictors such as the digital elevation model, distance to the sea, latitude and longitude were used for the interpolation of monthly, seasonal and annual 30-year averages (reference climatology), here the monthly MOD11C3 supports the interpolation of individual monthly averages in a regression kriging framework. We believe this can be a valuable showcase of incorporating remotely sensed data for climatological applications, especially in areas that are under-sampled by conventional observations. Zaninović K, Gajić-Čapka M, Perčec Tadić M et al (2008) Klimatski atlas Hrvatske / Climate Atlas of Croatia 1961-1990, 1971-2000. Meteorological and Hydrological Service of Croatia, Zagreb, pp 200.
Wan Z, 2009: Collection-5 MODIS Land Surface Temperature Products Users' Guide, ICESS, University of California, Santa Barbara, pp 30.

  1. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique that calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image and 2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques that automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) both dry and wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) the Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all study areas except Big Cypress and Gainesville. 
Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. The eddy covariance comparison and temporal interpolation produced acceptable bias error in most cases, suggesting that automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.

  2. Quasiclassical trajectory study of the Cl+CH4 reaction dynamics on a quadratic configuration interaction with single and double excitation interpolated potential energy surface.

    PubMed

    Castillo, J F; Aoiz, F J; Bañares, L

    2006-09-28

    An ab initio interpolated potential energy surface (PES) for the Cl+CH4 reactive system has been constructed using the interpolation method of Collins and co-workers [J. Chem. Phys. 102, 5647 (1995); 108, 8302 (1998); 111, 816 (1999); Theor. Chem. Acc. 108, 313 (2002)]. The ab initio calculations have been performed using quadratic configuration interaction with single and double excitation theory to build the PES. A simple scaling-all-correlation technique has been used to obtain a PES whose barrier height and reaction energy are in good agreement with high-level ab initio calculations and experimental measurements. Using these interpolated PESs, a detailed quasiclassical trajectory study of integral and differential cross sections, product rovibrational populations, and internal energy distributions has been carried out for the Cl+CH4 and Cl+CD4 reactions, and the theoretical results have been compared with the available experimental data. It has been shown that the calculated total reaction cross sections versus collision energy for the Cl+CH4 and Cl+CD4 reactions are very sensitive to the barrier height. In addition, due to the zero-point energy (ZPE) leakage of the CH4 molecule into the reaction coordinate in the quasiclassical trajectory (QCT) calculations, the reaction threshold falls below the barrier height of the PES. The ZPE leakage leads to CH3 and HCl coproducts with internal energies below their corresponding ZPEs. We have shown that a Gaussian binning (GB) analysis of the trajectories yields excitation functions in somewhat better agreement with the experimental determinations. The HCl(v'=0) and DCl(v'=0) rotational distributions are likewise very sensitive to the ZPE problem. The GB correction narrows the rotational distributions and shifts them to lower rotational quantum numbers. However, the present QCT rotational distributions are still hotter than the experimental distributions. 
In both reactions the angular distributions shift from backward peaked to sideways peaked as collision energy increases, as seen in the experiments and other theoretical calculations.

  3. Quasiclassical trajectory study of the Cl +CH4 reaction dynamics on a quadratic configuration interaction with single and double excitation interpolated potential energy surface

    NASA Astrophysics Data System (ADS)

    Castillo, J. F.; Aoiz, F. J.; Bañares, L.

    2006-09-01

    An ab initio interpolated potential energy surface (PES) for the Cl+CH4 reactive system has been constructed using the interpolation method of Collins and co-workers [J. Chem. Phys. 102, 5647 (1995); 108, 8302 (1998); 111, 816 (1999); Theor. Chem. Acc. 108, 313 (2002)]. The ab initio calculations have been performed using quadratic configuration interaction with single and double excitation theory to build the PES. A simple scaling-all-correlation technique has been used to obtain a PES whose barrier height and reaction energy are in good agreement with high-level ab initio calculations and experimental measurements. Using these interpolated PESs, a detailed quasiclassical trajectory study of integral and differential cross sections, product rovibrational populations, and internal energy distributions has been carried out for the Cl+CH4 and Cl+CD4 reactions, and the theoretical results have been compared with the available experimental data. It has been shown that the calculated total reaction cross sections versus collision energy for the Cl+CH4 and Cl+CD4 reactions are very sensitive to the barrier height. In addition, due to the zero-point energy (ZPE) leakage of the CH4 molecule into the reaction coordinate in the quasiclassical trajectory (QCT) calculations, the reaction threshold falls below the barrier height of the PES. The ZPE leakage leads to CH3 and HCl coproducts with internal energies below their corresponding ZPEs. We have shown that a Gaussian binning (GB) analysis of the trajectories yields excitation functions in somewhat better agreement with the experimental determinations. The HCl(v'=0) and DCl(v'=0) rotational distributions are likewise very sensitive to the ZPE problem. The GB correction narrows the rotational distributions and shifts them to lower rotational quantum numbers. However, the present QCT rotational distributions are still hotter than the experimental distributions. 
In both reactions the angular distributions shift from backward peaked to sideways peaked as collision energy increases, as seen in the experiments and other theoretical calculations.

  4. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
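    The two simpler resampling rules compared in this record can be sketched as plain-Python samplers. The clamped-border handling is an assumption, and bicubic is omitted for brevity:

    ```python
    def bilinear_sample(img, x, y):
        """Bilinear interpolation of 2-D grid `img` (list of lists) at
        real-valued (x, y); x indexes columns, y rows. Coordinates outside
        the grid are clamped to the border."""
        h, w = len(img), len(img[0])
        x = min(max(x, 0.0), w - 1.0)
        y = min(max(y, 0.0), h - 1.0)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
        bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
        return top * (1 - fy) + bot * fy

    def nearest_sample(img, x, y):
        """Nearest-neighbour sampling at real-valued (x, y), clamped."""
        h, w = len(img), len(img[0])
        c = min(max(int(round(x)), 0), w - 1)
        r = min(max(int(round(y)), 0), h - 1)
        return img[r][c]
    ```

    Nearest neighbor preserves the original pixel values (useful for classification), while bilinear smooths them; that trade-off is the crux of the comparison.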

  5. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  6. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
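    The step of joining per-angle coefficient sets by linear interpolation at intermediate observer angles can be sketched as follows; the function name and coefficient layout are hypothetical, not taken from the papers:

    ```python
    def interp_coeffs(angles, coeffs, theta):
        """Linearly interpolate model-coefficient vectors between the observer
        angles at which they were fit. `angles` is sorted ascending and
        `coeffs[i]` is the coefficient vector fit at angles[i]; outside the
        fitted range the nearest set is reused (clamping is an assumption)."""
        if theta <= angles[0]:
            return list(coeffs[0])
        if theta >= angles[-1]:
            return list(coeffs[-1])
        for i in range(len(angles) - 1):
            a0, a1 = angles[i], angles[i + 1]
            if a0 <= theta <= a1:
                t = (theta - a0) / (a1 - a0)
                return [c0 + t * (c1 - c0)
                        for c0, c1 in zip(coeffs[i], coeffs[i + 1])]
    ```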

  7. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    An image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology may be either classical or based on neural networks, depending on the demands of the specific interpolation problem.

  8. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    Appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, including inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences among the interpolation methods and the uncertainties of the interpolation results are discussed, several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. 
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas.
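    As a concrete reference point for the simplest of the methods compared above, a minimal IDW predictor might look like this. The distance-power weighting follows the standard IDW definition; everything else (names, exact-hit handling) is illustrative:

    ```python
    def idw(points, values, x, y, power=2):
        """Inverse distance weighting prediction at (x, y) from scattered
        samples. Weight of each sample is 1/d**power; if (x, y) coincides
        with a sample point, that sample's value is returned exactly."""
        num = den = 0.0
        for (px, py), v in zip(points, values):
            d2 = (x - px) ** 2 + (y - py) ** 2
            if d2 == 0.0:
                return v
            w = d2 ** (-power / 2.0)   # 1 / d**power
            num += w * v
            den += w
        return num / den
    ```

    Raising `power` localizes the prediction (power = 3 weights near samples much more heavily than power = 1), which is exactly the parameter choice the study varies.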

  9. Dynamic factor modeling of ground and surface water levels in an agricultural area adjacent to Everglades National Park

    NASA Astrophysics Data System (ADS)

    Ritter, A.; Muñoz-Carpena, R.

    2006-02-01

    The extensive eastern boundary of Everglades National Park (ENP) in south Florida (USA) is subject to one of the most expensive and ambitious environmental restoration projects in history. Understanding and predicting the interaction between the shallow aquifer and surface water is a key component for fine-tuning the process. The Frog Pond is an intensively instrumented 2023-ha agricultural area adjacent to ENP. The interactions among 21 multivariate daily time series (ground and surface water elevations, rainfall and evapotranspiration) available from this area were studied by means of dynamic factor analysis, a novel technique in the field of hydrology. This method is designed to determine latent or background effects governing variability or fluctuations in non-stationary time series. Water levels in 16 wells and two drainage-ditch locations inside the area were selected as response variables, and canal levels and net recharge as explanatory variables. Elevations in the two canals delimiting the Frog Pond area were found to be the main factors explaining the response variables. The influence of canal elevations on water levels inside the area was complementary and inversely related to the distance between the observation point and each canal. Rainfall events do not affect daily water levels significantly but are responsible for instantaneous or localized groundwater responses that in some cases can be directly associated with the risk of flooding. This close coupling between surface and groundwater levels, which corroborates findings by other authors using different methods, could hinder ongoing environmental restoration efforts in the area by bypassing the function of wetlands and other surface features. An empirical model with a reduced set of parameters was successfully developed and validated in the area by interpolating the results from the dynamic factor analysis across the spatial domain (coefficient of efficiency across the domain: 0.66-0.99). 
Although specific to the area, the resulting model is deemed useful for water management within the wide range of conditions similar to those present during the experimental period.

  10. Self-organizing map analysis using multivariate data from theophylline powders predicted by a thin-plate spline interpolation.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Kikuchi, Shingo; Takayama, Kozo

    2010-11-01

    The quality by design concept in pharmaceutical formulation development requires establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline powders were prepared based on the standard formulation. The angle of repose, compressibility, cohesion, and dispersibility were measured as the response variables. These responses were predicted quantitatively on the basis of a nonlinear TPS. A large amount of data on these powders was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the powders could be classified into several distinctive clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and powder characteristics. For instance, the quantities of microcrystalline cellulose (MCC) and magnesium stearate (Mg-St) were classified distinctly into each cluster, indicating that the quantities of MCC and Mg-St were crucial for determining the powder characteristics. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline powder formulations.

  11. Topsoil pollution forecasting using artificial neural networks on the example of the abnormally distributed heavy metal at Russian subarctic

    NASA Astrophysics Data System (ADS)

    Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.

    2017-06-01

    Forecasting soil pollution is a considerable field of study given the general concern with environmental protection. Due to the variation in content and the spatial heterogeneity of pollutant distributions in urban areas, the conventional spatial interpolation models implemented in many GIS packages often cannot provide adequate interpolation accuracy. Moreover, predicting the distribution of an element whose concentration is highly variable across the study site is particularly difficult. This work presents two neural network models for forecasting the spatial content of an abnormally distributed soil pollutant (Cr) at a particular location in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques were built, implemented and tested using ArcGIS and MATLAB. To verify the models' performance, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km2 area and then split into an independent training data set (105 points) and a validation data set (45 points). The training data set was used for the interpolation by ordinary kriging, while the validation data set was used to test accuracy. The network structures were chosen during computer simulation based on minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that MLP can achieve better accuracy than both kriging and even GRNN for interpolating surfaces.
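    A GRNN prediction is essentially a Gaussian-kernel weighted average of the training targets (a Nadaraya-Watson estimator). The following minimal sketch shows that idea; it is not the authors' MATLAB implementation, and the smoothing parameter `sigma` is an assumed single global bandwidth:

    ```python
    from math import exp

    def grnn_predict(train_x, train_y, query, sigma=1.0):
        """Generalized regression neural network prediction: each training
        point contributes its target weighted by a Gaussian kernel of the
        distance to the query; the prediction is the normalized sum."""
        num = den = 0.0
        for xi, yi in zip(train_x, train_y):
            d2 = sum((a - b) ** 2 for a, b in zip(xi, query))
            w = exp(-d2 / (2.0 * sigma ** 2))
            num += w * yi
            den += w
        return num / den
    ```

    Because the prediction is a convex combination of observed values, a GRNN never extrapolates beyond the training range, which partly explains why an MLP can outperform it on strongly varying surfaces.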

  12. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and the use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries, which intersect the bottom and top boundaries, may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform the interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of the boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two-boundary grid generation), is also described.
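    The Hermite cubic blend at the heart of the two-boundary technique can be sketched as below. The tangent scale `s` and the omission of side boundaries and control functions are simplifying assumptions, not the paper's full method:

    ```python
    def two_boundary_grid(bottom, top, n_eta, s=0.0):
        """Algebraic grid between two non-intersecting boundary polylines
        using Hermite cubic blending in the eta direction. `bottom` and `top`
        are equal-length lists of (x, y); n_eta >= 2 rows are generated.
        End tangents are s*(top - bottom): with s=0 the tangents vanish and
        grid rows cluster near both boundaries; with s=1 the blend reduces
        to linear interpolation."""
        grid = []
        for j in range(n_eta):
            eta = j / (n_eta - 1)
            # standard Hermite cubic basis functions
            h00 = 2 * eta**3 - 3 * eta**2 + 1
            h01 = -2 * eta**3 + 3 * eta**2
            h10 = eta**3 - 2 * eta**2 + eta
            h11 = eta**3 - eta**2
            row = []
            for (xb, yb), (xt, yt) in zip(bottom, top):
                tx, ty = s * (xt - xb), s * (yt - yb)
                x = h00 * xb + h01 * xt + (h10 + h11) * tx
                y = h00 * yb + h01 * yt + (h10 + h11) * ty
                row.append((x, y))
            grid.append(row)
        return grid
    ```

    The endpoints always reproduce the boundaries exactly; the paper's control functions generalize the fixed eta spacing used here.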

  13. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of the soil attributes, Digital Elevation Model (DEM) data for Fuyang were incorporated to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). These results indicate that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters when applying Cokriging interpolation to soil nutrient attributes. PMID:24927129

  14. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
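    The rational cubic Timmer form itself is not reproduced here; as a plainly labeled stand-in, the standard Fritsch-Carlson slope limiting for an ordinary cubic Hermite spline illustrates the same principle of constraining derivatives to obtain a monotone interpolant:

    ```python
    def monotone_slopes(xs, ys):
        """Fritsch-Carlson-style limited tangents for monotone cubic Hermite
        interpolation. NOT the rational Timmer form from the paper; shown
        only to illustrate derivative constraints enforcing monotonicity."""
        n = len(xs)
        d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
        m = [d[0]] + [0.0] * (n - 2) + [d[-1]]
        for i in range(1, n - 1):
            if d[i - 1] * d[i] <= 0:
                m[i] = 0.0      # local extremum or flat run: flat tangent
            else:
                # harmonic mean of secant slopes keeps the cubic monotone
                m[i] = 2 * d[i - 1] * d[i] / (d[i - 1] + d[i])
        return m

    def eval_hermite(xs, ys, m, x):
        """Evaluate the cubic Hermite interpolant with tangents m at x."""
        i = 0
        while i < len(xs) - 2 and x > xs[i + 1]:
            i += 1
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return h00 * ys[i] + h10 * h * m[i] + h01 * ys[i + 1] + h11 * h * m[i + 1]
    ```

    The paper's shape parameters play the role these slope limits play here: both restrict the interpolant's derivatives so monotone data produce a monotone curve.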

  15. Coastal bathymetry data collected in 2011 from the Chandeleur Islands, Louisiana

    USGS Publications Warehouse

    DeWitt, Nancy T.; Pfeiffer, William R.; Bernier, Julie C.; Buster, Noreen A.; Miselis, Jennifer L.; Flocks, James G.; Reynolds, Billy J.; Wiese, Dana S.; Kelso, Kyle W.

    2014-01-01

    This report serves as an archive of processed interferometric swath and single-beam bathymetry data. Geographic Information System data products include a 50-meter cell-size interpolated bathymetry grid surface, trackline maps, and point data files. Additional files include error analysis maps, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.

  16. Measurements of Wave Power in Wave Energy Converter Effectiveness Evaluation

    NASA Astrophysics Data System (ADS)

    Berins, J.; Berins, J.; Kalnacs, A.

    2017-08-01

    The article is devoted to a technical solution for low-cost alternative equipment for measuring water-surface gravity-wave oscillations, together with the theoretical justification of the calculated oscillation power. The solution combines technologies such as lasers, digital processing of web-camera images, interpolation of a function defined at irregular intervals, and the discrete Fourier transform for calculating the oscillation spectrum.
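    The two numerical steps named in this record (interpolating irregularly spaced samples onto a uniform grid, then taking a discrete Fourier transform to obtain the spectrum) can be sketched as follows; function names and the naive O(n^2) DFT are illustrative only:

    ```python
    from math import cos, sin, pi

    def resample_linear(ts, vs, n):
        """Linearly resample irregularly spaced samples (ts ascending) onto
        n uniform times spanning [ts[0], ts[-1]]."""
        out = []
        j = 0
        for k in range(n):
            t = ts[0] + (ts[-1] - ts[0]) * k / (n - 1)
            while j < len(ts) - 2 and t > ts[j + 1]:
                j += 1
            f = (t - ts[j]) / (ts[j + 1] - ts[j])
            out.append(vs[j] + f * (vs[j + 1] - vs[j]))
        return out

    def dft_power(x):
        """Power at each DFT bin of a real sequence (naive transform)."""
        n = len(x)
        spec = []
        for k in range(n):
            re = sum(x[m] * cos(2 * pi * k * m / n) for m in range(n))
            im = -sum(x[m] * sin(2 * pi * k * m / n) for m in range(n))
            spec.append((re * re + im * im) / n)
        return spec
    ```

    In practice an FFT would replace the naive transform, but the pipeline (resample, then transform) is the same.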

  17. Development and applications of a Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System

    NASA Astrophysics Data System (ADS)

    Warner, J. C.; Armstrong, B. N.; He, R.; Zambon, J. B.; Olabarrieta, M.; Voulgaris, G.; Kumar, N.; Haas, K. A.

    2012-12-01

    Understanding the processes responsible for coastal change is important for managing both our natural and economic coastal resources. Coastal processes respond to both local-scale and larger regional-scale forcings, and understanding these processes can lead to significant insight into how the coastal zone evolves. Storms are one of the primary driving forces causing coastal change, through a coupling of wave- and wind-driven flows. Here we utilize a numerical modeling approach to investigate the dynamics of coastal storm impacts. We use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System, which utilizes the Model Coupling Toolkit to exchange prognostic variables between the ocean model ROMS, the atmosphere model WRF, the wave model SWAN, and the Community Sediment Transport Modeling System (CSTMS) sediment routines. The models exchange fields of sea-surface temperature, ocean currents, water levels, bathymetry, wave heights, lengths, periods, and bottom orbital velocities, as well as atmospheric surface heat and momentum fluxes, atmospheric pressure, precipitation, and evaporation. Data fields are exchanged using regridded, flux-conservative sparse-matrix interpolation weights computed with the SCRIP spherical coordinate remapping interpolation package. We describe the modeling components and the model field exchange methods. As part of the system, the wave and ocean models run with cascading, refined spatial grids to provide increased resolution, scaling down to resolve nearshore wave-driven flows simulated by the vortex force formulation, all within selected regions of a larger, coarser-scale coastal modeling system. The ocean and wave models are driven by the atmospheric component, which is in turn affected by wave-dependent ocean-surface roughness and by the sea surface temperature, which modify the heat and momentum fluxes at the ocean-atmosphere interface. 
We describe the application of the modeling system to several regions of multi-scale complexity to identify the significance of larger-scale forcing cascading down to smaller scales and to investigate the behavior of the coupled system with an increasing degree of model-model interaction. Three examples include the impact of Hurricane Ivan in 2004 in the Gulf of Mexico, Hurricane Ida in 2009, which evolved into a tropical storm on the US East coast, and the passage of strong cold fronts across the US southeast. Results show that hurricane intensity is extremely sensitive to sea-surface temperature, with a reduction in intensity when the atmosphere is coupled to the ocean model due to rapid cooling of the ocean from the surface through the mixed layer. Coupling the ocean to the atmosphere also results in decreased boundary layer stress, and coupling the waves to the atmosphere results in increased sea-surface stress. Wave results are sensitive to both ocean and atmospheric coupling due to wave-current interactions with the ocean and wave growth from the atmospheric wind stress. Sediment resuspension at regional scale during a hurricane is controlled by shelf width and wave propagation during hurricane approach. Results from simulations of the passage of cold fronts suggest that synoptic meteorological systems can strongly affect the surf zone and inner-shelf response and therefore act as strong drivers of long-term littoral sediment transport. We will also present some of the challenges faced in developing the modeling system.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. P. Jensen; Toto, T.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
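The per-level time interpolation described above can be sketched as follows (a minimal Python illustration with invented launch times and values, not the actual VAP code):

```python
import numpy as np

# Sketch: linearly interpolate one atmospheric state variable at a single
# height level from sonde launch times onto a 1-minute grid. Launch times
# and temperatures are invented for illustration.
launch_minutes = np.array([0.0, 360.0, 720.0])      # three launches
temperature_k  = np.array([271.2, 268.9, 270.4])    # value at one level

grid = np.arange(0.0, 721.0, 1.0)                   # 1-minute time grid
temp_on_grid = np.interp(grid, launch_minutes, temperature_k)

print(temp_on_grid[180])  # halfway between the first two launches
```

The full VAP would repeat this for every variable on each of the 332 height levels; `np.interp` also holds the end values constant outside the sampled interval, which matters at the start and end of a day.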

  19. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C^0 continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level. © 2011 IEEE

  20. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
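As a rough illustration of the simplest method LIP provides, piecewise bilinear interpolation of f(x, y) on a rectangular mesh can be sketched as follows (an independent Python sketch, not LIP's ANSI C implementation):

```python
import numpy as np

# Minimal sketch of piecewise bilinear interpolation on a rectangular
# mesh, the simplest of the methods LIP provides. Mesh and values are
# invented for illustration.
def bilinear(xs, ys, f, x, y):
    i = np.searchsorted(xs, x) - 1          # mesh cell containing (x, y)
    j = np.searchsorted(ys, y) - 1
    i = np.clip(i, 0, len(xs) - 2)
    j = np.clip(j, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])  # local coordinates in [0, 1]
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * f[i, j]     + tx * (1 - ty) * f[i + 1, j]
          + (1 - tx) * ty       * f[i, j + 1] + tx * ty       * f[i + 1, j + 1])

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0])
f  = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])  # f[i, j] = 2*x + y
print(bilinear(xs, ys, f, 0.5, 0.5))  # bilinear is exact for this f: 1.5
```

The higher-order methods (bicubic, biherm, bimond, birational) replace the four-corner blend with stencils that also use derivative information, which is why LIP's setup step precomputes coefficients per object.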

  1. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  2. Elastic-Plastic J-Integral Solutions or Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
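The table-lookup idea can be illustrated in one parameter. The sketch below interpolates tabulated J values over the hardening exponent n, linearly in log(n); the numbers are invented and the paper's actual multi-variable interpolation scheme over (a/c, a/B, E/ys, n) differs:

```python
import numpy as np

# Hypothetical one-parameter version of the interpolation methodology:
# given J-integral values tabulated at grid points of the hardening
# exponent n, estimate J at an intermediate n. Values are invented.
n_grid = np.array([3.0, 5.0, 10.0, 20.0])      # tabulated hardening exponents
J_grid = np.array([12.0, 15.0, 21.0, 30.0])    # invented J values

n_query = 7.0
J_est = np.interp(np.log(n_query), np.log(n_grid), J_grid)  # linear in log(n)
print(round(J_est, 2))
```

The practical point of the paper is that such lookups replace a full nonlinear finite element analysis per crack, so a user without fracture-mechanics modeling experience can obtain a solution in seconds.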

  3. Retrieval of surface temperature by remote sensing. [of earth surface using brightness temperature of air pollutants

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1976-01-01

    A simple procedure and computer program were developed for retrieving the surface temperature from the measurement of upwelling infrared radiance in a single spectral region in the atmosphere. The program evaluates the total upwelling radiance at any altitude in the region of the CO fundamental band (2070-2220 1/cm) for several values of surface temperature. Actual surface temperature is inferred by interpolation of the measured upwelling radiance between the computed values of radiance for the same altitude. Sensitivity calculations were made to determine the effect of uncertainty in various surface, atmospheric and experimental parameters on the inferred value of surface temperature. It is found that the uncertainties in water vapor concentration and surface emittance are the most important factors affecting the accuracy of the inferred value of surface temperature.
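The retrieval step described above is an inverse interpolation: radiance is computed for several trial surface temperatures, and the measured radiance is interpolated between the computed values. A minimal sketch (the radiances below are invented monotone numbers, not output of a real radiative-transfer code):

```python
import numpy as np

# Sketch of retrieval by inverse interpolation: compute upwelling radiance
# at a fixed altitude for several trial surface temperatures, then invert
# the measured radiance against that table. All numbers are invented.
trial_ts     = np.array([280.0, 290.0, 300.0, 310.0])   # trial temperatures, K
computed_rad = np.array([55.0, 60.0, 66.0, 73.0])       # invented radiances

measured_rad = 63.0
retrieved_ts = np.interp(measured_rad, computed_rad, trial_ts)
print(retrieved_ts)  # 63 lies midway between 60 and 66 -> 295.0 K
```

This works because radiance increases monotonically with surface temperature over the band, which is what makes the table invertible.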

  4. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
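The "satisfy all examples exactly, interpolate smoothly in between" property can be illustrated with a Gaussian radial-basis-function interpolant, which is one standard way to realize example-based interpolation (the paper's EBI mechanism and its extension differ in detail; data and kernel width here are invented):

```python
import numpy as np

# Sketch of example-based interpolation: fit a Gaussian RBF model that
# passes exactly through the given input-output examples, then evaluate
# it at new inputs. Data and kernel width are invented.
x_ex = np.array([0.0, 1.0, 2.0, 3.0])        # example inputs (viewpoints)
f_ex = np.array([0.0, 1.0, 4.0, 9.0])        # example outputs

def kernel(a, b, width=1.0):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

# Solving K w = f makes the interpolant exact at every example.
w = np.linalg.solve(kernel(x_ex, x_ex), f_ex)

def interp(x_new):
    return kernel(np.atleast_1d(x_new), x_ex) @ w

print(interp(1.0))     # reproduces the example output (close to 1.0)
print(interp(1.5))     # smooth estimate between examples
```

In the NVS setting, x would be a viewpoint parameterization and f(x) a whole image (one such system per pixel), which is where the scaling and representational difficulties the paper addresses come in.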

  5. A closed-loop phase-locked interferometer for wide bandwidth position sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Andrew J., E-mail: Andrew.Fleming@Newcastle.edu.au; Routley, Ben S., E-mail: Ben.Routley@Newcastle.edu.au

    This article describes a position sensitive interferometer with closed-loop control of the reference mirror. A calibrated nanopositioner is used to lock the interferometer phase to the most sensitive point in the interferogram. In this configuration, large low-frequency movements of the sensor mirror can be detected from the control signal applied to the nanopositioner, and high-frequency short-range signals can be measured directly from the photodiode. It is demonstrated that these two signals are complementary and can be summed to find the total displacement. The resulting interferometer has a number of desirable characteristics: it is optically simple, does not require polarization or modulation to detect the direction of motion, does not require fringe-counting or interpolation electronics, and has a bandwidth equal to that of the photodiode. Experimental results demonstrate frequency response analysis of a high-speed positioning stage. The proposed instrument is ideal for measuring the frequency response of nanopositioners, electro-optical components, MEMs devices, ultrasonic devices, and sensors such as surface acoustic wave detectors.

  6. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  7. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. (© 2005 APA, all rights reserved).

  8. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  9. Pixel-based absolute surface metrology by three flat test with shifted and rotated maps

    NASA Astrophysics Data System (ADS)

    Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang

    2018-03-01

    In traditional three flat test, it only provides the absolute profile along one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation with three flat test has been proposed to reconstruct two-dimensional surface exactly. Pitch and yaw error during shift procedure is analyzed and compensated in our method. Compared with multi-rotation method proposed before, it only needs a 90° rotation and a shift, which is easy to carry out especially in condition of large size surface. It allows pixel level spatial resolution to be achieved without interpolation or assumption to the test surface. In addition, numerical simulations and optical tests are implemented and show the high accuracy recovery capability of the proposed method.

  10. A Novel Stimulus Artifact Removal Technique for High-Rate Electrical Stimulation

    PubMed Central

    Heffer, Leon F; Fallon, James B

    2008-01-01

    Electrical stimulus artifact corrupting electrophysiological recordings often makes the subsequent analysis of the underlying neural response difficult. This is particularly evident when investigating short-latency neural activity in response to high-rate electrical stimulation. We developed and evaluated an off-line technique for the removal of stimulus artifact from electrophysiological recordings. Pulsatile electrical stimulation was presented at rates of up to 5000 pulses/s during extracellular recordings of guinea pig auditory nerve fibers. Stimulus artifact was removed by replacing the sample points at each stimulus artifact event with values interpolated along a straight line, computed from neighbouring sample points. This technique required only that artifact events be identifiable and that the artifact duration remained less than both the inter-stimulus interval and the time course of the action potential. We have demonstrated that this computationally efficient sample-and-interpolate technique removes the stimulus artifact with minimal distortion of the action potential waveform. We suggest that this technique may have potential applications in a range of electrophysiological recording systems. PMID:18339428
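The sample-and-interpolate step described above can be sketched as follows (signal values and artifact positions are invented for illustration):

```python
import numpy as np

# Sketch of sample-and-interpolate artifact removal: replace the samples
# inside each known artifact window with values on a straight line computed
# from the neighbouring clean samples. Data are invented.
signal = np.array([0.0, 0.1, 0.2, 9.0, 9.5, 0.5, 0.6])  # samples 3-4 corrupted
artifact = np.zeros_like(signal, dtype=bool)
artifact[3:5] = True                                    # known artifact window

clean_idx = np.flatnonzero(~artifact)
signal[artifact] = np.interp(np.flatnonzero(artifact),
                             clean_idx, signal[clean_idx])

print(signal)  # corrupted samples replaced by the straight-line values
```

The only requirements, as the abstract notes, are that artifact samples be identifiable and that the gap be short relative to the waveform of interest; the interpolation itself is a single vectorized `np.interp` call.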

  11. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
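The two-stage hybrid idea, first correct baseline shifts, then smooth residual spikes, can be sketched as follows. This is a heavily simplified stand-in: the shift location is assumed known rather than detected, and a moving average stands in for Savitzky-Golay filtering; all signal parameters are invented:

```python
import numpy as np

# Sketch of the hybrid correction: (1) remove a motion-induced baseline
# shift, (2) smooth the remaining sharp spike. Signal, shift index, and
# smoothing window are invented; the paper's detection step differs.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 100.0)   # toy hemodynamic-like signal
signal[120:] += 2.0                      # motion-induced baseline shift
signal[60] += 5.0                        # motion-induced spike

# Step 1: correct the baseline shift at the (assumed known) index.
shift = signal[120] - signal[119]        # rough estimate of the jump
signal[120:] -= shift

# Step 2: smooth residual spikes with a short moving average
# (a stand-in for Savitzky-Golay filtering).
kernel = np.ones(5) / 5.0
smoothed = np.convolve(signal, kernel, mode="same")
print(smoothed[60] < signal[60])         # True: spike amplitude reduced
```

The paper's point is that the two failure modes are complementary: spline-type corrections handle steps but leave spikes, while smoothing handles spikes but leaves steps, so chaining them addresses both.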

  12. Control surface in aerial triangulation

    NASA Astrophysics Data System (ADS)

    Jaw, Jen-Jer

    With the increased availability of surface-related sensors, the collection of surface information has become easier and more straightforward than ever before. In this study, the author proposes a model in which surface information is integrated into the aerial triangulation workflow by hypothesizing plane observations in the object space; the object points estimated from photo measurements (or matching), together with the adjusted surface points, provide a better point group describing the surface. The algorithms require no special structure of surface points and involve no interpolation process. The suggested measuring strategy (pairwise measurements) results in a quite fluent and favorable working environment when taking measurements. Furthermore, the extension of the model employing the surface plane proves useful for tying photo models. The proposed model has been validated through simulation and through experiments carried out in the photogrammetric laboratory.

  13. Micromorphological characterization of zinc/silver particle composite coatings.

    PubMed

    Méndez, Alia; Reyes, Yolanda; Trejo, Gabriel; StĘpień, Krzysztof; Ţălu, Ştefan

    2015-12-01

    The aim of this study was to evaluate the three-dimensional (3D) surface micromorphology of zinc/silver particle (Zn/AgPs) composite coatings with antibacterial activity prepared using an electrodeposition technique. These 3D nanostructures were investigated over square areas of 5 μm × 5 μm by atomic force microscopy (AFM), fractal, and wavelet analysis. The fractal analysis of 3D surface roughness revealed that the (Zn/AgPs) composite coatings have fractal geometry. A triangulation method based on linear interpolation, applied to the AFM data, was employed to characterise the surfaces topographically (in amplitude, spatial distribution, and pattern of surface characteristics). The surface fractal dimension Df, as well as the height value distribution, was determined for the 3D nanostructure surfaces. © 2015 The Authors published by Wiley Periodicals, Inc.

  14. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…

  15. Comparing Vibrationally Averaged Nuclear Shielding Constants by Quantum Diffusion Monte Carlo and Second-Order Perturbation Theory.

    PubMed

    Ng, Yee-Hong; Bettens, Ryan P A

    2016-03-03

    Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to that of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH where big displacements away from equilibrium happen (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with the use of more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding constants σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for any future costly ab initio σ calculations.
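Modified Shepard's interpolation augments inverse-distance weights with local Taylor expansions about each data point; the minimal sketch below keeps only the plain inverse-distance (Shepard) weighting, on invented 2-D data, to show the weighting idea:

```python
import numpy as np

# Sketch of Shepard (inverse-distance) weighting, the basis that the
# modified Shepard method builds on. Points and values are invented.
pts  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])

def shepard(q, p=2.0):
    d = np.linalg.norm(pts - q, axis=1)
    if np.any(d == 0.0):                 # query coincides with a data point
        return vals[np.argmin(d)]
    w = 1.0 / d ** p                     # inverse-distance weights
    return np.sum(w * vals) / np.sum(w)

print(shepard(np.array([0.0, 0.0])))   # 1.0: interpolates the data exactly
print(shepard(np.array([0.5, 0.5])))   # 2.0: equidistant from all three points
```

In the PES application, each value would carry a local second-order Taylor expansion of the energy, so the weighted sum blends local quadratic models rather than constants.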

  16. Experiment of Rain Retrieval over Land Using Surface Emissivity Map Derived from TRMM TMI and JRA25

    NASA Astrophysics Data System (ADS)

    Furuzawa, Fumie; Masunaga, Hirohiko; Nakamura, Kenji

    2010-05-01

    We are developing a dataset of global land surface emissivity calculated from TRMM TMI brightness temperature (TB) and atmospheric profile data from the Japanese 25-year Reanalysis Project (JRA-25) for regions identified as no-rain by TRMM PR, assuming zero cloud liquid water above the 0°C level. For evaluation, some characteristics of the global monthly emissivity maps are checked, for example, the dependency of emissivity on TMI frequency and local time, and its seasonal/annual variation. Moreover, these data are classified based on JRA-25 land type or soil wetness and compared. The histogram of the polarization difference of emissivity is similar to that of TB and mostly reflects the variability of land type or soil wetness, while the histogram of vertical emissivity shows only small differences. Next, by interpolating this instantaneous dataset with Gaussian function weighting, we derive an emissivity over neighboring rainy regions and assess the interpolated emissivity by running a radiative transfer model using the PR rain profile and comparing with observed TB. Preliminary rain retrieval from the emissivities for some frequencies and TBs is evaluated against the PR rain profile and TMI rain rate. Moreover, another method is tested to estimate surface temperature from two emissivities, based on their statistical relation for each land type. We will show the results for vertical and horizontal emissivities of each frequency.
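The Gaussian-function weighting step described above can be sketched as a weighted mean of clear-sky neighbours, with weights falling off as a Gaussian of distance (coordinates, emissivities, and the length scale below are invented):

```python
import numpy as np

# Sketch of Gaussian-weighted interpolation: estimate emissivity at a rainy
# pixel from clear-sky neighbours. All numbers are invented.
clear_xy   = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # clear pixels
clear_emis = np.array([0.92, 0.95, 0.90])                    # their emissivities

def gauss_weighted(q, scale=1.5):
    d2 = np.sum((clear_xy - q) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * scale ** 2))     # Gaussian distance weights
    return np.sum(w * clear_emis) / np.sum(w)

print(round(gauss_weighted(np.array([1.0, 1.0])), 4))  # equidistant -> plain mean, 0.9233
```

Unlike inverse-distance weighting, the Gaussian weights never diverge near a data point, so the interpolated field stays smooth across the clear/rainy boundary.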

  17. Spatial dynamics of the invasive defoliator amber-marked birch leafminer across the Anchorage landscape

    Treesearch

    J.E. Lundquist; R.M. Reich; M. Tuffly

    2012-01-01

    The amber-marked birch leafminer has caused severe infestations of birch species in Anchorage, AK, since 2002. Its spatial distribution has been monitored since 2006 and summarized using interpolated surfaces based on simple kriging. In this study, we developed methods of assessing and describing spatial distribution of the leafminer as they vary from year to year, and...

  18. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

    Heat and moisture exchange between the ocean surface and atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution needed to accurately capture small-scale features that affect heat and moisture flux. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an OI technique.

  19. Simulation of interaction between ground water in an alluvial aquifer and surface water in a large braided river

    USGS Publications Warehouse

    Leake, S.A.; Lilly, M.R.

    1995-01-01

    The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
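The river-distance idea described above can be sketched as a one-dimensional interpolation: stage is known at a few surveyed stations, and every model cell the river crosses is assigned a head by interpolating along the distance coordinate (station locations and stages below are invented):

```python
import numpy as np

# Sketch of assigning river head to fine-grid cells by interpolating
# known stage values along a river-distance coordinate. Numbers invented.
station_dist = np.array([0.0, 5000.0, 12000.0])   # m along the river
station_head = np.array([135.0, 132.5, 129.0])    # stage, m above datum

cell_dist = np.array([1000.0, 6000.0, 11000.0])   # river distance of cells
cell_head = np.interp(cell_dist, station_dist, station_head)
print(cell_head)  # heads decrease smoothly downstream
```

`np.interp` also extrapolates by holding the end values constant, matching the "interpolating and extrapolating river head" behavior the procedure needs at the grid edges.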

  20. Hydrographic Surveys for Six Water Bodies in Eastern Nebraska, 2005-07

    USGS Publications Warehouse

    Johnson, Michaela R.; Andersen, Michael J.; Sebree, Sonja K.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Nebraska Department of Environmental Quality, completed hydrographic surveys for six water bodies in eastern Nebraska: Maskenthine Wetland, Olive Creek Lake, Standing Bear Lake, Wagon Train Lake and Wetland, Wildwood Lake, and Yankee Hill Lake and sediment basin. The bathymetric data were collected using a boat-mounted survey-grade fathometer that operated at 200 kHz, and a differentially corrected Global Positioning System with antenna mounted directly above the echo-sounder transducer. Shallow-water and terrestrial areas were surveyed using a Real-Time Kinematic Global Positioning System. The bathymetric, shallow-water, and terrestrial data were processed in a geographic information system to generate a triangulated irregular network representation of the bottom of the water body. Bathymetric contours were interpolated from the triangulated irregular network data using a 2-foot contour interval. Bathymetric contours at the conservation pool elevation for Maskenthine Wetland, Yankee Hill Lake, and Yankee Hill sediment pond also were interpolated in addition to the 2-foot contours. The surface area and storage capacity of each lake or wetland were calculated for 1-foot intervals of water surface elevation and are tabulated in the Appendix for all water bodies.
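The surface-area and storage-capacity tabulation described above can be sketched on a tiny grid of bed elevations standing in for the TIN surface (grid, cell size, and elevations are invented):

```python
import numpy as np

# Sketch of a stage-area/stage-storage computation: at each trial
# water-surface elevation, area is the wetted-cell count times cell area,
# and storage is the summed water depth times cell area. Data invented.
bed = np.array([[100.0, 101.0],
                [101.0, 103.0]])       # bed elevation, ft
cell_area = 10.0                       # ft^2 per grid cell

for stage in [101.0, 102.0, 103.0]:
    depth = np.clip(stage - bed, 0.0, None)          # water depth per cell
    area = cell_area * np.count_nonzero(depth > 0.0)  # wetted surface area
    volume = cell_area * depth.sum()                  # storage volume
    print(stage, area, volume)
```

A TIN-based computation integrates over triangles rather than square cells, but the stage-by-stage clip-and-sum structure is the same.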

  1. Compilation of Local Fallout Data from Test Detonations 1945-1962 Extracted from DASA 1251. Volume I. Continental U.S. Tests

    DTIC Science & Technology

    1979-05-01

    fallout patterns by "dot-dash" lines. The time lines are intended to give only a rough average arrival time in hours as estimated from the wind reports and ... by interpolation between the H-11 and H+11 hour values. 4. The surface air pressure was 13.10 psi, the temperature -2.0°C and the relative humidity ... surface air pressure was 13.04 psi, the temperature -2.8°C, and the relative humidity 87%.

  2. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon the speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can differ significantly; the relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias, with a relative difference that can reach 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  3. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.

  4. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.

  5. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  6. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
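The temporal component of such gap filling can be sketched in a few lines. This is a hedged illustration, assuming a 1-D leaf-area-index series with missing samples marked as NaN and simple linear interpolation in time; the record's hybrid method also draws on spatial neighbors and land-cover information, which this sketch omits.

```python
import numpy as np

def fill_temporal(series, valid):
    """Linearly interpolate missing samples in a 1-D time series.

    series: values with gaps; valid: boolean mask of trusted samples.
    Times outside the first/last valid sample are held at the nearest
    valid value (np.interp's default endpoint behavior).
    """
    t = np.arange(len(series))
    return np.interp(t, t[valid], series[valid])

# Toy LAI series with gaps at indices 1, 2 and 4.
lai = np.array([1.0, np.nan, np.nan, 2.5, np.nan, 4.0])
filled = fill_temporal(lai, ~np.isnan(lai))
```

A production version would apply this per pixel and blend it with a spatial estimate where the temporal gaps are long.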

  7. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563

  8. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
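As an illustration of the technique the records above evaluate, a minimal trilinear interpolation over a regular voxel grid might look as follows. This is a sketch, not the authors' implementation; coordinates are assumed to lie strictly inside the last cell so that all eight corner indices are valid.

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Trilinearly interpolate `volume` at fractional voxel coordinates.

    Assumes 0 <= x < nx - 1 (likewise for y, z) so every corner index
    is in bounds.
    """
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0
    v = volume
    # Collapse the x axis first, then y, then z.
    c00 = v[x0, y0, z0] * (1 - fx) + v[x1, y0, z0] * fx
    c10 = v[x0, y1, z0] * (1 - fx) + v[x1, y1, z0] * fx
    c01 = v[x0, y0, z1] * (1 - fx) + v[x1, y0, z1] * fx
    c11 = v[x0, y1, z1] * (1 - fx) + v[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

vol = np.arange(8.0).reshape(2, 2, 2)   # v[x, y, z] = 4x + 2y + z
center = trilinear(vol, 0.5, 0.5, 0.5)
```

Because the test volume is linear in each coordinate, the center value equals the mean of the eight corners.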

  9. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

    Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate more CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.

  10. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated with synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  11. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation, and universal Kriging interpolation) were used to interpolate groundwater levels between 2001 and 2013. Cross-validation, absolute error, and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the areas at its top and bottom. Because of changes in land use, the groundwater level also varies over time: the average decline rate between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth have caused over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial, and river areas is relatively high, while the shrinking of farmland and the development of water-saving irrigation have reduced agricultural water use, so the decline rate of the groundwater level in agricultural areas is not significant.

  12. Comparison of sEMG processing methods during whole-body vibration exercise.

    PubMed

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
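The spectral linear interpolation idea above can be sketched as follows: delete the bins at the vibration frequency and its harmonics and refill them by linear interpolation between the neighboring untouched bins. The signal, the bin half-width, and the number of harmonics are illustrative assumptions; the published method operates on measured sEMG, not this toy sinusoid.

```python
import numpy as np

def spectral_interpolation(signal, fs, f0, n_harmonics=3, half_width=1):
    """Remove spectral spikes at f0 and its harmonics by replacing the
    affected bins with a linear interpolation between the adjacent
    untouched bins, then inverse-transforming."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for k in range(1, n_harmonics + 1):
        center = int(np.argmin(np.abs(freqs - k * f0)))
        lo, hi = center - half_width - 1, center + half_width + 1
        if lo < 0 or hi >= len(spec):
            continue
        idx = np.arange(lo + 1, hi)
        # np.interp accepts complex fp values, so real and imaginary
        # parts are interpolated together.
        spec[idx] = np.interp(idx, [lo, hi], [spec[lo], spec[hi]])
    return np.fft.irfft(spec, n=len(signal))

fs = 1000.0
t = np.arange(1000) / fs
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz "vibration"
clean = spectral_interpolation(raw, fs, f0=50.0, n_harmonics=1)
```

With exact-bin sinusoids, the 50 Hz spike is replaced by near-zero interpolated values while the 10 Hz content is untouched.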

  13. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
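The extension approach with a k-d tree can be sketched as below. This is a hedged illustration, not the authors' code: the time-scaling factor `c`, the neighbor count `k`, and the IDW power are tuning assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def st_idw(obs_xyt, obs_vals, query_xyt, c=1.0, k=4, power=2.0):
    """Spatiotemporal IDW via the 'extension approach': time is treated
    as an extra coordinate scaled by a factor c, and a k-d tree supplies
    the k nearest space-time neighbors."""
    scale = np.array([1.0, 1.0, c])
    tree = cKDTree(obs_xyt * scale)
    dist, idx = tree.query(query_xyt * scale, k=k)
    dist = np.maximum(dist, 1e-12)          # guard exact hits
    w = dist ** (-power)
    return (w * obs_vals[idx]).sum(axis=-1) / w.sum(axis=-1)

# Four observations at the corners of a unit square, all at time 0.
obs = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
est = st_idw(obs, vals, np.array([[0.5, 0.5, 0.0]]), k=4)
```

At the square's center all four space-time distances are equal, so the estimate is the plain mean of the observations.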

  14. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146

  15. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

    The representation of fronts detected by altimetry is used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily along-track altimetry. Then, modeled sea surface height is interpolated to the locations of the along-track observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained by determining the number of "extra" fronts found in the model but not in the altimetry, and by assessing the horizontal and vertical dimensions of the front in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy along-track altimetry observations, and to the minimum size of fronts being analyzed, is assessed.

  16. New, simplified interpolation method for estimation of microscopic nuclear masses based on the P-factor, P = NpNn/(Np+Nn)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.; Brenner, D.S.; Casten, R.F.

    1987-12-10

    A new semi-empirical method, based on the use of the P-factor (P = NpNn/(Np+Nn)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
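The P-factor itself is a one-line computation. The sketch below is illustrative; the shell closures are passed in explicitly rather than taken from any particular mass model.

```python
def p_factor(z, n, z_shell, n_shell):
    """P = Np*Nn / (Np + Nn), where Np and Nn count valence protons and
    neutrons from the nearest closed shells (supplied by the caller;
    the choice of shell closures is an input, not hard-coded physics)."""
    n_p = abs(z - z_shell)
    n_n = abs(n - n_shell)
    return 0.0 if n_p + n_n == 0 else n_p * n_n / (n_p + n_n)

# Example for an actinide-region nucleus, counting from Z = 82, N = 126.
p_u234 = p_factor(92, 142, 82, 126)   # Np = 10, Nn = 16
```

P vanishes at doubly magic nuclei and grows with the product of valence nucleon numbers, which is what consolidates the mass systematics into near-linear trends.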

  17. A climatically-derived global soil moisture data set for use in the GLAS atmospheric circulation model seasonal cycle experiment

    NASA Technical Reports Server (NTRS)

    Willmott, C. J.; Field, R. T.

    1984-01-01

    Algorithms for point interpolation and contouring on the surface of the sphere and in Cartesian two-space are developed from Shepard's (1968) well-known local search method. These mapping procedures are then used to investigate the errors that appear on small-scale climate maps as a result of the all-too-common practice of interpolating from irregularly spaced data points to the nodes of a regular lattice, and contouring, in Cartesian two-space. The mean annual air temperature field over the western half of the Northern Hemisphere is estimated both on the sphere, assumed to be correct, and in Cartesian two-space. When the spherically and Cartesian-approximated air temperature fields are mapped and compared, the magnitudes (as large as 5 C to 10 C) and the distribution of the errors associated with the latter approach become apparent.
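The core of the sphere-versus-plane discrepancy is the distance measure used in the local search. A short illustration, assuming the haversine formula and a spherical Earth of radius 6371 km: at 60 degrees latitude, treating longitude as a planar coordinate roughly doubles the apparent ground distance, which distorts Shepard-style distance weights accordingly.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance in km via the haversine formula."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    half_dlat = np.radians(lat2 - lat1) / 2.0
    half_dlon = np.radians(lon2 - lon1) / 2.0
    a = np.sin(half_dlat) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(half_dlon) ** 2
    return 2.0 * radius * np.arcsin(np.sqrt(a))

km_per_degree = 2.0 * np.pi * 6371.0 / 360.0     # ~111.2 km at the equator
d_sphere = great_circle_km(60.0, 0.0, 60.0, 10.0)  # 10 deg of longitude at 60 N
d_plane = 10.0 * km_per_degree                     # naive flat lat/lon distance
```

The ratio d_sphere/d_plane is close to cos(60 deg) = 0.5, which is the kind of systematic error that propagates into interpolated small-scale climate maps.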

  18. Self-organizing map analysis using multivariate data from theophylline tablets predicted by a thin-plate spline interpolation.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Yamamoto, Rie; Takayama, Kozo

    2013-01-01

    The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared based on a standard formulation. The tensile strength, disintegration time, and stability of these variables were measured as response variables. These responses were predicted quantitatively based on nonlinear TPS. A large amount of data on these tablets was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the tablets were classified into several distinct clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and tablet characteristics. The results of this study suggest that increasing the proportion of microcrystalline cellulose (MCC) improved the tensile strength and the stability of tensile strength of these theophylline tablets. In addition, the proportion of MCC has an optimum value for disintegration time and stability of disintegration. Increasing the proportion of magnesium stearate extended disintegration time. Increasing the compression force improved tensile strength, but degraded the stability of disintegration. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulations.

  19. Assessing the Suitability and Limitations of Satellite-based Measurements for Estimating CO, CO2, NO2 and O3 Concentrations over the Niger Delta

    NASA Astrophysics Data System (ADS)

    Fagbeja, M. A.; Hill, J. L.; Chatterton, T. J.; Longhurst, J. W.; Akinyede, J. O.

    2011-12-01

    Space-based satellite sensor technology may provide important tools in the study and assessment of national, regional and local air pollution. However, the application of optical satellite sensor observation of atmospheric trace gases, including those considered to be 'air pollutants', within the lower latitudes is limited due to prevailing climatic conditions. The lack of appropriate air pollution ground monitoring stations within the tropical belt reduces the ability to verify and calibrate space-based measurements. This paper considers the suitability of satellite remotely sensed data in estimating concentrations of atmospheric trace gases in view of the prevailing climate over the Niger Delta region. The methodological approach involved identifying suitable satellite data products and using the ArcGIS Geostatistical Analyst kriging interpolation technique to generate surface concentrations from satellite column measurements. The observed results are considered in the context of the climate of the study area. Using data from January 2001 to December 2005, an assessment of the suitability of satellite sensor data to interpolate column concentrations of trace gases over the Niger Delta has been undertaken and indicates varying degrees of reliability. The level of reliability of the interpolated surfaces is predicated on the number and spatial distributions of column measurements. Accounting for the two climatic seasons in the region, the interpolation of total column concentrations of CO and CO2 from SCIAMACHY produced both reliable and unreliable results over inland parts of the region during the dry season, while mainly unreliable results are observed over the coastal parts especially during the rainy season due to inadequate column measurements. The interpolation of tropospheric measurements of NO2 and O3 from GOME and OMI respectively produced reliable results all year. 
This is thought to be due to the spatial distribution of available column measurements, which were more regularly distributed over the region than the total column measurements of CO and CO2. Observations also indicated higher concentrations during the dry season than the wet seasons. The observed trend in the concentration of tropospheric O3 was as expected, considering the observed concentrations of precursor gases of CO and NO2. Whilst satellites currently play a significant role in the assessment of global air pollution and the long-range transport of air pollutants, the technology is faced with limitations in assessing ground level concentrations of pollutants. These limitations restrict the extent to which both pollution emissions and impacts of receptors can be accurately assessed. Further research is required to improve the capability of satellite sensors to observe atmospheric pollutants within the lower troposphere, where pollution has the most direct impacts on humans and ecosystems.

  20. Hermite-Birkhoff interpolation in the nth roots of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.

    1980-06-01

    Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.

  1. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The results for the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest error.
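Leave-one-out cross-validation for choosing between polynomial orders can be sketched as follows. The data here are a synthetic smile-shaped curve, not option prices; the second- and fourth-order candidates mirror the record's comparison.

```python
import numpy as np

def loocv_rmse(x, y, degree):
    """Leave-one-out cross-validation RMSE of a degree-`degree` polynomial fit:
    each point is predicted from a fit to all the other points."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errors.append(y[i] - np.polyval(coeffs, x[i]))
    return float(np.sqrt(np.mean(np.square(errors))))

x = np.linspace(-1.0, 1.0, 15)
y = x ** 4 - 0.5 * x ** 2          # smile-shaped stand-in for implied volatilities
best_degree = min((2, 4), key=lambda d: loocv_rmse(x, y, d))
```

Because the underlying curve is quartic, the fourth-order fit wins by a wide margin in LOOCV error.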

  2. Comparison of two fractal interpolation methods

    NASA Astrophysics Data System (ADS)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the midpoint displacement simulations showed a relatively flat surface whose peaks take on different heights as the fractal dimension increases, whereas the Weierstrass-Mandelbrot simulations showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, or 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, whereas for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the midpoint displacement simulation is not periodic and shows prominent randomness, making it suitable for simulating aperiodic surfaces, while the Weierstrass-Mandelbrot simulation has strong periodicity, making it suitable for simulating periodic surfaces.
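    The midpoint displacement method discussed above can be sketched in one dimension. This is an illustrative reconstruction, not the authors' code; the parameterization via the Hurst exponent (D = 2 − H for a profile, so H = 0.4 gives D = 1.6) is a standard assumption.

    ```python
    import random

    def midpoint_displacement(n_levels, hurst, seed=0):
        """1-D fractal profile via recursive midpoint displacement.

        hurst in (0, 1); the fractal dimension of the profile is D = 2 - hurst.
        """
        rng = random.Random(seed)
        points = [0.0, 0.0]          # endpoints of the initial segment
        scale = 1.0
        for _ in range(n_levels):
            scale *= 0.5 ** hurst    # displacement magnitude shrinks each level
            refined = []
            for left, right in zip(points, points[1:]):
                mid = 0.5 * (left + right) + rng.gauss(0.0, scale)
                refined += [left, mid]
            refined.append(points[-1])
            points = refined
        return points

    profile = midpoint_displacement(n_levels=8, hurst=0.4)  # D = 1.6
    print(len(profile))  # → 257 (2**8 + 1 points)
    ```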

  3. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can each provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
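    The block truncation coding stage that the codec above builds on can be sketched as follows. This is the classic BTC formulation (a bitmap plus two moment-preserving representative values per block), shown for orientation; it is not the paper's full pipeline.

    ```python
    import math
    import numpy as np

    def btc_block(block):
        """Classic BTC of one image block: returns a bitmap plus two
        representative values chosen to preserve the block mean and variance."""
        mean, std = block.mean(), block.std()
        bitmap = block >= mean
        q, m = int(bitmap.sum()), block.size
        if q in (0, m):                       # flat block: one level suffices
            return bitmap, mean, mean
        low = mean - std * math.sqrt(q / (m - q))
        high = mean + std * math.sqrt((m - q) / q)
        return bitmap, low, high

    block = np.array([[2.0, 3.0], [8.0, 9.0]])
    bitmap, low, high = btc_block(block)
    # Decoder reconstructs each pixel as `high` where bitmap is set, else `low`.
    print(bitmap.astype(int).tolist(), round(low, 2), round(high, 2))
    ```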

  4. Predicting Numbers of Problems in Development of Software

    NASA Technical Reports Server (NTRS)

    Simonds, Charles H.

    2005-01-01

    A method has been formulated to enable prediction of the amount of work that remains to be performed in developing flight software for a spacecraft. The basic concept embodied in the method is that of using an idealized curve (specifically, the Weibull function) to interpolate from (1) the numbers of problems discovered thus far to (2) a goal of discovering no new problems after launch (or six months into the future for software already in use in orbit). The steps of the method can be summarized as follows: 1. Take raw data in the form of problem reports (PRs), including the dates on which they are generated. 2. Remove, from the data collection, PRs that are subsequently withdrawn or to which no response is required. 3. Count the numbers of PRs created in 1-week periods and the running total number of PRs each week. 4. Perform the interpolation by making a least-squares fit of the Weibull function to (a) the cumulative distribution of PRs gathered thus far and (b) the goal of no more PRs after the currently anticipated launch date. The interpolation and the anticipated launch date are subject to iterative re-estimation.
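    The fitting step can be sketched as follows. The weekly counts here are hypothetical, and for brevity the Weibull asymptote is fixed slightly above the last observed count and the curve linearized, rather than performing the full least-squares fit with the launch-date constraint that the method describes.

    ```python
    import numpy as np

    # Hypothetical weekly cumulative problem-report (PR) counts.
    weeks = np.arange(1, 21, dtype=float)
    cum_prs = np.array([2, 5, 9, 15, 22, 30, 37, 44, 50, 55,
                        59, 62, 65, 67, 69, 70, 71, 72, 72, 73], dtype=float)

    # Weibull growth curve: F(t) = total * (1 - exp(-(t/scale)**shape)).
    # Fix the asymptote just above the last count (an assumption), then
    # linearize: log(-log(1 - F/total)) = shape*log(t) - shape*log(scale).
    total = cum_prs[-1] + 2.0
    z = np.log(-np.log(1.0 - cum_prs / total))
    shape, intercept = np.polyfit(np.log(weeks), z, 1)
    scale = np.exp(-intercept / shape)

    def weibull_cum(t):
        return total * (1.0 - np.exp(-(t / scale) ** shape))

    # Extrapolated number of problems still undiscovered at, say, week 26.
    remaining = total - weibull_cum(26.0)
    print(round(shape, 2), round(remaining, 2))
    ```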

  5. Remote sensing, geographical information systems, and spatial modeling for analyzing public transit services

    NASA Astrophysics Data System (ADS)

    Wu, Changshan

    Public transit service is a promising transportation mode because of its potential to address urban sustainability. Current ridership of public transit, however, is very low in most urban regions, particularly those in the United States. This low ridership can be attributed to many factors, among which poor service quality is key. Given this, there is a need for transit planning and analysis to improve service quality. Traditionally, spatially aggregate data are utilized in transit analysis and planning. Examples include data associated with the census, zip codes, states, etc. Few studies, however, address the influences of spatially aggregate data on transit planning results. In this research, previous studies in transit planning that use spatially aggregate data are reviewed. Next, problems associated with the utilization of aggregate data, the so-called modifiable areal unit problem (MAUP), are detailed and the need for fine resolution data to support public transit planning is argued. Fine resolution data are generated using intelligent interpolation techniques with the help of remote sensing imagery. In particular, impervious surface fraction, an important socio-economic indicator, is estimated through a fully constrained linear spectral mixture model using Landsat Enhanced Thematic Mapper Plus (ETM+) data within the metropolitan area of Columbus, Ohio in the United States. Four endmembers, low albedo, high albedo, vegetation, and soil, are selected to model heterogeneous urban land cover. Impervious surface fraction is estimated by analyzing the low and high albedo endmembers. With the derived impervious surface fraction, three spatial interpolation methods, spatial regression, dasymetric mapping, and cokriging, are developed to interpolate detailed population density. Results suggest that cokriging applied to impervious surface is a better alternative for estimating fine resolution population density. 
With the derived fine resolution data, a multiple route maximal covering/shortest path (MRMCSP) model is proposed to address the tradeoff between public transit service quality and access coverage in an established bus-based transit system. Results show that it is possible to improve current transit service quality by eliminating redundant or underutilized service stops. This research illustrates that fine resolution data can be efficiently generated to support urban planning, management and analysis. Further, this detailed data may necessitate the development of new spatial optimization models for use in analysis.

  6. Determination of Arctic sea ice thickness in the winter of 2007

    NASA Astrophysics Data System (ADS)

    Calvao, J.; Wadhams, P.; Rodrigues, J.

    2009-04-01

    The L3H phase of operation of ICESat's laser in the winter of 2007 coincided for about two weeks with the cruise of the British submarine Tireless, on which upward-looking and multibeam sonar systems were mounted, thus providing the first opportunity for a simultaneous determination of sea ice freeboard and draft in the Arctic Ocean. The ICESat satellite carries a laser altimeter dedicated to the observation of polar regions, generating accurate profiles of surface topography along the tracks (footprint diameter 70 m), which can be inverted to determine sea-ice freeboard heights using a "lowest level" filtering scheme. The procedure applied to obtain the ice freeboard F=h-N-MDT (where h is the ICESat ellipsoidal height estimate, N is the geoid undulation and MDT is the ocean mean dynamic topography) for the whole Arctic basin (with the exception of points beyond 86N) consisted of high-pass filtering of the satellite data to remove low-frequency effects due to the geoid and ocean dynamics (the geoid model ArcGP, with sufficient accuracy to allow the computation of the freeboard, was made available very recently). The original tide model was replaced by the tide model AOTIM5 and the tide loading model TPXO6.2, and the inverse barometer correction was computed. As there are no MDT models with sufficient accuracy, it is necessary to identify leads of open water or thin ice, using surface reflectivity and waveform shape, to allow interpolation of the ocean surface. Several solutions were tested to define the ocean surface, and the computed freeboard values were interpolated onto a 5x5 minute grid, onto which the submarine track was also interpolated. At the same time, along-track single-beam upward-looking sonar data were recorded using an Admiralty pattern 780 echo sounder carried by the Tireless, from which we generated an ice draft profile of about 8,000 km between Fram Strait and the north coast of Alaska and back. 
    The merging of the two data sets provides new insight into the present Arctic sea ice thickness distribution, while a comparison with results obtained from previous submarine cruises and previous phases of operation of ICESat allows a fresh evaluation of the rate of sea ice thinning.
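    The stated freeboard relation F = h - N - MDT, together with a hydrostatic conversion from freeboard to thickness, can be sketched as follows. The conversion formula and density values are standard assumptions not stated in the abstract (snow load is neglected), and the numbers are illustrative.

    ```python
    # Assumed round-number densities (kg/m^3); snow load neglected.
    RHO_WATER, RHO_ICE = 1024.0, 917.0

    def freeboard(h, geoid_undulation, mdt):
        """F = h - N - MDT, as in the abstract (all heights in metres)."""
        return h - geoid_undulation - mdt

    def thickness_from_freeboard(f):
        """Hydrostatic equilibrium, rho_i*T = rho_w*(T - F), solved for T."""
        return f * RHO_WATER / (RHO_WATER - RHO_ICE)

    f = freeboard(h=23.45, geoid_undulation=23.00, mdt=0.05)   # 0.40 m
    print(round(f, 2), round(thickness_from_freeboard(f), 2))  # → 0.4 3.83
    ```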

  7. Spherical Demons: Fast Surface Registration

    PubMed Central

    Yeo, B.T. Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2009-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast – registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces. PMID:18979813

  8. Spherical demons: fast surface registration.

    PubMed

    Yeo, B T Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2008-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast - registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.

  9. Interpolations of groundwater table elevation in dissected uplands.

    PubMed

    Chung, Jae-won; Rogers, J David

    2012-01-01

    The variable elevation of the groundwater table in the St. Louis area was estimated using multiple linear regression (MLR), ordinary kriging, and cokriging as part of a regional program seeking to assess liquefaction potential. Surface water features were used to determine the minimum water table for MLR and to supplement the principal variables for ordinary kriging and cokriging. By evaluating the known depth to the water and the minimum water table elevation, the MLR analysis approximates the groundwater elevation for a contiguous hydrologic system. Ordinary kriging and cokriging estimate values in unsampled areas by calculating the spatial relationships between the unsampled and sampled locations. In this study, ordinary kriging did not incorporate topographic variations as an independent variable, while cokriging included topography as a supporting covariable. Cross validation suggests that cokriging provides a more reliable estimate at known data points with less uncertainty than the other methods. Profiles extending through the dissected uplands terrain suggest that: (1) the groundwater table generated by MLR mimics the ground surface and produces an exaggerated interpolation of groundwater elevation; (2) the groundwater table estimated by ordinary kriging tends to ignore local topography and exhibits oversmoothing of the actual undulations in the water table; and (3) cokriging appears to give a realistic water surface, which rises and falls in proportion to the overlying topography. The authors concluded that cokriging provided the most realistic estimate of the groundwater surface, which is the key variable in assessing soil liquefaction potential in unconsolidated sediments. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  10. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
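    One of the methods compared, inverse distance weighting, can be sketched in a few lines. The depth samples below are hypothetical, and the power-2 weighting is a common default rather than the study's specific setting.

    ```python
    import math

    def idw(samples, x, y, power=2.0):
        """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value)
        samples: nearby observations dominate the weighted average."""
        num = den = 0.0
        for xi, yi, v in samples:
            d2 = (x - xi) ** 2 + (y - yi) ** 2
            if d2 == 0.0:
                return v                     # exact hit: return the sample
            w = 1.0 / d2 ** (power / 2.0)
            num += w * v
            den += w
        return num / den

    # Hypothetical depth samples (m) on a small stream reach.
    depths = [(0.0, 0.0, 0.30), (2.0, 0.0, 0.55),
              (0.0, 2.0, 0.42), (2.0, 2.0, 0.61)]
    print(round(idw(depths, 1.0, 1.0), 3))   # equidistant → simple mean, 0.47
    ```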

  11. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
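    Of the five interpolation techniques compared, Gaussian fitting gave the lowest error on the simulated data. A minimal sketch of fitting a Gaussian to a sparsely sampled peak follows; as an assumption for brevity it uses the log-parabola (Caruana) method rather than nonlinear least squares, and the sampled peak is synthetic.

    ```python
    import math
    import numpy as np

    def gaussian_fit(t, y):
        """Estimate Gaussian peak parameters from sparse samples by fitting a
        parabola to log(y) (Caruana's method); y must be positive."""
        a, b, c = np.polyfit(t, np.log(y), 2)       # log y ≈ a*t^2 + b*t + c
        center = -b / (2.0 * a)
        sigma = math.sqrt(-1.0 / (2.0 * a))
        height = math.exp(c - b * b / (4.0 * a))
        return height, center, sigma

    # Sparse first-dimension samples of a peak (height 1, center 5, sigma 1).
    t = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
    y = np.exp(-0.5 * (t - 5.0) ** 2)
    h, mu, sig = gaussian_fit(t, y)
    print(round(h, 3), round(mu, 3), round(sig, 3))  # → 1.0 5.0 1.0
    ```

    The fitted curve can then be evaluated on a dense retention-time grid, which is what makes alignment of the undersampled first dimension possible.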

  12. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
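    For orientation, the half-pixel luma interpolation that such an architecture implements is an 8-tap FIR filter; the coefficients below are from the HEVC standard, but the sketch is a plain software reference, not the paper's reconfigurable pipeline.

    ```python
    # HEVC luma half-sample interpolation filter taps (sum = 64).
    HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

    def half_pel(samples, i):
        """Half-sample value between samples[i] and samples[i+1]; needs 3
        neighbours to the left and 4 to the right. >>6 normalises the
        64-sum of taps with rounding."""
        acc = sum(t * samples[i - 3 + k] for k, t in enumerate(HALF_PEL_TAPS))
        return (acc + 32) >> 6

    row = [100] * 10               # flat region: interpolation must return 100
    print(half_pel(row, 4))        # → 100
    ```

    Quarter-pixel positions use different tap sets in the standard, which is why a data path that can be reconfigured between the filters saves area.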

  13. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media

    DTIC Science & Technology

    2015-09-24

    TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.

  14. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a magnetic resonance imaging technology that has developed rapidly in recent years, and diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately according to the situation. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the tensor determinant, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation tool for diffusion tensor image processing.

  15. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data, such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving this nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
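    Newton's method on a nonlinear system, the solver the dissertation found attractive, can be sketched as follows. The two-equation system solved here is illustrative only, not the interpolation equations from the thesis.

    ```python
    import numpy as np

    def newton(f, jac, x0, tol=1e-12, max_iter=50):
        """Newton's method for f(x) = 0: repeatedly solve J(x)*step = -f(x)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            step = np.linalg.solve(jac(x), -f(x))
            x += step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Illustrative system: x^2 + y^2 = 4 and x*y = 1.
    f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
    jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
    root = newton(f, jac, [2.0, 0.3])
    print(root)   # converges to a root near (1.932, 0.518)
    ```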

  16. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities, and their inverse functions, for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
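    The kernel-interpolation idea can be sketched in one dimension: with triangular membership functions, the membership-weighted average of knot values reduces to linear interpolation. This is an illustrative reduction under assumed membership shapes, not the FLHI implementation.

    ```python
    def triangular_membership(x, left, center, right):
        """Triangular fuzzy membership function peaking at `center`."""
        if left < x <= center:
            return (x - left) / (center - left)
        if center < x < right:
            return (right - x) / (right - center)
        return 1.0 if x == center else 0.0

    def flhi_1d(x, knots, values):
        """1-D sketch: each knot's membership acts as an interpolation kernel;
        the normalized weighted average of knot values is the interpolant."""
        pad = [knots[0] - 1.0] + list(knots) + [knots[-1] + 1.0]
        weights = [triangular_membership(x, pad[i], pad[i + 1], pad[i + 2])
                   for i in range(len(knots))]
        s = sum(weights)
        return sum(w * v for w, v in zip(weights, values)) / s

    # Triangular kernels reproduce linear interpolation between knots.
    print(flhi_1d(1.5, [0.0, 1.0, 2.0], [0.0, 2.0, 3.0]))  # → 2.5
    ```

    Swapping the membership shape (e.g. for a cubic or Lanczos kernel) changes the interpolation characteristics without changing the surrounding machinery, which is the design point the paper makes.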

  17. Note: Increasing dynamic range of digital-to-analog converter using a superconducting quantum interference device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Masakazu, E-mail: m.nakanishi@aist.go.jp

    Responses of a superconducting quantum interference device (SQUID) depend periodically on the magnetic flux coupled to its superconducting ring, and the period is one flux quantum (Φ₀ = h/2e, where h and e denote Planck's constant and the elementary charge, respectively). Using this periodicity, we previously proposed a first-generation digital-to-analog converter based on a SQUID (SQUID DAC) with a linear current output whose step interval corresponded to Φ₀. Here we report a modification that increases the dynamic range by interpolating within each interval. The linearity of the interpolation is also based on the quantum periodicity. As a demonstration, a SQUID DAC with a dynamic range of about 1.4 × 10⁷ was created.

  18. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation is considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of interpolation of the continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies introduced by the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
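    The convolution-multiplication duality the review postulates can be checked numerically in a few lines (circular form, one dimension, synthetic data):

    ```python
    import numpy as np

    n = 64
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(n)
    kernel = np.zeros(n)
    kernel[:3] = [0.25, 0.5, 0.25]   # small smoothing kernel

    # Circular convolution done directly in the spatial domain...
    direct = np.array([sum(signal[(i - j) % n] * kernel[j] for j in range(n))
                       for i in range(n)])
    # ...and via the FFT: multiply the spectra, transform back.
    via_fft = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

    print(np.allclose(direct, via_fft))  # → True
    ```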

  19. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy-driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R-wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts, so missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing-RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved interpolation accuracy in comparison with a static interpolation method.
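    A static interpolation baseline of the kind the proposed JIT method is compared against can be sketched as follows. The RRI values are hypothetical, and linear interpolation stands in for whichever static method the paper actually used.

    ```python
    import numpy as np

    # Hypothetical RRI series (ms) with undetected beats marked as NaN.
    rri = np.array([820.0, 815.0, np.nan, np.nan, 830.0, 825.0, 818.0])
    idx = np.arange(len(rri))
    known = ~np.isnan(rri)

    # Static (non-adaptive) gap filling by linear interpolation over the
    # known beats, prior to HRV analysis.
    filled = rri.copy()
    filled[~known] = np.interp(idx[~known], idx[known], rri[known])
    print(filled[2], filled[3])  # → 820.0 825.0
    ```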

  20. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  1. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not with a simple basis system but with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, which share some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as the interpolation basis.

  2. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion.

    PubMed

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matt A; Burken, Joel G

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman's correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. 
    The rapid and non-invasive nature of tree sampling is a notable advantage: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area, and it offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.

  3. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    USGS Publications Warehouse

    Wilson, Jordan L.; Samaranayake, V.A.; Limmer, Matthew A.; Burken, Joel G.

    2018-01-01

Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective as well as time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access to residential and commercial buildings to drill holes through basement slabs to install sampling ports, or require substantial equipment to install groundwater or soil-vapor sampling points outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE), and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation between average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) and an interpolated tree-core PCE concentration surface, with Spearman's correlation coefficients (ρ) ranging from 0.31 to 0.53, comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface pollutants. 
The rapid and non-invasive nature of tree sampling is a notable advantage: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor-intrusion potential in the study area and offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.
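The rank-correlation comparison between an interpolated tree-core surface and collocated sub-slab values can be sketched in a few lines. The concentration values below are hypothetical, not the study's data; the point is only how a Spearman coefficient is computed (ties are not handled in this minimal version).

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no tie handling)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical collocated values: interpolated tree-core PCE vs. sub-slab PCE.
tree_core = [0.2, 1.5, 3.1, 0.9, 4.2, 2.8]
sub_slab = [0.1, 1.2, 2.0, 1.1, 5.0, 2.5]
rho = spearman_rho(tree_core, sub_slab)
```

Because only ranks enter the coefficient, it is insensitive to the monotone distortions an interpolated concentration surface can introduce, which is presumably why a rank statistic suits this comparison.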

  4. Surface modeling of soil antibiotics.

    PubMed

    Shi, Wen-jiao; Yue, Tian-xiang; Du, Zheng-ping; Wang, Zong; Li, Xue-wen

    2016-02-01

Large quantities of livestock and poultry feces are continuously applied to soils in intensive vegetable cultivation areas, so some veterinary antibiotics persist in soils and pose health risks. Given the spatial heterogeneity of antibiotic residues, developing a suitable technique to interpolate soil antibiotic residues is still a challenge. In this study, we developed an effective interpolator, high accuracy surface modeling (HASM) combined with vegetable types, to predict the spatial patterns of soil antibiotics, using 100 surface soil samples collected from an intensive vegetable cultivation area in eastern China; the fluoroquinolones (FQs), including ciprofloxacin (CFX), enrofloxacin (EFX) and norfloxacin (NFX), were analyzed as the target antibiotics. The results show that vegetable type is an effective covariate for improving interpolator performance. HASM achieves lower mean absolute errors (MAEs) and root mean square errors (RMSEs) for total FQs (NFX+CFX+EFX), NFX, CFX and EFX than kriging with external drift (KED), stratified kriging (StK), ordinary kriging (OK) and inverse distance weighting (IDW). The MAE of HASM for FQs is 55.1 μg/kg, whereas the MAEs of KED, StK, OK and IDW are 99.0 μg/kg, 102.8 μg/kg, 106.3 μg/kg and 108.7 μg/kg, respectively. Further, the RMSE of HASM for FQs (CFX, EFX and NFX) is 106.2 μg/kg (88.6 μg/kg, 20.4 μg/kg and 39.2 μg/kg), which is 30% (27%, 22% and 36%), 33% (27%, 27% and 43%), 38% (34%, 23% and 41%) and 42% (32%, 35% and 51%) less than those of KED, StK, OK and IDW, respectively. HASM also provides better maps, with more detail and with maximum and minimum values of soil antibiotics more consistent with the measured data. The better performance can be attributed to HASM using the vegetable-type information as approximate global information and the local sampling data as its optimum control constraints.
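The MAE/RMSE comparison used above to rank the interpolators can be reproduced in miniature. The observed and predicted concentrations below are invented for illustration; only the two error metrics themselves follow the abstract.

```python
import numpy as np

def mae(pred, obs):
    """Mean absolute error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs(pred - obs)))

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Hypothetical hold-out FQ concentrations (ug/kg) and two interpolators' predictions.
observed = [120.0, 80.0, 200.0, 55.0]
method_a = [110.0, 95.0, 180.0, 60.0]   # e.g. a HASM-like surface (made-up values)
method_b = [150.0, 40.0, 260.0, 30.0]   # e.g. a plain IDW surface (made-up values)
scores = {name: (mae(p, observed), rmse(p, observed))
          for name, p in [("A", method_a), ("B", method_b)]}
```

RMSE penalizes large individual misses more heavily than MAE, so reporting both, as the study does, distinguishes uniformly mediocre interpolators from ones with occasional gross errors.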

  5. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    PubMed Central

    2018-01-01

Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective as well as time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access to residential and commercial buildings to drill holes through basement slabs to install sampling ports, or require substantial equipment to install groundwater or soil-vapor sampling points outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE), and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation between average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) and an interpolated tree-core PCE concentration surface, with Spearman's correlation coefficients (ρ) ranging from 0.31 to 0.53, comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface pollutants. 
The rapid and non-invasive nature of tree sampling is a notable advantage: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor-intrusion potential in the study area and offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years. PMID:29451904

  6. Airborne laser scanning for forest health status assessment and radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena

    2013-04-01

Structural parameters of forest stands/ecosystems are an important complementary source of information to the spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is the focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3D information. The main objective of the study was to compare the results from several approaches to interpolation of the digital elevation model (DEM) and digital surface model (DSM). We worked with airborne LiDAR data of varying density (TopEye Mk II 1,064 nm instrument, 1-5 points/m2) acquired over the Norway spruce forests situated in the Beskydy Mountains, Czech Republic. Three interpolation algorithms of increasing complexity were tested: i/ a nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); ii/ averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); iii/ an active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions, 0.4 m and 1 m, for the resulting coupled raster DEM and DSM outputs calculated by each algorithm. The grids correspond to the spatial resolutions of the hyperspectral imagery for which the DEMs were used in a/ geometrical correction and b/ building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing the results from the different interpolations/raster resolutions: 1/ DEMs or DSMs compared between themselves; 2/ comparison with field data: the DEM with measurements from a reference GPS, the DSM with field allometric tree measurements, where tree height was calculated as DSM-DEM. 
The results of the analyses show that: 1/ averaging techniques tend to underestimate tree height, and the generated surface does not follow the first LiDAR echoes for either the 1 m or the 0.4 m pixel size; 2/ we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased with finer resolution (0.4 m); 3/ the accuracy of the DEMs calculated by the tested algorithms is similar.
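The tree-height-as-DSM-minus-DEM computation can be sketched with a toy rasterization. The points below are a hypothetical handful of LiDAR returns over a single tree, and taking the per-cell maximum/minimum is only a crude stand-in for the interpolation algorithms the study actually compares.

```python
import numpy as np

def grid_surface(points, cell, reducer):
    """Rasterize (x, y, z) points: apply `reducer` to the z values falling in each cell."""
    pts = np.asarray(points, float)
    ix = (pts[:, 0] // cell).astype(int)
    iy = (pts[:, 1] // cell).astype(int)
    grid = {}
    for cx, cy, z in zip(ix, iy, pts[:, 2]):
        grid.setdefault((cx, cy), []).append(z)
    return {k: reducer(v) for k, v in grid.items()}

# Hypothetical LiDAR returns over one spruce: ground hits near z=0, crown hits near z=18 m.
pts = [(0.2, 0.3, 0.1), (0.6, 0.1, 17.8), (0.4, 0.8, 18.2), (0.9, 0.9, 0.3)]
dsm = grid_surface(pts, 1.0, max)   # highest return per cell ~ canopy surface
dem = grid_surface(pts, 1.0, min)   # lowest return per cell ~ terrain (very rough proxy)
height = {k: dsm[k] - dem[k] for k in dsm}
```

The study's finding that averaging underestimates tree height follows directly from this picture: averaging mixes crown and ground returns in a cell, pulling the DSM below the first echoes.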

  7. Estimates of Flow Duration, Mean Flow, and Peak-Discharge Frequency Values for Kansas Stream Locations

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques, along with Tobit analyses, were used to develop equations for estimating the flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled-flow stream locations. The 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States used in the regression analyses had flow uncontrolled by Federal reservoirs and contributing-drainage areas ranging from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction ranging from 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled-flow streams on the 1999 Kansas Surface Water Register. 
Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.

  8. Improving a spatial rainfall product using a data-mining approach and its effect on the hydrological response of a meso-scale catchment.

    NASA Astrophysics Data System (ADS)

    Oriani, F.; Stisen, S.; Demirel, C.

    2017-12-01

The spatial representation of rainfall is of primary importance to correctly studying the uncertainty of basin recharge and its propagation to surface and underground circulation. We consider here the daily gridded rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain-gauge network (from approximately 500 stations in the period 1996-2006 to 250 in the period 2007-2014), the gridded rainfall product, based on the interpolation of these data, is much less reliable. The research is focused on the Skjern catchment (1,050 km2, western Jutland), where we have access to the complete rain-gauge database from the Danish Hydrological Observatory and can compute the distributed hydrological response at the 1-km scale. To give a better estimation of the gridded rainfall input, we start from ground measurements by simulating the missing data with a stochastic data-mining approach, then recompute the grid interpolation. To maximize the predictive power of the technique, combinations of station time series that are the most informative about each other are selected on the basis of their correlation and available historical data. Then, the missing data inside these time series are simulated together using the direct sampling technique (DS) [1, 2]. DS simulates a datum by sampling the historical record of the same stations where a similar data pattern occurs, preserving their complex statistical relation. The simulated data are reinjected into the whole dataset and used as conditioning data to progressively fill the gaps in other stations. The results show that the proposed methodology, tested on the period 1995-2012, can increase the realism of the gridded rainfall product by regenerating the missing ground measurements. The hydrological response is analyzed considering the observations at 5 hydrological stations. 
The presented methodology can be used in many regions to regenerate missing data using the information contained in the historical record and to propagate the uncertainty of the prediction to the hydrological response. [1] G. Mariethoz et al. (2010), Water Resour. Res., 10.1029/2008WR007621. [2] F. Oriani et al. (2014), Hydrol. Earth Syst. Sc., 10.5194/hessd-11-3213-2014.

  9. Optimization of Premix Powders for Tableting Use.

    PubMed

    Todo, Hiroaki; Sato, Kazuki; Takayama, Kozo; Sugibayashi, Kenji

    2018-05-08

Direct compression is a popular choice because it is the simplest way to prepare a tablet, and it can easily be adopted when the active pharmaceutical ingredient (API) is unstable in water or under thermal drying. An optimal formulation of preliminarily mixed powders (premix powders), prepared in advance for tableting use, is therefore beneficial. The aim of this study was to find the optimal formulation of premix powders composed of lactose (LAC), cornstarch (CS), and microcrystalline cellulose (MCC) using statistical techniques. Based on the "Quality by Design" concept, a (3,3)-simplex lattice design consisting of the three components LAC, CS, and MCC was employed to prepare the model premix powders. A response surface method incorporating thin-plate spline interpolation (RSM-S) was applied to estimate the optimum premix powders for tableting use. The effect of tablet shape, identified by the surface curvature, on the optimization was investigated. The optimum premix powder was effective when combined with a small quantity of API, although its function was limited for formulations containing a large amount of API. Statistical techniques are valuable for exploiting new functions of well-known materials such as LAC, CS, and MCC.
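Thin-plate spline response surfaces of the RSM-S kind can be illustrated with a small hand-rolled fit. The design points and responses below are hypothetical, and this plain 2-D thin-plate spline (weights plus a linear polynomial tail) is only a sketch of the interpolation step, not the RSM-S software itself.

```python
import numpy as np

def tps_fit(xy, z):
    """Fit f(x,y) = sum_i w_i * phi(|p - p_i|) + a + b*x + c*y with phi(r) = r^2 log r."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.where(d > 0, d**2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])
    lhs = np.zeros((n + 3, n + 3))
    lhs[:n, :n] = A
    lhs[:n, n:] = P
    lhs[n:, :n] = P.T
    sol = np.linalg.solve(lhs, np.concatenate([z, np.zeros(3)]))
    return xy, sol[:n], sol[n:]

def tps_eval(model, q):
    centers, w, abc = model
    q = np.asarray(q, float)
    d = np.linalg.norm(centers - q, axis=1)
    phi = np.where(d > 0, d**2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    return float(phi @ w + abc[0] + abc[1] * q[0] + abc[2] * q[1])

# Hypothetical (LAC, CS) fractions (MCC = 1 - LAC - CS) with a made-up tablet response.
design = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0), (0.5, 0.5), (0.5, 0.0), (0.0, 0.5)]
response = [3.0, 1.0, 2.0, 2.5, 2.8, 1.4]
model = tps_fit(design, response)
```

Because a thin-plate spline interpolates the design points exactly while staying maximally smooth between them, it suits mixture designs where the response surface shape is not known in advance.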

  10. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
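Validation against independent data, as done above, is often implemented as leave-one-out cross-validation. The sketch below scores a plain IDW interpolator on invented station data; the kriging and ANUSPLIN competitors are not reproduced here.

```python
import numpy as np

def idw(xy_known, z_known, q, power=2.0):
    """Inverse distance weighting prediction at query point q."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    d = np.linalg.norm(xy_known - np.asarray(q, float), axis=1)
    if np.any(d == 0):
        return float(z_known[d == 0][0])   # exact hit on a station
    w = 1.0 / d**power
    return float(w @ z_known / w.sum())

def loo_rmse(xy, z):
    """Leave-one-out cross-validation RMSE: predict each station from the others."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Hypothetical station coordinates (km) and mean annual temperatures (deg C).
stations = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
temps = [7.1, 7.9, 6.8, 7.5, 7.3]
score = loo_rmse(stations, temps)
```

Running the same loop with each candidate interpolator, then comparing the resulting RMSEs, is the standard way such five-method comparisons are scored.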

  11. On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians

    NASA Astrophysics Data System (ADS)

    Valverde, Clodoaldo; Baseia, Basílio

    2018-01-01

We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.

  12. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
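The per-triangle linear interpolation step described above is ordinary barycentric interpolation. A minimal sketch, with hypothetical node tuples standing in for tabulated equation-of-state data (the triangle lookup itself is omitted):

```python
def barycentric(tri, p):
    """Barycentric coordinates of point p in triangle tri = [(x1,y1),(x2,y2),(x3,y3)]."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def triangle_interp(tri, node_values, p):
    """Linear interpolation of per-node n-tuples at a point p inside the triangle."""
    lam = barycentric(tri, p)
    return tuple(sum(li * v[k] for li, v in zip(lam, node_values))
                 for k in range(len(node_values[0])))

# Hypothetical EOS-style tuples (pressure, energy) stored at the three triangle nodes.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]
pt = triangle_interp(tri, vals, (1 / 3, 1 / 3))   # centroid: simple average of the nodes
```

All three barycentric coordinates being in [0, 1] doubles as the point-in-triangle test used during the lookup phase.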

  13. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

Polynomial interpolation is an essential subject in numerical analysis. On a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge; on the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this convergence result holds only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a suitably different order, and (2) the size of the interval is 4.
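Both stabilizing ingredients named above can be demonstrated directly. The sketch below uses Chebyshev nodes on [-2, 2] (an interval of size 4) and a Leja-style reordering, which is one common choice of "different order"; the target function exp is, of course, just an example.

```python
import math

def chebyshev_nodes(n, a=-2.0, b=2.0):
    """Chebyshev nodes mapped to [a, b]; the default interval has size 4."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def leja_order(points):
    """Reorder points so each new point maximizes the product of distances to the
    points already chosen (Leja ordering), stabilizing the divided differences."""
    pts = list(points)
    ordered = [max(pts, key=abs)]
    pts.remove(ordered[0])
    while pts:
        nxt = max(pts, key=lambda x: math.prod(abs(x - y) for y in ordered))
        ordered.append(nxt)
        pts.remove(nxt)
    return ordered

def newton_coeffs(xs, f):
    """Divided-difference coefficients for the Newton form, computed in place."""
    c = [f(x) for x in xs]
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Horner-style evaluation of the Newton-form polynomial."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[i]) + c[i]
    return acc

xs = leja_order(chebyshev_nodes(16))
c = newton_coeffs(xs, math.exp)
approx = newton_eval(xs, c, 0.5)
```

With the nodes left in their natural monotone order, the same divided-difference recursion loses accuracy much faster as the degree grows, which is the instability the abstract refers to.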

  14. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices, and therefore allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices using the quasi interpolation framework.

  15. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with greater than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) with four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For a fluence map of the same resolution, cubic spline interpolation and bicubic interpolation are almost equally the best interpolation methods, while nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, the γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, the γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. 
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that the measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
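The γ criterion (3%, 3 mm) used above to score the interpolated maps can be illustrated in one dimension. The dose profiles below are invented; a real IMRT QA γ analysis is 2-D and finely resamples the evaluated distribution, which this sketch omits.

```python
import math

def gamma_index(ref, evl, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma: for each reference point, the minimum over evaluated points of
    sqrt((dose diff / dose tolerance)^2 + (distance / distance tolerance)^2),
    with the dose difference normalized to the maximum reference dose."""
    dmax = max(ref)
    out = []
    for xr, dr in zip(positions, ref):
        best = min(
            math.sqrt(((de - dr) / (dose_tol * dmax)) ** 2
                      + ((xe - xr) / dist_tol) ** 2)
            for xe, de in zip(positions, evl))
        out.append(best)
    return out

# Hypothetical 1-D dose profiles sampled every 1 mm (normalized dose).
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
reference = [1.00, 0.95, 0.80, 0.40, 0.10]
evaluated = [1.01, 0.96, 0.78, 0.42, 0.09]
gammas = gamma_index(reference, evaluated, positions)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point passes (γ≤1) if some nearby evaluated point agrees within the combined dose/distance tolerance, which is why the metric tolerates small spatial shifts that a pure dose difference would penalize.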

  16. Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

    DTIC Science & Technology

    2008-02-01

Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing ...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.

  17. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation, so the incorporation of these two covariates into the interpolation is tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and kriging with external drift cannot be employed to incorporate secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. 
The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  18. [Monitoring the thermal plume from coastal nuclear power plant using satellite remote sensing data: modeling, and validation].

    PubMed

    Zhu, Li; Zhao, Li-Min; Wang, Qiao; Zhang, Ai-Ling; Wu, Chuan-Qing; Li, Jia-Guo; Shi, Ji-Xiang

    2014-11-01

The thermal plume from a coastal nuclear power plant is a small-scale human impact whose monitoring requires high-frequency, high-spatial-resolution remote sensing data. The infrared scanner (IRS) on board HJ-1B has an infrared channel, IRS4, with 300 m spatial and 4-day temporal resolution; data acquired by IRS4 are thus an available source for monitoring thermal plumes. A retrieval scheme for coastal sea surface temperature (SST) was built to monitor the thermal plume from a nuclear power plant. The research area is located near the Guangdong Daya Bay Nuclear Power Station (GNPS), where synchronized validations were also implemented. National Centers for Environmental Prediction (NCEP) data were interpolated spatially and temporally, and the interpolated data, together with surface weather conditions, were fed into a radiative transfer model for the atmospheric correction of the IRS4 thermal image. A look-up table (LUT) was built for the inversion from IRS4 channel radiance to radiometric temperature, and a function fitted to the LUT data served the same purpose. The SST was finally retrieved based on the preprocessing procedures mentioned above. The bulk temperature (BT) of 84 samples distributed near GNPS was collected synchronously by ship using salinity-temperature-depth (CTD) instruments. The discrete sample data were surface-interpolated and compared with the satellite-retrieved SST. Results show that the average BT over the study area is 0.47 degrees C higher than the retrieved skin temperature (ST). For areas far from the outfall, the ST is higher than the BT, with differences less than 1.0 degrees C; the main driving force for temperature variations in these regions is solar radiation. For areas near the outfall, on the contrary, the retrieved ST is lower than the BT, and larger differences between the two (> 1.0 degrees C) occur closer to the outfall. 
Unlike the former case, convective heat transfer from the thermal plume is the primary cause of these temperature variations. Temperature rise (TR) distributions obtained from remote sensing data and in-situ measurements are consistent, except that the interpolated BT shows more level detail (> 5 levels) than the ST (up to 4 levels). The areas with higher TR levels (> 2) are larger on the BT maps, while for lower TR levels (≤ 2) the two methods show no obvious differences. Minimal errors for satellite-derived SST occur regularly around 10 a.m. local time, which makes the remote sensing results suitable substitutes for in-situ measurements at that time. Therefore, for operational applications of HJ-1B IRS4, remote sensing can be a practical approach to monitoring nuclear plant thermal pollution around this time period.
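The LUT inversion from channel radiance to temperature can be sketched with the Planck function. The band centre assumed below (10.8 μm) is an illustration, not a documented IRS4 specification, and a real retrieval would integrate over the channel's spectral response rather than use a single wavelength.

```python
import numpy as np

# Planck spectral radiance at wavelength lam (m), temperature T (K): W m^-2 sr^-1 m^-1.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

# Build a look-up table for a hypothetical ~10.8 um thermal channel, then invert
# channel radiance by interpolating the monotonic radiance -> temperature relation.
lam = 10.8e-6
temps = np.arange(260.0, 320.0, 0.1)     # K
radiances = planck(lam, temps)           # strictly increasing in T

def radiance_to_temperature(L):
    return float(np.interp(L, radiances, temps))

T_true = 295.3
T_back = radiance_to_temperature(planck(lam, T_true))
```

Because the radiance is strictly monotone in temperature at a fixed wavelength, either table interpolation (as here) or a fitted inverse function, as the abstract describes, recovers the radiometric temperature uniquely.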

  19. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating the nearby non-edge pixels of each detected edge for possible re-classification as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, respectively; decision bands with variable widths are thereby created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low compared with existing methods; hence, it is fairly attractive for real-time image applications.

  20. Application of Pressure Sensitive Paint in Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Carbonaro, Mario; Zemsch, Stephan

    1995-01-01

It is well known in the aerodynamic field that pressure distribution measurement over the surface of an aircraft model is a problem in experimental aerodynamics. For one thing, a continuous pressure map cannot be obtained with the current experimental methods, since they are discrete; interpolation or CFD methods must therefore be used for a more complete picture of the phenomenon under study. For this study, a new technique was investigated which would provide a continuous pressure distribution over the surface under consideration: pressure sensitive paint. When pressure sensitive paint is applied to an aerodynamic surface and placed in an operating wind tunnel under appropriate lighting, the molecules luminesce as a function of the local pressure of oxygen over the surface of interest during aerodynamic flow. The resulting image is brightest in the areas of low pressure (low oxygen concentration) and less intense in the areas of high pressure (where oxygen is most abundant on the surface). The objective of this investigation was to use pressure sensitive paint samples from McDonnell Douglas (MDD) for calibration purposes, in order to assess the response of the paint under appropriate lighting, and to use the samples over a flat plate/conical fin mounted at 75 degrees from the center of the plate in order to study the shock/boundary-layer interaction at Mach 6 in the Von Karman wind tunnel. From the results obtained it was concluded that temperature significantly affects the response of the paint and should be given the utmost attention in the case of hypersonic flows. It was also found that past a certain temperature threshold, the paint intensity degradation became irreversible. The comparison between the pressure tap measurements and the pressure sensitive paint showed the right trend; however, there is a shift in the actual values. Therefore, further investigation is under way to find the cause of the shift.
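Pressure-sensitive paint calibration is commonly expressed through a Stern-Volmer-type relation, I_ref/I = A + B·(P/P_ref). The sketch below fits the two paint coefficients to synthetic, noise-free calibration data and then inverts a measured intensity ratio back to pressure; the coefficient values are invented, not MDD paint properties, and real paints also need a temperature-dependent correction, as the abstract's findings stress.

```python
import numpy as np

def fit_stern_volmer(p_ratio, i_ratio):
    """Least-squares fit of I_ref / I = A + B * (P / P_ref); A, B are paint coefficients."""
    slope, intercept = np.polyfit(p_ratio, i_ratio, 1)
    return intercept, slope   # (A, B)

def pressure_from_intensity(i_ratio, A, B):
    """Invert the calibrated relation: returns P / P_ref."""
    return (i_ratio - A) / B

# Hypothetical calibration: intensity ratio measured at known pressure ratios.
p = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
i = 0.15 + 0.85 * p               # synthetic, noise-free data with A=0.15, B=0.85
A, B = fit_stern_volmer(p, i)
p_est = pressure_from_intensity(0.15 + 0.85 * 1.2, A, B)
```

Once A and B are fixed from calibration samples, every pixel of a wind-tunnel luminescence image can be converted to pressure the same way, yielding the continuous pressure map the discrete taps cannot provide.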

  1. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
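The basic object of the thesis, the p-norm interpolation matrix, is easy to build and probe numerically. The points and data below are arbitrary; for distinct points and 1 < p < 2 the matrix is provably nonsingular, which is what the solve step relies on.

```python
import numpy as np

def pnorm_interp_matrix(points, p):
    """Interpolation matrix A_ij = ||x_i - x_j||_p for the p-norm radial basis."""
    pts = np.asarray(points, float)
    diff = np.abs(pts[:, None, :] - pts[None, :, :])
    return np.sum(diff ** p, axis=2) ** (1.0 / p)

# Distinct points in R^2; with p = 1.5 (i.e. 1 < p < 2) the matrix is nonsingular.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
A = pnorm_interp_matrix(pts, 1.5)
f = np.array([0.0, 1.0, 1.0, 2.0, 1.0])   # arbitrary values to interpolate
w = np.linalg.solve(A, f)                  # interpolation weights
residual = float(np.max(np.abs(A @ w - f)))
```

The interpolant is then s(x) = Σ_i w_i ‖x − x_i‖_p; the diagonal of A is zero, so the nonsingularity is a genuinely non-obvious property of distance matrices rather than a diagonal-dominance effect.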

  2. INTERPOL survey of the use of speaker identification by law enforcement agencies.

    PubMed

    Morrison, Geoffrey Stewart; Sahito, Farhan Hyder; Jardine, Gaëlle; Djokic, Djordje; Clavet, Sophie; Berghs, Sabine; Goemans Dorny, Caroline

    2016-06-01

    A survey was conducted of the use of speaker identification by law enforcement agencies around the world. A questionnaire was circulated to law enforcement agencies in the 190 member countries of INTERPOL. 91 responses were received from 69 countries. 44 respondents reported that they had speaker identification capabilities in house or via external laboratories. Half of these came from Europe. 28 respondents reported that they had databases of audio recordings of speakers. The clearest pattern in the responses was that of diversity. A variety of different approaches to speaker identification were used: The human-supervised-automatic approach was the most popular in North America, the auditory-acoustic-phonetic approach was the most popular in Europe, and the spectrographic/auditory-spectrographic approach was the most popular in Africa, Asia, the Middle East, and South and Central America. Globally, and in Europe, the most popular framework for reporting conclusions was identification/exclusion/inconclusive. In Europe, the second most popular framework was the use of verbal likelihood ratio scales. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. High order Nyström method for elastodynamic scattering

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Gurrala, Praveen; Song, Jiming; Roberts, Ron

    2016-02-01

    Elastic waves in solids find important applications in ultrasonic non-destructive evaluation. The scattering of elastic waves has been treated using many approaches, such as the finite element method, the boundary element method and the Kirchhoff approximation. In this work, we propose a novel, accurate and efficient high order Nyström method to solve the boundary integral equations for elastodynamic scattering problems. This approach employs a high order geometry description for each element and high order interpolation for the fields inside each element. In contrast to the boundary element method, this approach chooses the interpolation nodes based on Gaussian quadrature, which renders the matrix elements for far-field interactions free from integration and also greatly simplifies the treatment of singularities and near singularities. The proposed approach employs a novel, efficient near-singularity treatment that enables the solver to handle extreme geometries such as a very thin penny-shaped crack. Numerical results are presented to validate the approach. By using the frequency domain response and performing the inverse Fourier transform, we also report the time domain response of flaw scattering.
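    The key Nyström idea — placing interpolation nodes at quadrature points so that far-field matrix entries need no further integration — is easiest to see on a scalar model problem. A toy sketch for a 1-D Fredholm equation of the second kind (our own example kernel, not the paper's elastodynamic equations):

```python
import numpy as np

def nystrom_solve(kernel, f, n=8):
    """Nystrom discretization of u(x) - int_0^1 K(x,t) u(t) dt = f(x).

    Gauss-Legendre nodes double as the interpolation nodes, so each
    matrix entry is just a kernel evaluation times a quadrature weight,
    mirroring the node choice described in the abstract.
    """
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)          # map [-1, 1] -> [0, 1]
    w = 0.5 * w
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# K(x,t) = x*t/2 with f(x) = 5x/6 has the exact solution u(x) = x.
x, u = nystrom_solve(lambda x, t: 0.5 * x * t, lambda x: 5.0 * x / 6.0)
print(np.max(np.abs(u - x)))   # ~ machine precision
```

    For the polynomial kernel used here the quadrature is exact, so the discrete solution reproduces u(x) = x to rounding error.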

  4. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from the laboratories with advanced angle metrology capabilities are presented, which were acquired using four different high precision angle encoders/interpolators/rotary tables. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty by a factor of up to 5, in addition to the precise determination of the interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  5. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphical user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first, and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
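    The fill strategy above — most-neighbors first, with the required neighbor count relaxed round by round, and a mask forbidding fills in low-band-contrast areas — can be sketched on a scalar grid. This is a strongly simplified stand-in for EBSDinterp (the real program interpolates crystal orientations, not scalars, and `constrained_fill` and the median rule are our assumptions):

```python
import numpy as np

def constrained_fill(data, allowed, min_neighbors=4):
    """Iteratively fill NaN points, most-neighbors first.

    `allowed` plays the role of the band-contrast mask: points outside
    it are never filled, so low-BC areas cannot be "grown" into.
    Illustrative 8-neighbor median fill, not EBSDinterp's actual rule.
    """
    data = data.copy()
    for need in range(min_neighbors, 0, -1):   # relax the neighbor requirement
        changed = True
        while changed:
            changed = False
            for i in range(1, data.shape[0] - 1):
                for j in range(1, data.shape[1] - 1):
                    if np.isnan(data[i, j]) and allowed[i, j]:
                        nbrs = data[i-1:i+2, j-1:j+2].ravel()
                        nbrs = nbrs[~np.isnan(nbrs)]
                        if len(nbrs) >= need:
                            data[i, j] = np.median(nbrs)
                            changed = True
    return data

grid = np.full((5, 5), np.nan)
grid[1:4, 1:4] = 7.0
grid[2, 2] = np.nan                      # one nonindexed interior point
allowed = np.ones((5, 5), dtype=bool)
allowed[0, :] = False                    # low band contrast: never filled
filled = constrained_fill(grid, allowed)
print(filled[2, 2])                      # filled from its indexed neighbors
```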

  6. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
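    The gap between linear interpolation and the piecewise-cubic family recommended as the fallback can be shown directly by resampling a tone at non-integer times, as de-Dopplerisation requires. A sketch using the Keys/Catmull-Rom cubic kernel (one member of the piecewise-cubic family; the test signal and sampling rate are our assumptions, not the paper's):

```python
import numpy as np

def catmull_rom(samples, x):
    """Keys piecewise-cubic interpolation (a = -1/2) at real position x."""
    i = int(np.floor(x))
    t = x - i
    p0, p1, p2, p3 = samples[i-1], samples[i], samples[i+1], samples[i+2]
    return (p1
            + 0.5 * t * (p2 - p0)
            + t*t * (p0 - 2.5*p1 + 2*p2 - 0.5*p3)
            + t**3 * (-0.5*p0 + 1.5*p1 - 1.5*p2 + 0.5*p3))

# Reconstruct an oversampled tone (10 samples per period) at off-grid times.
n = 200
y = np.sin(2 * np.pi * 0.1 * np.arange(n))
xs = np.linspace(5.0, n - 5.0, 1000)
true = np.sin(2 * np.pi * 0.1 * xs)
lin = np.interp(xs, np.arange(n), y)
cub = np.array([catmull_rom(y, x) for x in xs])
print(np.max(np.abs(lin - true)), np.max(np.abs(cub - true)))
```

    At this oversampling ratio the cubic kernel's worst-case error is well over an order of magnitude below linear interpolation's, consistent with the abstract's conclusion.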

  7. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

    This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). Interpolation is necessary in image scaling because the number of pixels increases or decreases. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for INEQR quantum images is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits of the NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, experiments involving different classical (i.e., conventional, non-quantum) images and scaling ratios are simulated on a classical computer using MATLAB 2014b, demonstrating that the proposed interpolation method achieves higher performance in terms of resolution than nearest neighbor and bilinear interpolation.

  8. Surface morphology of a modified ballistic deposition model.

    PubMed

    Banerjee, Kasturi; Shamanna, J; Ray, Subhankar

    2014-08-01

    The surface and bulk properties of a modified ballistic deposition model are investigated. The deposition rule interpolates between nearest- and next-nearest-neighbor ballistic deposition and the random deposition models. The stickiness of the depositing particle is controlled by a parameter and the type of interparticle force. Two such forces are considered: Coulomb and van der Waals type. The interface width shows three distinct growth regions before eventual saturation. The rate of growth depends more strongly on the stickiness parameter than on the type of interparticle force. However, the porosity of the deposits is strongly influenced by the interparticle force.
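    The unmodified nearest-neighbor ballistic deposition rule that the model above interpolates away from is short enough to simulate directly. A minimal sketch of that baseline only (the stickiness parameter and interparticle forces studied in the paper are not modeled here):

```python
import numpy as np

def ballistic_deposit(width=200, particles=20000, seed=1):
    """Nearest-neighbor ballistic deposition on a periodic 1-D substrate.

    A falling particle lands on top of its column or sticks to the
    taller of its two nearest neighbors, whichever is highest.
    """
    rng = np.random.default_rng(seed)
    h = np.zeros(width, dtype=int)
    for col in rng.integers(0, width, size=particles):
        left = h[(col - 1) % width]
        right = h[(col + 1) % width]
        h[col] = max(h[col] + 1, left, right)
    return h

h = ballistic_deposit()
w = np.sqrt(np.mean((h - h.mean()) ** 2))   # interface width
print(h.mean(), w)
```

    Side-sticking creates overhangs and vacancies, so the mean height exceeds the dense-packing value (here 100), i.e. the deposit is porous; tracking w against particle count reproduces the growth-then-saturation behaviour discussed in the abstract.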

  9. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Murman, S. M.; Berger, M. J.

    2003-01-01

    This paper presents a variety of novel uses of space-filling-curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single theta(N log N) SFC-based reordering to produce single-pass (theta(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with theta(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
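    The reordering at the heart of these algorithms is simple in 2-D: each cell's integer coordinates are bit-interleaved into a Morton (Z-order) key, and sorting by key places geometrically nearby cells close together in memory, so contiguous key ranges form partitions. A sketch (the Z-order variant is shown for brevity; the paper's curves are SFCs generally, not necessarily this one):

```python
def morton_key(i, j, bits=16):
    """Interleave the bits of (i, j) into a Z-order space-filling-curve key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

# Reorder a 4x4 block of Cartesian cells along the Z curve; contiguous
# slices of the sorted list then serve as locality-preserving partitions.
cells = [(i, j) for i in range(4) for j in range(4)]
cells.sort(key=lambda c: morton_key(*c))
print(cells[:4])   # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

    The first four cells form the lower-left 2x2 quadrant, illustrating why cutting the sorted list into equal pieces yields compact sub-domains.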

  10. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five different interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps: (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediment in the central Vienna Basin.

  11. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, the image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.

  12. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can first build an initial path in the collective variable space by an interpolation method and then update the whole path constantly during the optimization. However, such an interpolation method can be risky in a high-dimensional space for large molecules. On the path, steric clashes between neighboring atoms can cause extremely high energy barriers and thus cause the optimization to fail. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
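    The growing rule above — step direction blended from the local free-energy gradient and the vector toward the product — can be sketched on an analytic toy surface. Everything here (the double-well potential, the weighting `pull`, the step size) is our illustrative assumption; the paper estimates gradients from simulation, not from a formula:

```python
import numpy as np

def grad_F(x):
    # Toy double-well free energy F(x, y) = (x^2 - 1)^2 + 2 y^2,
    # with minima (reactant/product basins) at (-1, 0) and (1, 0).
    return np.array([4 * x[0] * (x[0]**2 - 1), 4 * x[1]])

def grow_path(reactant, product, step=0.02, pull=0.3, max_steps=500):
    """Grow a path from the reactant, steering each step by the local
    free-energy gradient plus a vector toward the product."""
    path = [np.asarray(reactant, float)]
    for _ in range(max_steps):
        x = path[-1]
        to_product = product - x
        if np.linalg.norm(to_product) < step:
            path.append(np.asarray(product, float))
            break
        d = to_product / np.linalg.norm(to_product) - pull * grad_F(x)
        path.append(x + step * d / np.linalg.norm(d))
    return np.array(path)

path = grow_path(np.array([-1.0, 0.5]), np.array([1.0, 0.0]))
print(len(path), path[-1])          # the path terminates at the product
print(np.abs(path[:, 1]).max())     # the gradient term pulls it into the valley
```

    Because only the current endpoint needs a gradient estimate, each growth step costs one local free-energy evaluation rather than a simulation of every snapshot on the path.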

  13. Observed Trend in Surface Wind Speed Over the Conterminous USA and CMIP5 Simulations

    NASA Technical Reports Server (NTRS)

    Hashimoto, Hirofumi; Nemani, Ramakrishna R.

    2016-01-01

    No spatially continuous surface wind map exists, even over the conterminous USA, due to the difficulty of spatially interpolating the wind field. As a result, reanalysis data have often been used to analyze the statistics of the spatial pattern in surface wind speed. Unfortunately, no consistent trend in the wind field was found among the available reanalysis data, and that obstructed further analysis or projection of the spatial pattern of wind speed. In this study, we developed a methodology to interpolate the observed wind speed data at weather stations using a random forest algorithm. We produced 1-km daily climate variables over the conterminous USA from 1979 to 2015. Validation using Ameriflux daily data showed an R2 of 0.59. Existing studies have found a negative trend over the eastern US, and our study showed the same result. However, our new datasets also revealed a significant increasing trend over the southwestern US, especially from April to June. The trend in the southwestern US represents a change or seasonal shift in the North American Monsoon. Global analysis of CMIP5 data projected a decreasing trend in the mid-latitudes and an increasing trend in tropical regions over land. Most likely because of the low resolution of the GCMs, the CMIP5 data failed to simulate the increasing trend in the southwestern US, even though a poleward shift of the anticyclone was qualitatively predicted to help the North American Monsoon.
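    The station-to-grid workflow and its leave-one-out style of validation can be sketched without the machine-learning machinery. As a deliberately simple numpy-only stand-in for the paper's random-forest interpolator, here is inverse-distance weighting on synthetic station data (the field, station layout, and `idw` function are all our assumptions):

```python
import numpy as np

def idw(stations, values, targets, power=2.0, eps=1e-12):
    """Inverse-distance weighting of station observations onto targets.

    Stand-in for the random-forest interpolation in the abstract; it
    shows only the station -> grid workflow, not the actual method.
    """
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

rng = np.random.default_rng(3)
stations = rng.random((40, 2))                 # station lon/lat in a unit square
wind = 3.0 + 2.0 * stations[:, 0]              # synthetic wind-speed field
# Leave-one-out check: the same kind of held-out validation as the
# Ameriflux comparison in the abstract.
errs = [abs(idw(np.delete(stations, k, 0), np.delete(wind, k),
               stations[k:k+1])[0] - wind[k]) for k in range(40)]
print(np.mean(errs))
```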

  14. Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks

    NASA Astrophysics Data System (ADS)

    Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas

    2017-03-01

    PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and low cost. The transverse processes as skeletal landmarks are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes, which we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. We used ground-truth CT scans from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization of the 3D deformation of the patient's spine when compared to ground-truth CT.
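    The landmark-driven thin-plate spline warp underlying the method can be sketched in 2-D (the paper uses a 3-D transform field; the landmark sets and the `tps_fit` helper below are our simplified assumptions):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks exactly onto dst.

    Kernel U(r) = r^2 log r with an affine part; solving the standard
    bordered linear system gives radial weights w and affine terms a.
    """
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    K = d**2 * np.log(np.where(d > 0, d, 1.0))   # U(0) = 0 by convention
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(A, b)
    w, a = sol[:n], sol[n:]

    def warp(pts):
        r = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
        U = r**2 * np.log(np.where(r > 0, r, 1.0))
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
dst = src + [[0, 0], [0, 0.1], [0, 0], [0, 0.1], [0.05, 0.1]]
warp = tps_fit(src, dst)
print(np.max(np.abs(warp(src) - dst)))   # exact at the landmarks
```

    The warp interpolates the landmarks exactly and deforms everything between them smoothly, which is why additional computed anchor points, as in the abstract, are an effective way to control the field away from the measured transverse processes.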

  15. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively identify areas where no reliable motion is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  16. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. Instead of being estimated explicitly, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  17. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
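    The stated law — positional error as a function of fractional position and wave number — is easy to observe empirically with the simplest filter. A 1-D numpy sketch using linear interpolation as the filter under test (our own demonstration; the paper's optimized B-spline filter is designed precisely to suppress this bias):

```python
import numpy as np

def linear_interp_error(freq, frac):
    """RMS error of linearly interpolating cos(2*pi*freq*n) at integer
    sample positions shifted by the fraction `frac`."""
    n = np.arange(200)
    y = np.cos(2 * np.pi * freq * n)
    x = n[1:-1] + frac
    est = np.interp(x, n, y)
    return np.sqrt(np.mean((est - np.cos(2 * np.pi * freq * x)) ** 2))

for frac in (0.1, 0.25, 0.5):
    print(frac, linear_interp_error(0.05, frac), linear_interp_error(0.2, frac))
# The error grows with the fractional offset (worst at 0.5) and with
# the wave number, matching the dependence stated in the abstract.
```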

  18. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, having a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques selected are w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are down-sampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
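    The evaluation protocol above — keep every sixth hour, rebuild the hourly series with a candidate interpolator, score against the original — can be sketched with two of the nine techniques. The synthetic wind series and RMSE-only scoring are our simplifications (the study uses real reanalysis data and a multi-metric penalty system):

```python
import numpy as np

def downscale_rmse(hourly, interp):
    """Down-sample to 6-hourly, rebuild hourly with `interp`, score RMSE."""
    t6 = np.arange(0, len(hourly), 6)
    t = np.arange(len(hourly))
    rebuilt = interp(t, t6, hourly[t6])
    return np.sqrt(np.mean((rebuilt - hourly) ** 2))

rng = np.random.default_rng(7)
# Synthetic hourly wind speed: diurnal cycle plus smooth weather noise.
t = np.arange(24 * 60)
wind = 6 + 2 * np.sin(2 * np.pi * t / 24) + np.convolve(
    rng.normal(0, 1, t.size), np.ones(12) / 12, mode="same")

linear = lambda t, tk, yk: np.interp(t, tk, yk)
# w0 (left constant): hold the most recent 6-hourly value.
w0 = lambda t, tk, yk: yk[np.clip(np.searchsorted(tk, t, "right") - 1, 0, len(tk) - 1)]
rmse_w0 = downscale_rmse(wind, w0)
rmse_lin = downscale_rmse(wind, linear)
print("w0 (left constant):", rmse_w0)
print("linear:", rmse_lin)
```

    Even this toy comparison reproduces the expected ordering: piecewise-constant reconstruction is penalized heavily by the diurnal cycle, while linear (and, in the study, cubic Hermite) reconstructions track it far more closely.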

  19. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyperspectral Data

    DTIC Science & Technology

    2012-09-01

    Feasibility (MT Modeling) a. Continuum of mixture distributions interpolated b. Mixture infeasibilities calculated for each pixel c. Valid detections...Visible/Infrared Imaging Spectrometer BRDF Bidirectional Reflectance Distribution Function CASI Compact Airborne Spectrographic Imager CCD...filtering (MTMF), and was designed by Healey and Slater (1999) to use "a physical model to generate the set of sensor spectra for a target that will be

  20. Data Descriptor: TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    Treesearch

    John T. Abatzoglou; Solomon Z. Dobrowski; Sean A. Parks; Katherine C. Hegewisch

    2018-01-01

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958–2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from...

  1. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version did not previously exist. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using the multiply Control-Not operation, the special adding-one operation, the reverse parallel adder, the parallel subtractor, and the multiplier and division operations. Finally, the complexity analysis of the quantum network circuit, based on the basic quantum gates, is deduced. Simulation results show that the scaled-up image using bilinear interpolation is clearer and less distorted than with nearest-neighbor interpolation.
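    The classical operation whose quantum circuit the paper constructs is standard bilinear scaling: each output pixel is a distance-weighted blend of its four nearest input pixels. A compact numpy sketch of that classical baseline (our own implementation, not derived from the quantum circuits):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Classical bilinear image scaling: each output pixel blends the
    four surrounding input pixels by their fractional distances."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]          # top-left neighbors
    b = img[np.ix_(y0, x0 + 1)]      # top-right
    c = img[np.ix_(y0 + 1, x0)]      # bottom-left
    d = img[np.ix_(y0 + 1, x0 + 1)]  # bottom-right
    return (1-wy)*(1-wx)*a + (1-wy)*wx*b + wy*(1-wx)*c + wy*wx*d

img = np.arange(16, dtype=float).reshape(4, 4)
big = bilinear_resize(img, 7, 7)
print(big[0, :3])   # 0.0, 0.5, 1.0: bilinear is exact on a linear ramp
```

    Unlike nearest-neighbor scaling, the blended output varies smoothly between source pixels, which is exactly the visual difference (clearer, less blocky) the abstract's simulation reports.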

  2. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical nearest-neighbor image interpolation for FRQI and NEQR quantum images is proven. Then, by defining the halving operation and making use of quantum rotation gates, the concrete quantum circuit of nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, the quantum circuit of nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.

  3. Pycortex: an interactive surface visualizer for fMRI

    PubMed Central

    Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.

    2015-01-01

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate, and most common visualization techniques rely on unnecessary interpolation, which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666

  4. Southern Ocean Mixed-Layer Seasonal and Interannual Variations From Combined Satellite and In Situ Data

    NASA Astrophysics Data System (ADS)

    Buongiorno Nardelli, B.; Guinehut, S.; Verbrugge, N.; Cotroneo, Y.; Zambianchi, E.; Iudicone, D.

    2017-12-01

    The depth of the upper ocean mixed layer provides fundamental information on the amount of seawater that directly interacts with the atmosphere. Its space-time variability modulates water mass formation and carbon sequestration processes related to both the physical and biological pumps. These processes are particularly relevant in the Southern Ocean, where surface mixed-layer depth estimates are generally obtained either as climatological fields derived from in situ observations or through numerical simulations. Here we demonstrate that weekly observation-based reconstructions can be used to describe the variations of the mixed-layer depth in the upper ocean over a range of space and time scales. We compare and validate four different products obtained by combining satellite measurements of sea surface temperature, salinity, and dynamic topography with in situ Argo profiles. We also compute an ensemble mean and use the corresponding spread to estimate mixed-layer depth uncertainties and to identify the more reliable products. The analysis points out the advantage of synergistic approaches that take as input the sea surface salinity observations obtained through a multivariate optimal interpolation. The corresponding data allow assessment of mixed-layer depth seasonal and interannual variability. Specifically, the maximum correlations between mixed-layer anomalies and the Southern Annular Mode are found at different time lags, related to distinct summer/winter responses in the main formation areas of Antarctic Intermediate Water and Sub-Antarctic Mode Waters.

  5. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3), together with the small-scale design space and its reliability, were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The small-scale response surfaces were corrected to the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than the small-scale one, even though there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
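
    The spline-plus-bootstrap reliability idea can be illustrated with a simplified sketch: here a quadratic polynomial stands in for the paper's multivariate spline, and bootstrap refits of the DoE runs give a reliability band for the predicted response at a query point (all names and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def quad_features(x1, x2):
    """Full quadratic model in two factors (a polynomial stand-in for
    multivariate spline interpolation of a response surface)."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2])

def bootstrap_band(X, y, xq, n_boot=200):
    """Bootstrap-resample the DoE runs, refit the response surface each
    time, and report a 95% reliability band for the prediction at xq."""
    preds = np.empty(n_boot)
    fq = quad_features(np.array([xq[0]]), np.array([xq[1]]))[0]
    for b in range(n_boot):
        i = rng.integers(0, len(y), len(y))           # resample with replacement
        coef, *_ = np.linalg.lstsq(quad_features(X[i, 0], X[i, 1]),
                                   y[i], rcond=None)
        preds[b] = fq @ coef
    return np.percentile(preds, [2.5, 50.0, 97.5])
```

A narrow band indicates a region of the design space where the fitted surface is reliable; the paper additionally updates such surfaces with large-scale data via Bayesian estimation.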

  6. Design space construction of multiple dose-strength tablets utilizing bayesian estimation based on one set of design-of-experiments.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-01-01

    Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) for only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg was used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X1) and blending times (X2) for the theophylline 100-mg tablet. The response surfaces of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) of the 100-mg tablet, together with the design space and its reliability, were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed with the 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions for the lower-strength tablets showed that the corrected design space predicted the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.

  7. Micromorphological characterization of zinc/silver particle composite coatings

    PubMed Central

    Méndez, Alia; Reyes, Yolanda; Trejo, Gabriel; StĘpień, Krzysztof

    2015-01-01

    The aim of this study was to evaluate the three-dimensional (3D) surface micromorphology of zinc/silver particle (Zn/AgPs) composite coatings with antibacterial activity prepared using an electrodeposition technique. These 3D nanostructures were investigated over square areas of 5 μm × 5 μm by atomic force microscopy (AFM), fractal, and wavelet analysis. The fractal analysis of 3D surface roughness revealed that the (Zn/AgPs) composite coatings have fractal geometry. A triangulation method based on linear interpolation was applied to the AFM data in order to characterise the surfaces topographically (amplitude, spatial distribution, and pattern of surface features). The surface fractal dimension Df, as well as the height value distribution, was determined for the 3D nanostructure surfaces. Microsc. Res. Tech. 78:1082-1089, 2015. © 2015 The Authors published by Wiley Periodicals, Inc. PMID:26500164

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
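
    A toy, axis-aligned version of kernel-dependent MIP can be sketched as follows; real implementations trace arbitrary rays through a 3D volume, and the cubic convolution kernel is omitted here (the function and data are illustrative only):

```python
import numpy as np

def mip_columns(slab, step=0.25, kernel="linear"):
    """Maximum-intensity projection of a 2D slab along its rows: each
    column is treated as a 'ray' sampled at sub-voxel positions with the
    chosen 1D interpolation kernel (nearest-neighbor or linear)."""
    n_rows, n_cols = slab.shape
    t = np.arange(0.0, n_rows - 1 + 1e-9, step)   # ray sample positions
    proj = np.empty(n_cols)
    for c in range(n_cols):
        ray = slab[:, c].astype(float)
        if kernel == "nearest":
            samples = ray[np.round(t).astype(int)]
        else:                                      # linear kernel
            samples = np.interp(t, np.arange(n_rows), ray)
        proj[c] = samples.max()
    return proj
```

The choice of kernel changes how intensities between voxel centres contribute to the maximum, which is the mechanism behind the width-transfer differences the study reports.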

  9. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent people from obtaining images of the regions of interest. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion suffers when faced with large time differences between adjacent sites, which concern us most; we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. It can thus provide the data needed for advanced seismic techniques such as RTM to illuminate shallow structures.

  10. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478
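
    The underlying idea, interpolating a baseline through non-uniformly spaced isoelectric points and subtracting it, can be sketched with plain linear interpolation (the Letter's segmented piecewise variant differs in detail; names and data here are illustrative):

```python
import numpy as np

def remove_baseline(t, sig, anchor_t, anchor_v):
    """Estimate baseline wander by piecewise-linear interpolation through
    non-uniformly spaced isoelectric anchor points, then subtract it."""
    baseline = np.interp(t, anchor_t, anchor_v)
    return sig - baseline, baseline

# Synthetic example: a flat signal segment riding on linear drift.
t = np.linspace(0.0, 1.0, 11)
drift = 0.5 * t
anchor_t = np.array([0.0, 0.4, 1.0])          # non-uniform anchors
corrected, est = remove_baseline(t, drift, anchor_t, 0.5 * anchor_t)
```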

  11. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.

  12. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
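
    The overshoot-controlling idea can be illustrated with a cubic Hermite interpolant whose node slopes are scaled by a single parameter. Note this sketch interpolates the data directly, whereas the paper's integral-conserving scheme applies the parametrized Hermite curve to the integrated data (the function name and parameterization are assumptions):

```python
import numpy as np

def hermite_interp(x, y, xq, c=0.0):
    """Cubic Hermite interpolation with one user parameter c in [0, 1]
    scaling the node slopes: c = 0 keeps each segment inside its endpoint
    values (no overshoot or undershoot), c = 1 uses full finite-difference
    slopes and can overshoot near steps."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xq = np.atleast_1d(np.asarray(xq, float))
    d = c * np.gradient(y, x)                      # scaled node slopes
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1                  # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]
```

On step-like data, c = 1 produces the kind of over/undershoot the abstract describes for cubic interpolation, while c = 0 stays bounded by the samples.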

  13. Analysis of the ability of large-scale reanalysis data to define Siberian fire danger in preparation for future fire prediction

    NASA Astrophysics Data System (ADS)

    Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data for defining fire weather danger and fire regimes, so that large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be confidently used to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. Requirements of the FWI are local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.

  14. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...
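
    One of the standard spatial-interpolation techniques, inverse-distance weighting, can be sketched as follows (the EPA demonstration uses R; this Python sketch with hypothetical names only illustrates the idea of predicting raster cells from scattered samples):

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0):
    """Inverse-distance-weighted interpolation: predict values at query
    points as distance-weighted averages of scattered samples (kriging
    and splines are common alternatives)."""
    sxy = np.asarray(sample_xy, float)
    z = np.asarray(sample_z, float)
    out = []
    for q in np.asarray(query_xy, float):
        d = np.linalg.norm(sxy - q, axis=1)
        if d.min() < 1e-12:                 # query coincides with a sample
            out.append(z[d.argmin()])
            continue
        w = 1.0 / d**power
        out.append((w @ z) / w.sum())
    return np.array(out)
```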

  15. Surface temperature dataset for North America obtained by application of optimal interpolation algorithm merging tree-ring chronologies and climate model output

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2017-02-01

    A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
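
    The gain-based merge can be illustrated with a scalar optimal-interpolation update; the actual method uses a full, time-varying error covariance matrix estimated from a running 31-year window (names here are illustrative):

```python
def optimal_interp(background, obs, var_b, var_o):
    """Pointwise optimal-interpolation update: weight the model background
    and the observation by their error variances. The analysis variance is
    smaller than either input variance, which is the benefit of merging."""
    gain = var_b / (var_b + var_o)                 # scalar Kalman-type gain
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b                   # reduced analysis error
    return analysis, var_a
```

With equal error variances the analysis is simply the average of model and observation, mirroring how the merged dataset sits between the CCSM4 simulation and the tree-ring reconstruction.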

  16. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    NASA Astrophysics Data System (ADS)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined, with four, on average, giving values within 4 kJ/mol (chemical accuracy) for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates, and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost beyond the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff, and classical simulations of kinetic energy release should therefore be viewed with caution.

  17. The OceanFlux Greenhouse Gases methodology for deriving a sea surface climatology of CO2 fugacity in support of air-sea gas flux studies

    NASA Astrophysics Data System (ADS)

    Goddijn-Murphy, L. M.; Woolf, D. K.; Land, P. E.; Shutler, J. D.; Donlon, C.

    2015-07-01

    Climatologies, or long-term averages, of essential climate variables are useful for evaluating models and providing a baseline for studying anomalies. The Surface Ocean CO2 Atlas (SOCAT) has made millions of global underway sea surface measurements of CO2 publicly available, all in a uniform format and presented as fugacity, fCO2. As fCO2 is highly sensitive to temperature, the measurements are only valid for the instantaneous sea surface temperature (SST) that is measured concurrently with the in-water CO2 measurement. To create a climatology of fCO2 data suitable for calculating air-sea CO2 fluxes, it is therefore desirable to calculate fCO2 valid for a more consistent and averaged SST. This paper presents the OceanFlux Greenhouse Gases methodology for creating such a climatology. We recomputed SOCAT's fCO2 values for their respective measurement month and year using monthly composite SST data on a 1° × 1° grid from satellite Earth observation and then extrapolated the resulting fCO2 values to reference year 2010. The data were then spatially interpolated onto a 1° × 1° grid of the global oceans to produce 12 monthly fCO2 distributions for 2010, including the prediction errors of fCO2 produced by the spatial interpolation technique. The partial pressure of CO2 (pCO2) is also provided for those who prefer to use pCO2. The CO2 concentration difference between ocean and atmosphere is the thermodynamic driving force of the air-sea CO2 flux, and hence the presented fCO2 distributions can be used in air-sea gas flux calculations together with climatologies of other climate variables.

  18. Probabilistic reconstruction of GPS vertical ground motion and comparison with GIA models

    NASA Astrophysics Data System (ADS)

    Husson, Laurent; Bodin, Thomas; Choblet, Gael; Kreemer, Corné

    2017-04-01

    The vertical position time-series of GPS stations have become long enough for many parts of the world to infer modern rates of vertical ground motion. We use the worldwide compilation of GPS trend velocities of the Nevada Geodetic Laboratory. Those rates are inferred by applying the MIDAS algorithm (Blewitt et al., 2016) to time-series obtained from publicly available data from permanent stations. Because MIDAS filters out seasonality and discontinuities, regardless of their causes, it gives robust long-term rates of vertical ground motion (except where there is significant postseismic deformation). As the stations are unevenly distributed, and because data errors are also highly variable, sometimes to an unknown degree, we use a Bayesian inference method to reconstruct 2D maps of vertical ground motion. Our models are based on a Voronoi tessellation and self-adapt to the spatially variable level of information provided by the data. Instead of providing a unique interpolated surface, each point of the reconstructed surface is defined through a probability density function. We apply our method to a series of vast regions covering entire continents. Not surprisingly, the reconstructed surface at a long wavelength is dominated by the GIA. This result can be exploited to evaluate whether forward models of GIA reproduce geodetic rates within the uncertainties derived from our interpolation, not only at high latitudes where postglacial rebound is fast, but also in more temperate latitudes where, for instance, such rates may compete with modern sea level rise. At shorter wavelengths, the reconstructed surface of vertical ground motion features a variety of identifiable patterns, whose geometries and rates can be mapped. Examples are transient dynamic topography over the convecting mantle, actively deforming domains (mountain belts and active margins), volcanic areas, or anthropogenic contributions.

  19. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.

  20. Quantitative analysis and temperature-induced variations of moiré pattern in fiber-coupled imaging sensors.

    PubMed

    Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E

    2015-06-10

    Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
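
    The calibration-frame interpolation can be sketched as a per-pixel linear blend between white images captured at fixed temperatures (the function name and array shapes are assumptions, not from the paper):

```python
import numpy as np

def flat_field_at_temperature(temps, white_frames, t_query):
    """Generate a flat-field calibration frame at an arbitrary operating
    temperature by per-pixel linear interpolation between white images
    acquired at fixed temperatures."""
    temps = np.asarray(temps, float)
    frames = np.asarray(white_frames, float)       # shape (n_temps, H, W)
    i = np.clip(np.searchsorted(temps, t_query) - 1, 0, len(temps) - 2)
    f = (t_query - temps[i]) / (temps[i + 1] - temps[i])
    return (1.0 - f) * frames[i] + f * frames[i + 1]
```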

  21. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. Regarding its conventional high complexity, we devised a fast line-scan method specifically for refocusing, and its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results are provided to show superior refocusing quality and fast computation speed. In particular, the run time is comparable with the conventional single-image blurring, which causes serious boundary artifacts.

  22. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
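
    The resample-then-analyze idea can be sketched with linear interpolation onto a uniform grid followed by a standard Allan variance (a minimal sketch; the paper also compares cubic splines and uses a multivariance system):

```python
import numpy as np

def allan_variance(x, tau0):
    """Standard Allan variance of equally spaced phase/time data x."""
    y = np.diff(x) / tau0                     # fractional frequencies
    return 0.5 * np.mean(np.diff(y) ** 2)

def allan_from_irregular(t, x, tau0):
    """Linearly interpolate irregularly spaced timing data onto a uniform
    grid before the variance analysis (one of the interpolation choices
    compared in the paper; cubic splines are the other)."""
    tu = np.arange(t[0], t[-1] + 1e-9, tau0)  # uniform resampling times
    return allan_variance(np.interp(tu, t, x), tau0)
```

A pure frequency offset (phase linear in time) should yield zero Allan variance; a good interpolation scheme must preserve that, neither losing information nor adding spurious noise.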

  3. Quantification of uncertainties of the tsunami risk in Cascadia

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Sarri, A.; Day, S. J.; Liu, X.; Dias, F.

    2013-12-01

    We first show new realistic simulations of earthquake-generated tsunamis in Cascadia (Western Canada and USA) using VOLNA. VOLNA is a solver of nonlinear shallow water equations on unstructured meshes that is accelerated on the new GPU system Emerald. Primary outputs from these runs are tsunami inundation maps, accompanied by site-specific wave trains and flow velocity histories. The variations in inputs (here seabed deformations due to earthquakes) are time-varying shapes difficult to sample, and they require an integrated statistical and geophysical analysis. Furthermore, the uncertainties in the bathymetry require extensive investigation and optimization of the resolutions at the source and impact. Thus we need to run VOLNA for well-chosen combinations of the inputs and the bathymetry to reflect the various sources of uncertainties, and we interpolate in between using a so-called statistical emulator that keeps track of the additional uncertainties due to the interpolation itself. We present novel adaptive sequential designs that enable such choices of the combinations for our Gaussian Process (GP) based emulator in order to maximize the information from the limited number of runs of VOLNA that can be computed. GPs show strength in the approximation of the response surface but suffer from the large computational cost of fitting. Hence, a careful selection of the inputs is necessary to optimize the trade-off between fit and computation. Finally, we also propose to assess the frequencies and intensities of the earthquakes along the Cascadia subduction zone that have been documented by geological palaeoseismic, palaeogeodetic and tsunami-deposit studies. As a result, the hazard assessment aims to reflect the multiple non-linearities and uncertainties for the tsunami risk in Cascadia.
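    The core of a GP-based emulator is interpolation between expensive solver runs with a predictive variance attached. A minimal one-dimensional sketch, assuming a squared-exponential kernel and a cheap analytic function standing in for the solver (none of this is the authors' emulator):

```python
import numpy as np

def gp_emulator(X, y, Xs, length=0.5, noise=1e-8):
    """Minimal GP interpolator (squared-exponential kernel) with predictive variance."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(X.size)   # small jitter for numerical stability
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    # predictive variance: k(x*,x*) - k* K^{-1} k*^T, with k(x,x) = 1 here
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# pretend each "run" of the expensive solver is a call to f
f = lambda x: np.sin(2.0 * x)
X = np.linspace(0.0, 3.0, 7)     # a small design of solver runs
Xs = np.linspace(0.0, 3.0, 50)   # prediction points in between
mean, var = gp_emulator(X, f(X), Xs)
```

    The variance shrinks to (nearly) zero at the design points and grows in between, which is exactly the quantity a sequential design exploits when choosing the next run.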

  4. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to account for the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated that the shape-based nonlinear interpolation algorithm is more robust in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method.
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
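    The natural cubic spline used to generate intermediary slices can be sketched with a generic 1D routine applied along the pullback direction (here to a single pixel's intensity profile; the helper and the toy data are illustrative, not the authors' implementation):

```python
import numpy as np

def natural_cubic_spline(x, y, xq):
    """Evaluate the natural cubic spline through (x, y) at query points xq."""
    n = x.size
    h = np.diff(x)
    # solve the tridiagonal system for the knot second derivatives m
    # (natural boundary conditions: m[0] = m[-1] = 0)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, b)
    # evaluate the piecewise cubic on each query point's interval
    idx = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
    dx = xq - x[idx]
    t = h[idx]
    c = (y[idx + 1] - y[idx]) / t - t * (2.0 * m[idx] + m[idx + 1]) / 6.0
    return y[idx] + dx * (c + dx * (m[idx] / 2.0 + dx * (m[idx + 1] - m[idx]) / (6.0 * t)))

zk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # pullback positions of measured slices
pixel = np.array([1.0, 3.0, 2.0, 5.0, 4.0])  # one pixel's intensity across slices
mid = natural_cubic_spline(zk, pixel, np.array([0.5, 1.5, 2.5, 3.5]))
```

    Applying the same routine to every pixel (or, as in the paper, to shape descriptors rather than raw pixels) yields the intermediary slices; the spline reproduces the measured slices exactly at the knots.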

  5. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    The evaluation of whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
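    The survey's evaluation protocol (scale down, rescale to the original size, compare) can be reproduced in miniature. The sketch below implements separable nearest-neighbour and linear resizing in plain numpy (no anti-aliasing prefilter, so it is an illustration of the protocol, not a production resizer):

```python
import numpy as np

def resize(img, shape, method="linear"):
    """Separable resize of a 2D array: interpolate rows, then columns."""
    def axis_interp(a, n, axis):
        old = np.linspace(0.0, 1.0, a.shape[axis])
        new = np.linspace(0.0, 1.0, n)
        if method == "nearest":
            idx = np.abs(new[:, None] - old[None, :]).argmin(axis=1)
            return np.take(a, idx, axis=axis)
        return np.apply_along_axis(lambda v: np.interp(new, old, v), axis, a)
    out = axis_interp(img, shape[0], 0)
    return axis_interp(out, shape[1], 1)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# down- then up-scale a smooth test image and compare with the original
img = np.tile(np.linspace(0.0, 1.0, 33), (33, 1))
rec_lin = resize(resize(img, (17, 17), "linear"), (33, 33), "linear")
rec_nn = resize(resize(img, (17, 17), "nearest"), (33, 33), "nearest")
```

    On this smooth ramp, linear interpolation survives the round trip essentially unchanged while nearest-neighbour introduces measurable error, mirroring the kind of comparison the survey performs across nine methods.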

  6. Analysis of the numerical differentiation formulas of functions with large gradients

    NASA Astrophysics Data System (ADS)

    Tikhovskaya, S. V.

    2017-10-01

    The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, we can use polynomial interpolation on a fitted mesh, like the piecewise-uniform Shishkin mesh, or we can construct, on a uniform mesh, an interpolation formula that is exact on the boundary layer components. In this paper the numerical differentiation formulas for functions with large gradients based on the interpolation formulas on the uniform mesh, which were proposed by A.I. Zadorin, are investigated. The formulas for the first and the second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. The numerical results validating the theoretical estimates are discussed.
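    The O(1) error of classical formulas inside a boundary layer is easy to demonstrate. The central difference below (derived from 3-node Lagrange interpolation on a uniform mesh) is accurate for a smooth function but fails on the boundary-layer component exp(-x/ε) once ε is small relative to the mesh width; the example is illustrative and not Zadorin's fitted formula:

```python
import numpy as np

def central_diff(f, x, h):
    # second-order formula from 3-node Lagrange interpolation on a uniform mesh
    return (f(x + h) - f(x - h)) / (2.0 * h)

h = 1.0 / 64
x0 = h                       # first interior node of a uniform mesh on [0, 1]
errors = {}
for eps in (1.0, 1e-2):
    u = lambda x, e=eps: np.exp(-x / e)        # boundary-layer component
    du_exact = -np.exp(-x0 / eps) / eps
    errors[eps] = abs(central_diff(u, x0, h) - du_exact)
```

    For ε = 1 the error is of order h², while for ε = 0.01 (layer thinner than the mesh) the error is larger than the derivative itself, which is why ε-uniform formulas or fitted meshes are needed.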

  7. Effect of the precipitation interpolation method on the performance of a snowmelt runoff model

    NASA Astrophysics Data System (ADS)

    Jacquin, Alexandra

    2014-05-01

    Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual type snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied in the Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110 km² and its elevation ranges from 950 m a.s.l. to 5930 m a.s.l. The local meteorological network is sparse, with all precipitation gauges located below 3000 m a.s.l. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. Interpolation methods applied include kriging with external drift (KED), optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic functions fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region.
Using input precipitation obtained by each method, HBV parameters are calibrated with respect to Nash-Sutcliffe efficiency. The performance of HBV in the study catchment is not satisfactory. Although volumetric errors are modest, efficiency values are lower than 70%. Discharge estimates resulting from the application of TP, MFF and IDW obtain similar model efficiencies and volumetric errors. These error statistics moderately improve if KED or OIM are used instead. Even though the quality of precipitation estimates of distinct interpolation methods is dissimilar, the results of this study show that these differences do not necessarily produce noticeable changes in HBV's model performance statistics. This situation arises because the calibration of the model parameters allows some degree of compensation of deficient areal precipitation estimates, mainly through the adjustment of model simulated evaporation and glacier melt, as revealed by the analysis of water balances. In general, even if there is a good agreement between model estimated and observed discharge, this information is not sufficient to assert that the internal hydrological processes of the catchment are properly simulated by a watershed model. Other calibration criteria should be incorporated if a more reliable representation of these processes is desired. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279. The HBV Light software used in this study was kindly provided by J. Seibert, Department of Geography, University of Zürich.
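    Of the interpolation methods compared above, IDW is the simplest to state: each grid estimate is a convex combination of gauge values with weights proportional to an inverse power of distance. A minimal sketch with synthetic gauge data (the coordinates and values are invented for illustration):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting: a convex combination with weights ~ d^-p."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # clip d=0 so a gauge point returns its value
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

# toy precipitation gauges (x, y in km; z in mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
precip = np.array([50.0, 80.0, 60.0, 90.0])
queries = np.array([[5.0, 5.0], [0.0, 0.0]])
est = idw(gauges, precip, queries)
```

    Because the weights are positive and sum to one, IDW can never exceed the observed range, one reason it cannot extrapolate the orographic increase of precipitation above the highest gauge, as the study observes for TP, MFF and IDW.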

  8. Construction of a 3-arcsecond digital elevation model for the Gulf of Maine

    USGS Publications Warehouse

    Twomey, Erin R.; Signell, Richard P.

    2013-01-01

    A system-wide description of the seafloor topography is a basic requirement for most coastal oceanographic studies. The necessary detail of the topography obviously varies with application, but for many uses, a nominal resolution of roughly 100 m is sufficient. Creating a digital bathymetric grid with this level of resolution can be a complex procedure due to a multiplicity of data sources, data coverages, datums and interpolation procedures. This report documents the procedures used to construct a 3-arcsecond (approximately 90-meter grid cell size) digital elevation model for the Gulf of Maine (71°30' to 63° W, 39°30' to 46° N). We obtained elevation and bathymetric data from a variety of American and Canadian sources, converted all data to the North American Datum of 1983 for horizontal coordinates and the North American Vertical Datum of 1988 for vertical coordinates, used a combination of automatic and manual techniques for quality control, and interpolated gaps using a surface-fitting routine.

  9. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.

  10. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  11. A comparison of spatial interpolation methods for soil temperature over a complex topographical region

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin

    2016-08-01

    Soil temperature variability data provide valuable information on understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period of 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that larger seasonal changes of soil temperature were from autumn to winter over the region. The northern and eastern areas with hilly and low-middle mountains experienced larger seasonal changes.
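    The three accuracy indicators used above are standard and compact to compute. The sketch below uses the common definitions (ME as a Nash-Sutcliffe-type efficiency, CRM as relative mass balance); the toy observation/prediction vectors are invented:

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def model_efficiency(obs, pred):
    # ME = 1 for a perfect model; can be negative for a poor one
    return float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

def crm(obs, pred):
    # CRM = 0 for an unbiased model; the sign indicates under/over-estimation
    return float((obs.sum() - pred.sum()) / obs.sum())

obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
pred = np.array([10.5, 11.5, 14.0, 11.0, 9.5])
```

    Reading the paper's results through these definitions: RMSE near 0.5 °C, ME near 0.9, and CRM near zero together indicate small errors, high explained variance, and negligible bias.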

  12. Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Christiansen, Ove

    2018-06-01

    We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results on a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
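    The key idea, interpolating the *difference* between a cheap and an expensive method rather than the expensive surface itself, works because the difference is smoother and needs fewer expensive evaluations. A 1D illustration with toy potentials, using `np.interp` as a stand-in for the GPR:

```python
import numpy as np

# toy stand-ins for a "low level" and a "target" potential along one coordinate
low    = lambda x: np.sin(x)
target = lambda x: np.sin(x) + 0.1 * np.cos(3.0 * x)

# fit the difference at a few "expensive" points, then add it back to the cheap model
xt = np.linspace(0.0, np.pi, 6)            # sparse target-level calculations
delta = target(xt) - low(xt)
xq = np.linspace(0.0, np.pi, 50)
approx = low(xq) + np.interp(xq, xt, delta)  # interpolated correction (GPR stand-in)

err_direct = np.max(np.abs(low(xq) - target(xq)))   # cheap model alone
err_delta  = np.max(np.abs(approx - target(xq)))    # cheap model + interpolated delta
```

    Six target-level points suffice to cut the maximum error well below that of the low-level model alone, which is the economy the paper exploits for CCSD(F12*)(T) gradients.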

  13. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  14. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    NASA Astrophysics Data System (ADS)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    For thin slab-object inspection, we propose a digital tomosynthesis reconstruction that uses a smaller number of measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through an object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in an interpolated projection are the weighted sum of pixel values of the measured projections, with weights determined by their projection angles. Simulation experiments show that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
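    The angle-weighted sum that produces a virtual projection can be sketched in a few lines; the weighting scheme below is plain linear interpolation in angle between the two nearest measured projections (an assumption for illustration, the paper's exact weights may differ):

```python
import numpy as np

def virtual_projection(p_a, p_b, ang_a, ang_b, ang):
    """Virtual projection at angle ang as an angle-weighted sum of two measured ones."""
    w = (ang - ang_a) / (ang_b - ang_a)
    return (1.0 - w) * p_a + w * p_b

# two measured 1D projections of a thin, rigid object at -6 and +6 degrees
p_m6 = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
p_p6 = np.array([0.0, 2.0, 3.0, 2.0, 0.0])
p_0 = virtual_projection(p_m6, p_p6, -6.0, 6.0, 0.0)   # interpolated view at 0 degrees
```

    The rigid-object and negligible-path-length-variation assumptions stated in the abstract are exactly what justifies this pixelwise blend: corresponding pixels in neighbouring views are assumed to see (almost) the same ray sums.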

  15. The Derivation Of A CO2 Fugacity Climatology From SOCAT's Global In SITU Data

    NASA Astrophysics Data System (ADS)

    Goddijn-Murphy, L. M.; Woolf, D. K.; Land, P. E.; Shutler, J. D.

    2013-12-01

    The Surface Ocean CO2 Atlas (SOCAT) has made millions of global underway sea surface measurements of CO2 publicly available, all in a uniform format and presented as fugacity, fCO2. However, these fCO2 values are valid strictly only for the instantaneous temperature at measurement and are not ideal for climatology. We recomputed these fCO2 values for the measurement month to be applicable to climatological sea surface temperatures, extrapolated to reference year 2010. The data were then spatially interpolated on a 1°×1° grid of the global oceans to produce 12 monthly fCO2 distributions. Our climatology data will be shared with the science community.

  16. Application of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.

    1990-01-01

    A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
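    Transfinite interpolation builds an interior grid purely from the four boundary curves; with linear (first-order Lagrange) blending functions it reduces to the classic Coons formula. A minimal sketch for a 2D grid (generic boundaries, not the airplane geometry of the paper):

```python
import numpy as np

def transfinite_grid(bottom, top, left, right):
    """Coons-patch transfinite interpolation with linear blending functions.

    bottom/top: (n, 2) boundary curves; left/right: (m, 2); corners must match.
    Returns an (m, n, 2) grid of points.
    """
    n, m = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, n)[None, :, None]   # parameter along bottom/top
    v = np.linspace(0.0, 1.0, m)[:, None, None]   # parameter along left/right
    lin = ((1 - v) * bottom[None, :, :] + v * top[None, :, :]
           + (1 - u) * left[:, None, :] + u * right[:, None, :])
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return lin - corners   # subtract the doubly counted bilinear corner term

# unit square with straight boundaries: TFI reduces to a uniform grid
n, m = 5, 4
u, v = np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, m)
bottom = np.stack([u, np.zeros(n)], axis=1)
top    = np.stack([u, np.ones(n)], axis=1)
left   = np.stack([np.zeros(m), v], axis=1)
right  = np.stack([np.ones(m), v], axis=1)
grid = transfinite_grid(bottom, top, left, right)
```

    The construction reproduces all four boundary curves exactly; curved boundaries (an airfoil section, say) produce a smoothly blended interior in the same way.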

  17. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  18. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    NASA Astrophysics Data System (ADS)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method of treating outliers via linear interpolation. Treating an outlier as a missing value in the dataset allows the application of the interpolation method to the outliers, enabling the comparison of data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, applied to the improved time series data, produced better forecasting results than the original time series data for both Box-Jenkins and neural network approaches.
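    Treating an outlier as a missing value and bridging it by linear interpolation can be sketched as follows; the z-score flagging rule and the synthetic spiked series are illustrative assumptions, not the paper's detection procedure:

```python
import numpy as np

def treat_outliers(y, z_thresh=3.0):
    """Flag outliers by z-score, then replace them via linear interpolation."""
    y = np.asarray(y, dtype=float)
    z = np.abs(y - y.mean()) / y.std()
    bad = z > z_thresh                   # treat flagged points as missing
    x = np.arange(y.size)
    out = y.copy()
    out[bad] = np.interp(x[bad], x[~bad], y[~bad])
    return out

series = np.arange(20.0)
series[10] = 1000.0          # a single spike playing the role of an outlier
clean = treat_outliers(series)
```

    The spike is replaced by the value linearly interpolated from its intact neighbours while every other observation is left untouched, which is the property that lets the before/after forecast comparison isolate the effect of the treatment.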

  19. The US Navy Coupled Ocean-Wave Prediction System

    DTIC Science & Technology

    2014-09-01

    Stokes drift to be the dominant wave effect and that it increased surface drift speeds by 35% and veered the current in the direction of the wind...ocean model has been modified to incorporate the effect of the Stokes drift current, wave radiation stresses due to horizontal gradients of the momentum...for fourth-order differences for horizontal baroclinic pressure gradients and for interpolation of Coriolis terms. There is an option to use the

  20. Interpolation of the Radial Velocity Data from Coastal HF Radars

    DTIC Science & Technology

    2013-01-01

    practical applications and may help to solve many environmental problems caused by human activity. References [1] Alvera-Azcarate A., A. Barth, M. Rixen...surface temperature, Ocean Modelling, 9, 325-346. [2] Alvera-Azcarate, A., A. Barth, J.-M. Beckers, and R. H. Weisberg, 2007: Multivariate...predictions from the global Navy Coastal Ocean Model (NCOM) during 1998-2001, J. Atmos. Oceanic Technol., 21(12), 1876-1894. [4] Barth, A., Alvera

  1. Wave Breaking Induced Surface Wakes and Jets Observed during a Bora Event

    DTIC Science & Technology

    2005-01-01

    terrain contours (interval = 200 m) superposed. The approximate NCAR Electra and NOAA P-3 flight tracks are indicated by bold and dotted straight lines ...Hz data. The red curves correspond to the COAMPS simulated fields obtained by interpolating the 1-km grid data to the straight line through the...Alpine Experiment (ALPEX) in 1982 [Smith, 1987]. These studies suggested that the bora flow shares some common characteristics with downslope windstorms

  2. 3-D Characterization of Seismic Properties at the Smart Weapons Test Range, YPG

    DTIC Science & Technology

    2001-10-01

    confidence limits around each interpolated value. Ground truth was accomplished through cross-hole seismic measurements and borehole logs. Surface wave... seismic method, as well as estimating the optimal orientation and spacing of the seismic array . A variety of sources and receivers was evaluated...location within the array is partially related to at least two seismic lines. Either through good fortune or foresight by the designers of the SWTR site

  3. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces...Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012 *Hardin E, Mitas L, Mitasova H., Simulation of Wind-Blown Sand for...Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012 *Russ, E. Mitasova, H., Time series and space-time cube analyses on

  4. Graphics and Flow Visualization of Computer Generated Flow Fields

    NASA Technical Reports Server (NTRS)

    Kathong, M.; Tiwari, S. N.

    1987-01-01

    Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two- and three-dimensional flow field solutions are addressed. The transformations and the use of PLOT3D, an interactive graphics program for CFD flow field solutions that runs on the IRIS color graphics workstation, are described. An overview of the IRIS workstation is also given.

  5. High-resolution daily gridded datasets of air temperature and wind speed for Europe

    NASA Astrophysics Data System (ADS)

    Brinckmann, S.; Krähenmann, S.; Bissolli, P.

    2015-08-01

    New high-resolution datasets for near surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are hourly SYNOP observations, partly supplemented by station data from the ECA&D dataset (http://www.ecad.eu). These data are quality tested to eliminate erroneous data and various kinds of inhomogeneities. Grids in a resolution of 0.044° (5 km) are derived by spatial interpolation of these station data into the CORDEX area. For temperature interpolation a modified version of a regression kriging method developed by Krähenmann et al. (2011) is used. At first, predictor fields of altitude, continentality and zonal mean temperature are chosen for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross validation is performed. Explained variance ranges from 70 to 90 % for monthly temperature and from 50 to 60 % for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 °C and 1-1.5 m s-1 (depending on season and parameter) for daily temperature parameters and daily mean wind speed, respectively. The datasets presented in this article are published at http://dx.doi.org/10.5676/DWD_CDC/DECREG0110v1.
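    The regression-kriging structure described above (regress on predictor fields, then spatially interpolate the residuals, then add the parts back together) can be illustrated in miniature. In this sketch, IDW stands in for the kriging steps and all station data are synthetic, so it shows only the two-step structure, not the authors' method:

```python
import numpy as np

# toy station data: temperature decreases with altitude plus a smooth east-west trend
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, (30, 2))           # station coordinates (km)
alt = rng.uniform(0.0, 2000.0, 30)              # station altitudes (m)
temp = 15.0 - 0.0065 * alt + 0.02 * xy[:, 0]    # synthetic monthly mean (degC)

# step 1: regression of the target on the predictor field (here altitude only)
A = np.column_stack([np.ones_like(alt), alt])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
resid = temp - A @ coef

# step 2: spatially interpolate the residuals (IDW stands in for kriging)
def idw_point(xy_obs, z, q, p=2.0):
    d = np.linalg.norm(xy_obs - q, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    return float(w @ z / w.sum())

# predict at an unsampled location with known altitude
q, q_alt = np.array([50.0, 50.0]), 1000.0
pred = coef[0] + coef[1] * q_alt + idw_point(xy, resid, q)
```

    The regression captures the part of the signal explained by the predictors (here the recovered lapse rate is close to the -6.5 K/km used to generate the data), and the residual interpolation adds back the spatial structure the predictors miss.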

  6. Applications of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.

    1990-01-01

    A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.

  7. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partition of unity for several general types of partitions of these domains were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we shall provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these two bases are well-known; the new part of the construction is the combined use of these bases for the derivation of a new basis which enjoys having all above-said interpolation and unity partition properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; now we shall put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction becomes a far-going extension of tensor-product constructions. We shall provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we shall consider also replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS).
One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, which ERBS are not. One disadvantage of using BFBS in the place of ERBS in this construction is that the necessary selection of the degree of BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.

  8. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D are described.

  9. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almeria province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856 ; p < 0.001). 
In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
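The gridding step described above, inverse distance weighting with the local support of the five closest neighbours, can be sketched in a few lines. This is a generic brute-force implementation under assumed settings (power-2 weights), not the authors' code.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, k=5, power=2.0):
    """Inverse distance weighting using the k nearest neighbours,
    mirroring the 5-neighbour local support described in the abstract.
    Brute-force distance search; fine for small point sets.
    """
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        nearest = np.argsort(d)[:k]
        dn = d[nearest]
        if dn[0] == 0.0:                      # query coincides with a sample
            out[i] = z_known[nearest[0]]
            continue
        w = 1.0 / dn ** power                 # closer points weigh more
        out[i] = np.dot(w, z_known[nearest]) / w.sum()
    return out

# Toy ground points sampling z = x + y:
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5), (2, 2)]
z = [0.0, 1.0, 1.0, 2.0, 1.0, 4.0]
est = idw(pts, z, [(0.5, 0.5), (0.25, 0.25)])
```

Queries at sample points return the sample value exactly; elsewhere the estimate is a convex combination of the five nearest samples.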

  10. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer.
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
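Transfinite interpolation, the core algebraic grid-generation idea named in both parts of this record, can be sketched as a bilinearly blended Coons patch: the interior grid is built purely algebraically from the four boundary curves. This is a pure-numpy illustration, not the GRID2D/3D FORTRAN code; stretching functions and boundary orthogonality control are omitted.

```python
import numpy as np

def coons_grid(bottom, top, left, right):
    """Bilinearly blended Coons (transfinite) interpolation of a 2-D grid
    from four boundary curves. bottom/top: (ni, 2) arrays; left/right:
    (nj, 2) arrays; corner points must be consistent. Returns (ni, nj, 2).
    """
    bottom, top = np.asarray(bottom, float), np.asarray(top, float)
    left, right = np.asarray(left, float), np.asarray(right, float)
    ni, nj = len(bottom), len(left)
    u = np.linspace(0.0, 1.0, ni)[:, None, None]    # (ni, 1, 1)
    v = np.linspace(0.0, 1.0, nj)[None, :, None]    # (1, nj, 1)
    B, T = bottom[:, None, :], top[:, None, :]      # (ni, 1, 2)
    L, R = left[None, :, :], right[None, :, :]      # (1, nj, 2)
    c00, c01 = bottom[0], top[0]                    # the four corners
    c10, c11 = bottom[-1], top[-1]
    # ruled surfaces in each direction minus the doubly counted corners
    return ((1 - v) * B + v * T + (1 - u) * L + u * R
            - ((1 - u) * (1 - v) * c00 + (1 - u) * v * c01
               + u * (1 - v) * c10 + u * v * c11))

# Sanity check: unit-square boundaries reproduce a uniform Cartesian grid.
n = 5
s = np.linspace(0, 1, n)
bottom = np.stack([s, np.zeros(n)], axis=1)
top    = np.stack([s, np.ones(n)], axis=1)
left   = np.stack([np.zeros(n), s], axis=1)
right  = np.stack([np.ones(n), s], axis=1)
g = coons_grid(bottom, top, left, right)
```

Curved boundaries (e.g. spline-interpolated, as in the program) simply replace the straight edges; the blending formula is unchanged.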

  11. The impact of conventional surface data upon VAS regression retrievals in the lower troposphere

    NASA Technical Reports Server (NTRS)

    Lee, T. H.; Chesters, D.; Mostek, A.

    1983-01-01

    Surface temperature and dewpoint reports are added to the infrared radiances from the VISSR Atmospheric Sounder (VAS) in order to improve the retrieval of temperature and moisture profiles in the lower troposphere. The conventional (airways) surface data are combined with the twelve VAS channels as additional predictors in a ridge regression retrieval scheme, with the aim of using all available data to make high resolution space-time interpolations of the radiosonde network. For one day of VAS observations, retrievals using only VAS radiances are compared with retrievals using VAS radiances plus surface data. Temperature retrieval accuracy evaluated at coincident radiosonde sites shows a significant impact within the boundary layer. Dewpoint retrieval accuracy shows a broader improvement within the lowest tropospheric layers. The most dramatic impact of surface data is observed in the improved relative spatial and temporal continuity of low-level fields retrieved over the Midwestern United States.

  12. Factors controlling stream water nitrate and phosphorus loads during precipitation events

    NASA Astrophysics Data System (ADS)

    Rozemeijer, J.; van der Velde, Y.; van Geer, F.; de Rooij, G. H.; Broers, H.; Bierkens, M. F.

    2009-12-01

    Pollution of surface waters in densely populated areas with intensive land use is a serious threat to their ecological, industrial and recreational utilization. European and national manure policies and several regional and local pilot projects aim at reducing pollution loads to surface waters. For the evaluation of measures, water authorities and environmental research institutes are putting a lot of effort into monitoring surface water quality. Within regional surface water quality monitoring networks, the measurement locations are usually situated in the downstream part of the catchment to represent a larger area. The monitoring frequency is usually low (e.g. monthly), due to the high costs for sampling and analysis. As a consequence, human induced trends in nutrient loads and concentrations in these monitoring data are often concealed by the large variability of surface water quality caused by meteorological variations. Because this natural variability in surface water quality is poorly understood, large uncertainties occur in the estimates of (trends in) nutrient loads or average concentrations. This study aims at uncertainty reduction in the estimates of mean concentrations and loads of N and P from regional monitoring data. For this purpose, we related continuous records of stream water N and P concentrations to easier and cheaper to collect quantitative data on precipitation, discharge, groundwater level and tube drain discharge. A specially designed multi scale experimental setup was installed in an agricultural lowland catchment in The Netherlands. At the catchment outlet, continuous measurements of water quality and discharge were performed from July 2007-January 2009. At an experimental field within the catchment we collected continuous measurements of precipitation, groundwater levels and tube drain discharges. 20 significant rainfall events with a variety of antecedent conditions, durations and intensities were selected for analysis. 
Single and multiple regression analyses were used to identify relations between the N and P responses to the rainfall events and the quantitative event characteristics. We successfully used these relations to predict the N and P responses to events and to improve the interpolation between low-frequency grab sample measurements. Incorporating the predicted concentration changes during high-discharge events dramatically improved the precision of our load estimates.
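The multiple-regression step, relating quantitative event characteristics to the nutrient response, can be sketched with ordinary least squares. The predictor names and the synthetic response below are illustrative assumptions, not the study's data.

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares for y ~ b0 + b1*x1 + ... + bk*xk.
    Returns the coefficient vector [b0, b1, ..., bk].
    """
    X = np.asarray(X, float)
    A = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef

# Hypothetical event characteristics: precipitation sum (mm) and
# antecedent groundwater level (cm below surface); synthetic N response.
rng = np.random.default_rng(0)
X = rng.uniform(0, 30, size=(50, 2))
y = 0.4 + 0.08 * X[:, 0] - 0.02 * X[:, 1]
b = fit_multiple_regression(X, y)
```

With a noiseless linear response the fit recovers the generating coefficients exactly; with real event data the residuals quantify how much of the N and P response the event characteristics explain.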

  13. Impacts of different rainfall patterns on hyporheic zone under transient conditions

    NASA Astrophysics Data System (ADS)

    Liu, Suning; Chui, Ting Fong May

    2018-06-01

    The hyporheic zone (HZ) plays an important role in stream ecology. Previous studies have mainly focused on the factors influencing the HZ in the steady state. However, the exchange between surface water and groundwater in the HZ can become transient during a storm. This study investigates the impacts of different rainfall patterns (varying in intensity and duration) on the HZ under transient conditions. A two-dimensional numerical model of a 10-m long and 2-m deep domain is developed, in which the streambed consists of a series of dunes. Brinkman-Darcy and Navier-Stokes equations are respectively solved for groundwater and surface water, and velocity and pressure are coupled at the interface (i.e., the streambed surface). To compare the results under different transient conditions, this study proposes two indicators, i.e., the influential time (IT, the time required for the HZ to return to its initial state once it starts to change) and the influential depth (ID, the maximum increment in the HZ depth). To detect the extent to which the HZ undergoes significant spatial changes, moving split-window and inflection point tests are conducted. The results indicate that rainfall intensity (RI) and rainfall duration (RD) both display logarithmic relationships with the IT and ID with high coefficients of determination, but only between certain lower and upper thresholds of the RI and RD. Moreover, the distributions of the IT and ID as a function of the RI and RD are mapped using the surface spline and kriging interpolation methods to facilitate future prediction of the IT and ID. In addition, it is observed that the IT has a linear negative correlation with the groundwater response while the ID is not affected by different groundwater responses. All of the derived relationships can be used to predict the impacts of a future rainfall event on the HZ.

  14. Object Interpolation in Three Dimensions

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.

    2005-01-01

    Perception of objects in ordinary scenes requires interpolation processes connecting visible areas across spatial gaps. Most research has focused on 2-D displays, and models have been based on 2-D, orientation-sensitive units. The authors present a view of interpolation processes as intrinsically 3-D and producing representations of contours and…

  15. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.

  16. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose minimized-Laplacian residual interpolation (MLRI) as an alternative to color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm outperforms the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.

  17. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    To improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed a parallel algorithm for an 8× interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for arbitrary-multiple interpolation CIC filters and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated filter's pass band is flatter, its transition band steeper, and its stop-band attenuation greater. Finally, we verified the feasibility of the algorithm on a Field-Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385

  18. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
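Generating a spatial profile (feature activity vs. depth) from sampled activity by kernel interpolation can be sketched as a Nadaraya-Watson weighted average with a Gaussian kernel. The paper's actual kernels, widths and features are not reproduced here; the depths and activity values below are illustrative.

```python
import numpy as np

def kernel_profile(depths, values, query_depths, width=0.5):
    """Gaussian-kernel weighted average of feature activity vs. depth.
    Wider kernels smooth more; narrower kernels preserve resolution,
    mirroring the smoothing/resolution trade-off noted in the abstract.
    """
    d = np.subtract.outer(np.asarray(query_depths, float),
                          np.asarray(depths, float))   # (nq, ns) offsets
    w = np.exp(-0.5 * (d / width) ** 2)                # Gaussian weights
    return w @ np.asarray(values, float) / w.sum(axis=1)

depths = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # mm along the track
activity = np.array([0.1, 0.2, 0.9, 0.8, 0.2])    # e.g. normalized feature
profile = kernel_profile(depths, activity, np.linspace(0, 4, 9), width=0.3)
```

Every interpolated value is a convex combination of the samples, so the profile never overshoots the observed activity range.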

  19. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    To improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed a parallel algorithm for an 8× interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for arbitrary-multiple interpolation CIC filters and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated filter's pass band is flatter, its transition band steeper, and its stop-band attenuation greater. Finally, we verified the feasibility of the algorithm on a Field-Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
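The interpolation CIC filter at the heart of this record (and record 17, its duplicate) can be sketched in its textbook serial form: comb stages at the low rate, zero-stuffing by the interpolation factor, then integrator stages at the high rate. This is for illustration only; the paper's contribution is the parallel 8× decomposition, which is not reproduced here.

```python
import numpy as np

def cic_interpolate(x, R, N=2):
    """N-stage CIC interpolator with rate-change factor R:
    N combs (y[n] - y[n-1]) at the input rate, a zero-stuffing
    expander, then N integrators (running sums) at the output rate.
    """
    y = np.asarray(x, float)
    for _ in range(N):                       # comb stages
        y = y - np.concatenate(([0.0], y[:-1]))
    up = np.zeros(len(y) * R)                # zero-stuffing expander
    up[::R] = y
    for _ in range(N):                       # integrator stages
        up = np.cumsum(up)
    return up

# Impulse response for R=4, N=2: the cascade of two length-4 boxcars,
# i.e. a triangle, which is what linearly interpolates between samples.
h = cic_interpolate([1.0, 0.0, 0.0], R=4, N=2)
```

The triangular impulse response shows why CIC interpolators need compensation: their frequency response is sinc-shaped, with pass-band droop that a short FIR compensator (as in the paper) flattens.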

  20. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, the beacon is divided into several parts according to its gray gradients. Different numbers of interpolation points and different interpolation methods are then applied in each interpolation area; we calculate the centroid position after interpolation and choose the best strategy. We call this the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation. Finally, an experiment is established to verify the GSI and SSW algorithms. The results indicate that both algorithms improve locating accuracy over that of a traditional gray centroid method. These approaches help to greatly improve the location precision of a beacon in satellite optical communications.
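The traditional gray centroid method, the baseline that GSI and SSW improve on, is simply the intensity-weighted mean pixel position; it already delivers sub-pixel estimates for a well-sampled spot. The spot parameters below are illustrative.

```python
import numpy as np

def gray_centroid(img):
    """Intensity-weighted centroid of a spot image; returns (row, col)
    with sub-pixel resolution. This is the 'traditional gray centroid
    method' used as the comparison baseline in the abstract.
    """
    img = np.asarray(img, float)
    total = img.sum()
    rows = np.arange(img.shape[0])
    cols = np.arange(img.shape[1])
    r = (img.sum(axis=1) * rows).sum() / total
    c = (img.sum(axis=0) * cols).sum() / total
    return r, c

# Synthetic Gaussian beacon spot centred between pixels:
yy, xx = np.mgrid[0:9, 0:9]
spot = np.exp(-((yy - 4.3) ** 2 + (xx - 3.7) ** 2) / 2.0)
r, c = gray_centroid(spot)
```

For a clean symmetric spot the centroid lands very close to the true centre; the GSI/SSW refinements matter when the spot is noisy, truncated or coarsely resolved.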

  1. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.

  2. Comparison of global sst analyses for atmospheric data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phoebus, P.A.; Cummings, J.A.

    1995-03-17

    Traditionally, atmospheric models were executed using a climatological estimate of the sea surface temperature (SST) to define the marine boundary layer. More recently, particularly since the deployment of remote sensing instruments and the advent of multichannel SST observations, atmospheric models have been improved by using more timely estimates of the actual state of the ocean. Typically, some type of objective analysis is performed using the data from satellites along with ship, buoy, and bathythermograph observations, and perhaps even climatology, to produce a weekly or daily analysis of global SST. Some of the earlier efforts to produce real-time global temperature analyses have been described by Clancy and Pollak (1983) and Reynolds (1988). However, just as new techniques have been developed for atmospheric data assimilation, improvements have been made to ocean data assimilation systems as well. In 1988, the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC) implemented a global three-dimensional ocean temperature analysis based on the optimum interpolation methodology (Clancy et al., 1990). This system, the Optimum Thermal Interpolation System (OTIS 1.0), was initially distributed on a 2.5° resolution grid, and was later modified to generate fields on a 1.25° grid (OTIS 1.1; Clancy et al., 1992). Other optimum interpolation-based analyses (OTIS 3.0) were developed by FNMOC to perform high-resolution three-dimensional ocean thermal analyses in areas with strong frontal gradients and clearly defined water mass characteristics.
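The optimum interpolation methodology behind OTIS can be sketched as the standard statistical analysis update: the background field is corrected by the innovations, weighted by the background and observation error covariances. The covariances and observation below are toy values for illustration, not FNMOC's configuration.

```python
import numpy as np

def oi_analysis(background, obs, B, R, H):
    """One optimum-interpolation update:
        x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1
    where B is the background error covariance, R the observation error
    covariance, and H the observation operator.
    """
    background = np.asarray(background, float)
    obs = np.asarray(obs, float)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return background + K @ (obs - H @ background)

# Two SST grid points; one ship observation of the first point only:
xb = np.array([15.0, 14.0])                 # background SST (deg C)
B = np.array([[1.0, 0.5], [0.5, 1.0]])      # correlated background errors
H = np.array([[1.0, 0.0]])                  # observe grid point 1
R = np.array([[1.0]])                       # observation error variance
xa = oi_analysis(xb, np.array([16.0]), B, R, H)
```

Note how the spatial correlation in B spreads the single observation's influence to the unobserved neighbouring grid point.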

  3. 3D Thermal Stratification of Koycegiz Lake, Turkey.

    NASA Astrophysics Data System (ADS)

    Gurcan, Tugba; Kurtulus, Bedri; Avsar, Ozgur; Avsar, Ulas

    2017-04-01

    Water temperature in lakes, streams and coastal areas is an important indicator for several purposes (water quality, aquatic organisms, land use, etc.). There are over a hundred lakes in Turkey, most of them located in the area known as the Lake District in southwestern Turkey. The study area is located in the south and southwest of Turkey, in the Muğla region. The present study focuses on determining possible thermocline changes in Lake Köyceğiz by in-situ measurements. The measurements were made during two snapshot campaigns in July and August 2013. Using the Muğla Sıtkı Koçman University geological engineering floating platform, temperature, specific conductance, salinity and depth values were measured with YSI 6600 and Horiba U2 devices at the surface and at depth in Lake Köyceğiz on a specific grid, while the water depth and the coordinates were measured by GPS. Scattered data interpolation was used to interpolate the scattered dataset in 3D space. The 3D temperature color mesh grid was generated using Delaunay triangulation and natural neighbor interpolation. At the end of the study, a 3D conceptual model of the lake's temperature dynamics was reconstructed using MATLAB functions. The results show that Lake Köyceğiz is a meromictic lake with a significant decrease in temperature at 7 m depth. In this regard, we would also like to thank the TUBITAK project (112Y137), the French Embassy in Turkey and the Sıtkı Koçman Foundation for their financial support.

  4. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average or Inverse Distance Weighting (IDW), and nearest point, without the necessary recourse to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range), using PAleontological STatistics (PAST3), before the mean values were interpolated in the selected GIS software for each variable using the kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary over 120.1-219.5 µS cm-1 with kriging, it varied over 105.6-220.0 µS cm-1 and 135.0-173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions based on modelling inferences.
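The practical difference between a fitted Gaussian variogram and a software-default spherical one shows up mostly near the origin, where the Gaussian model rises parabolically and the spherical model linearly. A minimal numpy sketch of the two models in the abstract's nugget/sill/range parameterization (generic textbook forms, not PAST3's or ArcGIS's code):

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng_):
    """Spherical model: rises as 1.5(h/a) - 0.5(h/a)^3, flat past range a."""
    h = np.asarray(h, float)
    hr = np.minimum(h / rng_, 1.0)
    g = nugget + (sill - nugget) * (1.5 * hr - 0.5 * hr ** 3)
    return np.where(h == 0, 0.0, g)       # gamma(0) = 0 by convention

def gaussian_variogram(h, nugget, sill, rng_):
    """Gaussian model: parabolic at the origin, asymptotic to the sill."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (h / rng_) ** 2))
    return np.where(h == 0, 0.0, g)

h_lags = np.linspace(0.0, 2.0, 5)
sph = spherical_variogram(h_lags, 0.0, 1.0, 1.0)
gau = gaussian_variogram(h_lags, 0.0, 1.0, 1.0)
```

At short lags the Gaussian model sits well below the spherical one for the same nugget, sill and range, which is exactly why substituting a default spherical/zero-nugget model changes kriging weights and, as the study argues, the conclusions.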

  5. Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Campbell, L.; Purviance, J.

    1992-01-01

    A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.

  6. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
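For reference, the uniform Catmull-Rom segment discussed in the article can be written as a single cubic with constant coefficients, the tangent at each control point being half the chord between its neighbours:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom segment between p1 and p2 (classic 1974 form).
    p0..p3 are control points (any dimension); t in [0, 1].
    """
    t = np.asarray(t, float)[..., None]
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

pts = [np.array([0.0, 0.0]), np.array([1.0, 1.0]),
       np.array([2.0, 1.0]), np.array([3.0, 0.0])]
curve = catmull_rom(*pts, np.linspace(0, 1, 5))
```

The segment interpolates its two inner control points (p1 at t = 0, p2 at t = 1), which is what makes Catmull-Rom convenient for keyframe animation compared with approximating splines.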

  7. High-Fidelity Real-Time Trajectory Optimization for Reusable Launch Vehicles

    DTIC Science & Technology

    2006-12-01

    [Abstract not indexed; the record excerpt is from the report's list of figures, including "Max DR Yawing Moment History", "Snapshot from MATLAB Profiler", "Propagation using ode45 (Euler Angles)", "Interpolated Elevon Controls using Various MATLAB Schemes" and "Interpolated Flap Controls using Various MATLAB Schemes".]

  8. Visualizing and Understanding the Components of Lagrange and Newton Interpolation

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
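The component functions the article graphs are easy to evaluate numerically. A short sketch of the Lagrange form (Newton's form differs only in the basis used: nested products of (x - x_j) instead of the L_i):

```python
import numpy as np

def lagrange_basis(xs, i, x):
    """i-th Lagrange component L_i(x): equals 1 at node xs[i] and 0 at
    every other node. These are the component functions whose graphs
    the article examines.
    """
    x = np.asarray(x, float)
    L = np.ones_like(x)
    for j, xj in enumerate(xs):
        if j != i:
            L *= (x - xj) / (xs[i] - xj)
    return L

def lagrange_interpolate(xs, ys, x):
    """Interpolating polynomial as a data-weighted sum of components."""
    return sum(y * lagrange_basis(xs, i, x) for i, y in enumerate(ys))

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]      # samples of y = x^2 + 1
vals = lagrange_interpolate(xs, ys, np.array([0.5, 1.5]))
```

Because the data come from a quadratic and three nodes determine a unique quadratic, the interpolant reproduces x² + 1 exactly between the nodes.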

  9. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods produce artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
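The joint-histogram mutual information underlying all of these schemes can be sketched with plain nearest-bin binning. The PV, bilinear, PVH and GPV variants discussed in the paper differ only in how a transformed, non-grid-aligned sample spreads its weight over histogram bins; that weighting is not implemented in this sketch.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI from the joint intensity histogram:
    sum over bins of p(a,b) * log( p(a,b) / (p(a) p(b)) ), in nats.
    """
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = h / h.sum()                         # joint distribution estimate
    pa = p.sum(axis=1, keepdims=True)       # marginal of image A
    pb = p.sum(axis=0, keepdims=True)       # marginal of image B
    nz = p > 0                              # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

rng = np.random.default_rng(1)
a = rng.random((64, 64))
mi_self = mutual_information(a, a)                        # high: identical images
mi_indep = mutual_information(a, rng.random((64, 64)))    # near zero
```

When one image is resampled under a trial transformation, samples fall between grid points; how their contributions are interpolated into this histogram is precisely where the artifacts studied in the paper arise.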

  10. A novel interpolation approach for the generation of 3D-geometric digital bone models from image stacks

    PubMed Central

    Mittag, U.; Kriechbaumer, A.; Rittweger, J.

    2017-01-01

    The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415

  11. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real-time. Theoretically, the proposed method totally eliminates feedrate fluctuation for any 2nd degree NURBS curve and interpolates 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a feedrate planning algorithm is also proposed that generates smooth tool motion while accounting for multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.

  12. INTERPOL's Surveillance Network in Curbing Transnational Terrorism

    PubMed Central

    Gardeazabal, Javier; Sandler, Todd

    2015-01-01

    Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.

  13. New Methods for Air Quality Model Evaluation with Satellite Data

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Harkey, M.

    2015-12-01

    Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. 
    Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI HCHO (NASA retrieval); MOPITT CO (NASA retrieval); and MODIS AOD (NASA retrieval). More information is available at http://nelson.wisc.edu/sage/data-and-models/software.php.

  14. Rotationally Symmetric Operators for Surface Interpolation

    DTIC Science & Technology

    1981-11-01


  15. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906
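Of the quality metrics named above, PSNR is the simplest to reproduce; a minimal sketch (toy images, not the article's test data):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                 # constant error of 10 grey levels -> MSE = 100
print(round(psnr(ref, noisy), 2))  # 10*log10(255^2/100) ≈ 28.13
```

FSIM, SSIM, and MS-SSIM are perceptually richer metrics; implementations exist in packages such as scikit-image rather than in a few lines.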

  16. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques named as Genetic Algorithm (GA). The idea of GA has been formed to optimize the control parameters in the description of newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices along with traditional Peak Signal-to-Noise Ratio (PSNR) are employed as image quality metrics to analyze and compare the outcomes of approach offered in this work, with three of the present digital image interpolation schemes. The upshots show that the proposed scheme is better choice to deal with the problems associated to image interpolation.

  17. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  18. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often suffer frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that embed the motion-path constraint, the method produces natural-looking frame interpolation. We test different types of old film sequences and compare with other methods; the results show that our method achieves the desired performance without hole or ghost effects.

  19. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
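For context, the GC-ensemble side of the argument can be stated with the standard textbook result (a sketch, not an equation from the paper): at zero temperature, the ensemble average over integer-electron ground states gives an energy that is piecewise linear in the electron number,

```latex
E(N_0 + \omega) = (1 - \omega)\,E(N_0) + \omega\,E(N_0 + 1), \qquad 0 \le \omega \le 1 ,
```

so the left and right derivatives at integer $N_0$ are $-I = E(N_0) - E(N_0 - 1)$ and $-A = E(N_0 + 1) - E(N_0)$. Whenever $I \neq A$, the curve has a derivative discontinuity at $N_0$, which is why no smooth E vs. N interpolation can coincide with a GC ensemble average.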

  20. Shape Control in Multivariate Barycentric Rational Interpolation

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa Thang; Cuyt, Annie; Celis, Oliver Salazar

    2010-09-01

    The most stable formula for a rational interpolant for use on a finite interval is the barycentric form [1, 2]. A simple choice of the barycentric weights ensures the absence of (unwanted) poles on the real line [3]. In [4] we indicate that a more refined choice of the weights in barycentric rational interpolation can guarantee comonotonicity and coconvexity of the rational interpolant in addition to a pole-free region of interest. In this presentation we generalize the above to the multivariate case. We use a product-like form of univariate barycentric rational interpolants and indicate how the location of the poles and the shape of the function can be controlled. This functionality is of importance in the construction of mathematical models that need to express a certain trend, such as in probability distributions, economics, population dynamics, tumor growth models, etc.
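The univariate barycentric form referred to in [1, 2] can be sketched in a few lines. The weights below are the classical polynomial-reproducing weights for these equispaced nodes, chosen for illustration; they are not the refined shape-controlling weights of [4]:

```python
import numpy as np

def barycentric_eval(x, nodes, values, weights):
    """r(x) = sum_i w_i f_i / (x - x_i)  /  sum_i w_i / (x - x_i)."""
    x = np.asarray(x, dtype=float)
    diff = x[..., None] - nodes
    exact = np.isclose(diff, 0.0)
    diff = np.where(exact, 1.0, diff)       # avoid division by zero at the nodes
    terms = weights / diff
    r = (terms * values).sum(-1) / terms.sum(-1)
    # at a node, return the data value itself
    return np.where(exact.any(-1), (exact * values).sum(-1), r)

nodes = np.array([0.0, 1.0, 2.0, 3.0])
values = nodes ** 2
weights = np.array([-1.0, 3.0, -3.0, 1.0])  # (-1)^i C(3, i) up to sign: polynomial case
print(round(float(barycentric_eval(1.5, nodes, values, weights)), 6))  # 2.25
```

Changing the weights changes the interpolant's pole locations and shape, which is precisely the degree of freedom the presentation exploits.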

  1. Pre-inverted SESAME data table construction enhancements to correct unexpected inverse interpolation pathologies in EOSPAC 6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pimentel, David A.; Sheppard, Daniel G.

    It was recently demonstrated that EOSPAC 6 continued to incorrectly create and interpolate pre-inverted SESAME data tables after the release of version 6.3.2beta.2. Significant interpolation pathologies were discovered to occur when EOSPAC 6's host software enabled pre-inversion with the EOS_INVERT_AT_SETUP option. This document describes a solution that uses data transformations found in EOSPAC 5 and its predecessors. The numerical results and performance characteristics of both the default and pre-inverted interpolation modes in both EOSPAC 6.3.2beta.2 and the fixed logic of EOSPAC 6.4.0beta.1 are presented herein, and the latter software release is shown to produce significantly improved numerical results for the pre-inverted interpolation mode.

  2. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
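A hedged sketch of the thin-plate-spline machinery underlying the method, using the standard 2-D kernel U(r) = r² log r (toy points stand in for the paper's training set of thumbnails and illumination chromaticities):

```python
import numpy as np

def tps_fit(points, values, eps=1e-10):
    """Fit f(p) = a0 + a·p + sum_i c_i U(|p - p_i|) with U(r) = r^2 log r."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    K = np.where(d > eps, d ** 2 * np.log(np.maximum(d, eps)), 0.0)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T   # standard TPS system
    rhs = np.concatenate([values, np.zeros(3)])    # side conditions on c
    return np.linalg.solve(A, rhs)

def tps_eval(coef, points, q):
    d = np.linalg.norm(q - points, axis=-1)
    U = np.where(d > 1e-10, d ** 2 * np.log(np.maximum(d, 1e-10)), 0.0)
    n = len(points)
    return coef[:n] @ U + coef[n] + coef[n + 1:] @ q

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 0.9])
coef = tps_fit(pts, vals)
print(round(float(tps_eval(coef, pts, pts[3])), 6))  # 2.0: interpolates the data exactly
```

In the paper's setting the inputs are image thumbnails and the outputs are illumination chromaticities; the spline structure is the same.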

  3. "The Effect of Alternative Representations of Lake ...

    EPA Pesticide Factsheets

    Lakes can play a significant role in regional climate, modulating inland extremes in temperature and enhancing precipitation. Representing these effects becomes more important as regional climate modeling (RCM) efforts focus on simulating smaller scales. When using the Weather Research and Forecasting (WRF) model to downscale future global climate model (GCM) projections into RCM simulations, model users typically must rely on the GCM to represent temperatures at all water points. However, GCMs have insufficient resolution to adequately represent even large inland lakes, such as the Great Lakes. Some interpolation methods, such as setting lake surface temperatures (LSTs) equal to the nearest water point, can result in inland lake temperatures being set from sea surface temperatures (SSTs) that are hundreds of km away. In other cases, a single point is tasked with representing multiple large, heterogeneous lakes. Similar consequences can result from interpolating ice from GCMs to inland lake points, resulting in lakes as large as Lake Superior freezing completely in the space of a single timestep. The use of a computationally-efficient inland lake model can improve RCM simulations where the input data is too coarse to adequately represent inland lake temperatures and ice (Gula and Peltier 2012). This study examines three scenarios under which ice and LSTs can be set within the WRF model when applied as an RCM to produce 2-year simulations at 12 km grid spacing.

  4. Integrating Map Algebra and Statistical Modeling for Spatio- Temporal Analysis of Monthly Mean Daily Incident Photosynthetically Active Radiation (PAR) over a Complex Terrain.

    PubMed

    Evrendilek, Fatih

    2007-12-12

    This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.

  5. Enhancing Groundwater Cost Estimation with the Interpolation of Water Tables across the United States

    NASA Astrophysics Data System (ADS)

    Rosli, A. U. M.; Lall, U.; Josset, L.; Rising, J. A.; Russo, T. A.; Eisenhart, T.

    2017-12-01

    Analyzing the trends in water use and supply across the United States is fundamental to efforts in ensuring water sustainability. As part of this, estimating the costs of producing or obtaining water (water extraction) and the correlation with water use is an important aspect in understanding the underlying trends. This study estimates groundwater costs by interpolating the depth to water level across the US in each county. We use Ordinary and Universal Kriging, accounting for the differences between aquifers. Kriging generates a best linear unbiased estimate at each location and has been widely used to map ground-water surfaces (Alley, 1993). The spatial covariates included in the universal kriging were land-surface elevation as well as aquifer information. The average water table is computed for each county using block kriging to obtain a national map of groundwater cost, which we compare with survey estimates of depth to the water table performed by the USDA. Groundwater extraction costs were then assumed to be proportional to water table depth. Beyond estimating the water cost, the approach can provide an indication of groundwater stress by exploring the historical evolution of depth to the water table using time series information between 1960 and 2015. Despite data limitations, we hope to enable a more compelling and meaningful national-level analysis through the quantification of cost and stress for more economically efficient water management.

  6. Interpolating moving least-squares methods for fitting potential energy surfaces: using classical trajectories to explore configuration space.

    PubMed

    Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L

    2009-04-14

    We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis-->trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
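The core of an IMLS/MLS fit is a polynomial least-squares problem whose weights decay with distance from the evaluation point. A one-dimensional sketch (a Gaussian weight function and a sine "potential" are assumed here for illustration; the paper's weight function, error estimator, and PES dimensionality differ):

```python
import numpy as np

def mls_eval(x, xd, yd, h=0.5, degree=2):
    """Local weighted polynomial fit centred at x; weights decay with distance."""
    sw = np.sqrt(np.exp(-((xd - x) / h) ** 2))   # sqrt-weights for weighted least squares
    V = np.vander(xd - x, degree + 1)            # basis [(x_i - x)^2, (x_i - x), 1]
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * yd, rcond=None)
    return float(coef[-1])                       # constant term = fitted value at x

xd = np.linspace(0.0, 2.0 * np.pi, 20)           # "ab initio" sample points
yd = np.sin(xd)                                  # toy stand-in for PES energies
print(mls_eval(1.0, xd, yd))  # close to sin(1.0) ≈ 0.8415
```

In the paper's schemes, the discrepancy between fits of different degree at a query point serves as the fitting-error estimate that triggers new ab initio calculations.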

  7. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. A special focus is on applying new hybrid methods to the spatial interpolation of wind data; this study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks, as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights.
At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines and for 850 hPa it is followed by the different types of support vector machine and ordinary kriging. Overall, explanatory variables improve the interpolation results.
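Of the deterministic methods compared above, inverse distance weighting is the simplest; a minimal sketch for one wind component at scattered stations (coordinates and values invented for illustration):

```python
import numpy as np

def idw(query, points, values, power=2.0):
    """Inverse distance weighting: nearby stations dominate the weighted average."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                       # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u_wind = np.array([2.0, 4.0, 6.0])           # u component at each station, m/s
print(idw(np.array([0.5, 0.5]), stations, u_wind))  # 4.0 (equidistant from all three)
```

Ordinary kriging replaces the fixed 1/d^p weights with weights derived from a fitted variogram, and the hybrid methods add covariates; IDW is the baseline against which those gains are measured.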

  8. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
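The propagation scheme for a single grid cell can be sketched as follows. The linear `pet_toy` function and its coefficients are invented stand-ins for the real PET model, and the correlations of interpolation errors among the variables (which the paper includes) are omitted; the kriging SDs are the spring-season averages quoted above:

```python
import numpy as np

rng = np.random.default_rng(42)

def pet_toy(temp, rh, wind):
    """Toy stand-in for a PET model (the paper's actual model is not reproduced)."""
    return 0.5 * temp - 0.05 * rh + 0.8 * wind

# Kriged inputs at one grid cell, with kriging SDs as the interpolation uncertainty
temp, temp_sd = 15.0, 2.6      # °C
rh, rh_sd = 60.0, 8.7          # %
wind, wind_sd = 3.0, 0.38      # m/s

n = 10_000                     # Monte Carlo draws (the paper used 100 per cell)
samples = pet_toy(rng.normal(temp, temp_sd, n),
                  rng.normal(rh, rh_sd, n),
                  rng.normal(wind, wind_sd, n))
cv = samples.std() / samples.mean() * 100.0
print(f"PET mean {samples.mean():.2f}, CV {cv:.0f}%")
```

Repeating this at every grid point yields the maps of PET means and CVs the paper describes.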

  9. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management

    NASA Astrophysics Data System (ADS)

    Yuval; Rimon, Y.; Graber, E. R.; Furman, A.

    2013-07-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. That inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations.
The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli shoreline.
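A sketch of IDW with inclusion zones, including the unassigned-grid-point behaviour described above (circular zones only; well positions and concentrations are invented):

```python
import numpy as np

def idw_inclusion(query, points, values, radius, power=2.0):
    """IDW restricted to an inclusion zone; returns None when no observation is inside."""
    d = np.linalg.norm(points - query, axis=1)
    inside = d < radius
    if not inside.any():
        return None                          # grid point left unassigned
    d, v = d[inside], values[inside]
    if np.any(d == 0):
        return float(v[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * v) / np.sum(w))

wells = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
conc = np.array([0.0, 8.0, 120.0])           # zeros in clean zones, high values in plumes
print(idw_inclusion(np.array([0.5, 0.0]), wells, conc, radius=2.0))  # 4.0
print(idw_inclusion(np.array([3.0, 2.0]), wells, conc, radius=1.0))  # None
```

Shrinking the radius improves accuracy (distant, irrelevant wells are excluded) but produces more `None` cells, which is exactly the accuracy/coverage trade-off the abstract describes.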

  10. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management.

    PubMed

    Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex

    2014-08-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations.
The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli shoreline. The implications for aquifer management are discussed.

  11. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    NASA Astrophysics Data System (ADS)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients ( R 2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R 2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. 
The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
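
    The thin-plate spline fit named in this record can be sketched in a minimal 2-D form. The following is an illustrative implementation under stated assumptions (synthetic "station" data, exact interpolation with no smoothing or tension parameter), not the authors' code:

```python
import numpy as np

def tps_fit(xy, z):
    """Fit a 2-D thin-plate spline f(p) = a0 + a1*x + a2*y + sum_i w_i * U(|p - p_i|),
    with kernel U(r) = r^2 * log(r) (and U(0) = 0)."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])              # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])   # standard TPS system
    b = np.concatenate([z, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                           # kernel weights, affine coeffs

def tps_eval(xy_train, w, a, xy_new):
    d = np.linalg.norm(xy_new[:, None, :] - xy_train[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d > 0, d**2 * np.log(d), 0.0)
    return U @ w + a[0] + xy_new @ a[1:]

# Synthetic "stations": temperature varying smoothly with position
rng = np.random.default_rng(0)
stations = rng.uniform(0, 10, size=(40, 2))
temps = 20 + 0.5 * stations[:, 0] - 0.3 * stations[:, 1]
w, a = tps_fit(stations, temps)
grid = np.array([[2.0, 3.0], [7.5, 8.1]])
print(tps_eval(stations, w, a, grid))   # affine field is reproduced exactly: ≈ [20.1, 21.32]
```

    Because the affine terms lie in the spline's null space, an affine input field is recovered exactly; real applications add longitude/latitude/elevation covariates and smoothing, which are omitted here.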

  12. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging interpolation algorithm produces an unbiased prediction and can also calculate the spatial distribution of uncertainty, allowing the interpolation error to be estimated at any particular point. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, making the technique feasible on almost any computer processor.
A comparison between kriging and other standard interpolation methods demonstrated more accurate estimates on less dense data files.
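
    A minimal ordinary-kriging sketch over a regular grid neighborhood, assuming an exponential variogram with illustrative parameters; the paper's directional variogram modeling and memory-tiling optimizations are not reproduced:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng_=3.0, nugget=0.0):
    # Assumed exponential variogram model (parameters are illustrative only)
    return nugget + sill * (1.0 - np.exp(-h / rng_))

def ordinary_krige(pts, vals, query, variogram=exp_variogram):
    """Ordinary kriging of one query point from n neighborhood points.
    Solves [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma; 1]."""
    n = len(pts)
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(h)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(variogram(np.linalg.norm(pts - query, axis=-1)), 1.0)
    w = np.linalg.solve(A, b)[:n]      # kriging weights (they sum to 1)
    return w @ vals

# Regular 3x3 grid neighborhood around the query point, as in a gridded geoid file
gx, gy = np.meshgrid([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
pts = np.column_stack([gx.ravel(), gy.ravel()])
vals = 5.0 + 0.2 * pts[:, 0] + 0.1 * pts[:, 1]     # smooth synthetic surface
print(ordinary_krige(pts, vals, np.array([1.25, 0.5])))
```

    On a regular grid the pairwise-distance matrix is the same for every cell, which is exactly the structure the paper exploits to avoid recomputing it.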

  13. [Improvement of Digital Capsule Endoscopy System and Image Interpolation].

    PubMed

    Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai

    2016-01-01

Traditional endoscopy capsules collect and transmit analog images, with weak anti-interference ability, low frame rate, and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images with a frame rate of up to 30 frames/s and a pixel resolution of 400 x 400. The image is compressed inside the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm, based on the relationship between the image planes, is proposed to obtain higher-quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation

  14. Automatic feature-based grouping during multiple object tracking.

    PubMed

    Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J

    2013-12-01

    Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.

  15. An ab initio global potential-energy surface for NH2(A(2)A') and vibrational spectrum of the Renner-Teller A(2)A'-X(2)A" system.

    PubMed

    Zhou, Shulan; Li, Zheng; Xie, Daiqian; Lin, Shi Ying; Guo, Hua

    2009-05-14

    A global potential-energy surface for the first excited electronic state of NH(2)(A(2)A(')) has been constructed by three-dimensional cubic spline interpolation of more than 20,000 ab initio points, which were calculated at the multireference configuration-interaction level with the Davidson correction using the augmented correlation-consistent polarized valence quadruple-zeta basis set. The (J=0) vibrational energy levels for the ground (X(2)A(")) and excited (A(2)A(')) electronic states of NH(2) were calculated on our potential-energy surfaces with the diagonal Renner-Teller terms. The results show a good agreement with the experimental vibrational frequencies of NH(2) and its isotopomers.

  16. Is Interpolation Cognitively Encapsulated? Measuring the Effects of Belief on Kanizsa Shape Discrimination and Illusory Contour Formation

    ERIC Educational Resources Information Center

    Keane, Brian P.; Lu, Hongjing; Papathomas, Thomas V.; Silverstein, Steven M.; Kellman, Philip J.

    2012-01-01

    Contour interpolation is a perceptual process that fills-in missing edges on the basis of how surrounding edges (inducers) are spatiotemporally related. Cognitive encapsulation refers to the degree to which perceptual mechanisms act in isolation from beliefs, expectations, and utilities (Pylyshyn, 1999). Is interpolation encapsulated from belief?…

  17. Response Surface Methods For Spatially-Resolved Optical Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.

    2003-01-01

Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half-width uncertainties in the fit as low as +/- 30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-lightweight, inflatable space antenna at NASA Langley Research Center. Photogrammetry is used to simultaneously measure the shape of the antenna at approximately 500 discrete spatial locations. RSM allows an analytic model to be developed that describes the shape of the majority of the antenna with an uncertainty of 0.4 mm, with 95% confidence. This model would allow a quantitative comparison between the actual shape of the antenna and the original design shape. Accurately determining this shape also allows confident interpolation between the measured points. Such a model could, for example, be used for ray tracing of radio-frequency waves up to 95 GHz to predict the performance of the antenna.
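
    Reduced to its simplest form, the RSM fitting step is a least-squares fit of an analytic surface to scattered noisy measurements. A hedged sketch with a synthetic quadratic response (not CARS or photogrammetry data; the basis and noise level are illustrative assumptions):

```python
import numpy as np

def quadratic_design(x, y):
    # Full quadratic response-surface basis in two variables
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

rng = np.random.default_rng(1)
# Synthetic "measurement" locations and a noisy quadratic response
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
true = 300 + 50 * x - 30 * y + 10 * x * y + 20 * x**2
obs = true + rng.normal(0, 5.0, size=x.shape)      # point-to-point scatter

X = quadratic_design(x, y)
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)     # least-squares surface fit
fit = quadratic_design(np.array([0.3]), np.array([-0.2])) @ coef
print(fit)
```

    The fitted surface averages over the scatter, which is why the confidence half-width of the fit can be an order of magnitude smaller than the single-point standard deviation quoted in the record.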

  18. A subdivision-based parametric deformable model for surface extraction and statistical shape modeling of the knee cartilages

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2006-03-01

    Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.

  19. Global boundary flattening transforms for acoustic propagation under rough sea surfaces.

    PubMed

    Oba, Roger M

    2010-07-01

    This paper introduces a conformal transform of an acoustic domain under a one-dimensional, rough sea surface onto a domain with a flat top. This non-perturbative transform can include many hundreds of wavelengths of the surface variation. The resulting two-dimensional, flat-topped domain allows direct application of any existing, acoustic propagation model of the Helmholtz or wave equation using transformed sound speeds. Such a transform-model combination applies where the surface particle velocity is much slower than sound speed, such that the boundary motion can be neglected. Once the acoustic field is computed, the bijective (one-to-one and onto) mapping permits the field interpolation in terms of the original coordinates. The Bergstrom method for inverse Riemann maps determines the transform by iterated solution of an integral equation for a surface matching term. Rough sea surface forward scatter test cases provide verification of the method using a particular parabolic equation model of the Helmholtz equation.

  20. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were applied for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely, Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentages, of up to 4% for CC, 3% for PRD and 94% for WDM, than the existing method using ECG recordings with the lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
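
    The two interpolants compared in this record are available in SciPy; a minimal upsampling sketch on a synthetic sine standing in for an ECG segment (the sampling rates are illustrative, not the study's):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

fs_low, fs_high = 50, 250                  # illustrative sampling rates (Hz)
t_low = np.arange(0, 1, 1 / fs_low)
sig = np.sin(2 * np.pi * 5 * t_low)        # synthetic stand-in for an ECG segment

# Upsample to the high rate with both interpolants (no extrapolation past t_low[-1])
t_high = np.arange(0, t_low[-1], 1 / fs_high)
up_pchip = PchipInterpolator(t_low, sig)(t_high)
up_spline = CubicSpline(t_low, sig)(t_high)
```

    PCHIP preserves monotonicity and avoids overshoot near sharp features such as the QRS complex, while the cubic spline is smoother (C²) on slowly varying segments; the study compares exactly this trade-off.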

  1. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
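
    A minimal sketch of the plain (non-conservative) bilinear interpolation of x-velocity on a 2-D staggered grid, i.e., the baseline that the conservative correction in this record improves on; the correction terms themselves are not reproduced, and the face layout is an assumed convention:

```python
import numpy as np

def interp_vx(vx, dx, dy, px, py):
    """Bilinear interpolation of x-velocity on a 2-D staggered grid.
    Assumed layout: vx[j, i] lives on vertical cell faces at
    x = i*dx, y = (j + 0.5)*dy; indices are clamped at the edges."""
    ny, nx = vx.shape
    i = min(max(int(px / dx), 0), nx - 2)
    j = min(max(int(py / dy - 0.5), 0), ny - 2)
    tx = px / dx - i                    # local coordinates in [0, 1]
    ty = py / dy - 0.5 - j
    return ((1 - tx) * (1 - ty) * vx[j, i] + tx * (1 - ty) * vx[j, i + 1]
            + (1 - tx) * ty * vx[j + 1, i] + tx * ty * vx[j + 1, i + 1])

# x-velocity field vx = x on a small grid: interpolation should return px
dx = dy = 1.0
xs = np.arange(4) * dx                  # face x-coordinates
vx = np.tile(xs, (4, 1))                # vx depends on x only
print(interp_vx(vx, dx, dy, 1.7, 2.2))  # ≈ 1.7
```

    Bilinear weights reproduce linear fields exactly, but nothing constrains the divergence at the marker location, which is the deficiency the cited correction schemes address.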

  2. A general tool for the evaluation of spiral CT interpolation algorithms: revisiting the effect of pitch in multislice CT.

    PubMed

    Bricault, Ivan; Ferretti, Gilbert

    2005-01-01

    While multislice spiral computed tomography (CT) scanners are provided by all major manufacturers, their specific interpolation algorithms have been rarely evaluated. Because the results published so far relate to distinct particular cases and differ significantly, there are contradictory recommendations about the choice of pitch in clinical practice. In this paper, we present a new tool for the evaluation of multislice spiral CT z-interpolation algorithms, and apply it to the four-slice case. Our software is based on the computation of a "Weighted Radiation Profile" (WRP), and compares WRP to an expected ideal profile in terms of widening and heterogeneity. It provides a unique scheme for analyzing a large variety of spiral CT acquisition procedures. Freely chosen parameters include: number of detector rows, detector collimation, nominal slice width, helical pitch, and interpolation algorithm with any filter shape and width. Moreover, it is possible to study any longitudinal and off-isocenter positions. Theoretical and experimental results show that WRP, more than Slice Sensitivity Profile (SSP), provides a comprehensive characterization of interpolation algorithms. WRP analysis demonstrates that commonly "preferred helical pitches" are actually nonoptimal regarding the formerly distinguished z-sampling gap reduction criterion. It is also shown that "narrow filter" interpolation algorithms do not enable a general preferred pitch discussion, since they present poor properties with large longitudinal and off-center variations. In the more stable case of "wide filter" interpolation algorithms, SSP width or WRP widening are shown to be almost constant. Therefore, optimal properties should no longer be sought in terms of these criteria. On the contrary, WRP heterogeneity is related to variable artifact phenomena and can pertinently characterize optimal pitches. In particular, the exemplary interpolation properties of pitch = 1 "wide filter" mode are demonstrated.

  3. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania’s South Esk hydrology model developed by CSIRO. Root-mean-squared-error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
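
    The conventional IDW baseline referenced in this record is compact enough to sketch directly (synthetic data; the DDW variant and the reduction-based temporal treatment are not shown):

```python
import numpy as np

def idw(pts, vals, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: w_i = 1 / d_i^p, normalized to sum to 1."""
    d = np.linalg.norm(pts - query, axis=-1)
    if np.any(d < eps):                 # query coincides with a data point
        return vals[np.argmin(d)]
    w = 1.0 / d**power
    return (w @ vals) / w.sum()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(idw(pts, vals, np.array([0.5, 0.5])))  # equidistant corners -> plain mean, 2.5
```

    In the reduction approach, the same spatial interpolator is applied at each time step and the temporal dimension is handled separately, rather than folding time into the distance metric.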

  4. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  5. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.

  6. High Resolution Surface Geometry and Albedo by Combining Laser Altimetry and Visible Images

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; vonToussaint, Udo; Cheeseman, Peter C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The need for accurate geometric and radiometric information over large areas has become increasingly important. Laser altimetry is one of the key technologies for obtaining this geometric information. However, there are important application areas where the observing platform has its orbit constrained by the other instruments it is carrying, and so the spatial resolution that can be recorded by the laser altimeter is limited. In this paper we show how information recorded by one of the other instruments commonly carried, a high-resolution imaging camera, can be combined with the laser altimeter measurements to give a high resolution estimate both of the surface geometry and its reflectance properties. This estimate has an accuracy unavailable from other interpolation methods. We present the results from combining synthetic laser altimeter measurements on a coarse grid with images generated from a surface model to re-create the surface model.

  7. Multisensor satellite data integration for sea surface wind speed and direction determination

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Pihos, G. G.; Wheelock, S. L.

    1984-01-01

Techniques to integrate meteorological data from various satellite sensors to yield a global measure of sea surface wind speed and direction for input to the Navy's operational weather forecast models were investigated. The sensors considered, some already launched and others planned at the time, were the GOES visible and infrared imaging sensor, the Nimbus-7 SMMR, and the DMSP SSM/I instrument. An algorithm was developed for extrapolating to the sea surface the wind directions derived from successive GOES cloud images. This wind veering algorithm is relatively simple, accounts for the major physical variables, and seems to represent the best solution that can be found with existing data. An algorithm for the interpolation of the scattered observed data to a common geographical grid was implemented. The algorithm is based on a combination of inverse distance weighting and trend surface fitting, and is suited to combining wind data from disparate sources.

  8. Countering Terrorist Financing: A Case Study of the Kurdistan Workers Party (PKK)

    DTIC Science & Technology

    2014-12-01

category of Improving Global Anti-Money Laundering/Countering Terrorist Financing Compliance.197 Turkey’s further countermeasures would serve both to...directives in 1991, 2001, and 2005. The first two directives brought responsibilities to financial institutions about anti-money laundering and required...as the PKK’s money laundering center. Furthermore, the International Criminal Police Organization (Interpol) identified the PKK’s involvement in

  9. Fluid-Rock Characterization and Interactions in NMR Well Logging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George J. Hirasaki; Kishore K. Mohanty

    2005-09-05

The objective of this report is to characterize the fluid properties and fluid-rock interactions that are needed for formation evaluation by NMR well logging. The advances made in the understanding of NMR fluid properties are summarized in a chapter written for an AAPG book on NMR well logging. This includes live oils, viscous oils, natural gas mixtures, and the relation between relaxation time and diffusivity. Oil based drilling fluids can have an adverse effect on NMR well logging if they alter the wettability of the formation. The effects of various surfactants on wettability and surface relaxivity are evaluated for silica sand. The relation between the relaxation time and diffusivity distinguishes the response of brine, oil, and gas in a NMR well log. A new NMR pulse sequence in the presence of a field gradient and a new inversion technique enable the T2 and diffusivity distributions to be displayed as a two-dimensional map. The objectives of pore morphology and rock characterization are to identify vug connectivity by using X-ray CT scans, and to improve the NMR permeability correlation. Improved estimation of permeability from NMR response is possible by using estimated tortuosity as a parameter to interpolate between two existing permeability models.

  10. Emission shaping in fluorescent proteins: role of electrostatics and π-stacking.

    PubMed

    Park, Jae Woo; Rhee, Young Min

    2016-02-07

    For many decades, simulating the excited state properties of complex systems has been an intriguing but daunting task due to its high computational cost. Here, we apply molecular dynamics based techniques with interpolated potential energy surfaces toward calculating fluorescence spectra of the green fluorescent protein (GFP) and its variants in a statistically meaningful manner. With the GFP, we show that the diverse electrostatic tuning can shape the emission features in many different ways. By computationally modulating the electrostatic interactions between the chromophore phenoxy oxygen and its nearby residues, we demonstrate that we indeed can shift the emission to the blue or to the red side in a predictable manner. We rationalize the shifting effects of individual residues in the GFP based on the responses of both the adiabatic and the diabatic electronic states of the chromophore. We next exhibit that the yellow emitting variant, the Thr203Tyr mutant, generates changes in the electrostatic interactions and an additional π-stacking interaction. These combined effects indeed induce a red shift to emit the fluorescence into the yellow region. With the series of demonstrations, we suggest that our approach can provide sound rationales and useful insights in understanding different responses of various fluorescent complexes, which may be helpful in designing new light emitting proteins and other related systems in future studies.

  11. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
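
    Zero fill itself can be sketched in a few lines with NumPy FFTs; this assumes an even-length input and an integer upsampling factor of at least 2 (the repetitive-convolution alternative is not shown):

```python
import numpy as np

def zero_fill_interp(x, factor):
    """Spectral interpolation by zero padding the FFT (len(x) even, factor >= 2)."""
    n = len(x)
    m = n * factor
    X = np.fft.fft(x)
    Y = np.zeros(m, dtype=complex)
    Y[: n // 2] = X[: n // 2]              # positive frequencies
    Y[m - n // 2 + 1:] = X[n // 2 + 1:]    # negative frequencies
    Y[n // 2] = X[n // 2] / 2              # split the Nyquist bin symmetrically
    Y[m - n // 2] = X[n // 2] / 2
    return np.fft.ifft(Y).real * factor    # rescale for the longer transform

t = np.arange(32) / 32
x = np.sin(2 * np.pi * 3 * t)
y = zero_fill_interp(x, 4)                 # 128 samples; y[::4] matches x
```

    For a band-limited input the padded spectrum reconstructs the signal exactly between the original samples; the cost is a full length-m inverse FFT, which is the overhead the convolution method of this record avoids.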

  12. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
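
    A hedged sketch of the spline-subtraction step on a synthetic spectrum, with hand-picked interpolation nodes; the paper's automatic node selection (smooth fitting plus discriminant formulations) is not reproduced:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic EDXRF-like spectrum: smooth background plus two Gaussian peaks
e = np.linspace(0, 20, 500)                     # energy axis (keV, illustrative)
background = 100 * np.exp(-e / 8.0)
peaks = 80 * np.exp(-((e - 6) / 0.2) ** 2) + 50 * np.exp(-((e - 14) / 0.3) ** 2)
spectrum = background + peaks

# Interpolation nodes chosen in peak-free regions (hand-picked for this sketch)
node_e = np.array([0.0, 2.0, 4.0, 9.0, 11.0, 16.0, 20.0])
node_y = np.interp(node_e, e, spectrum)
est_background = CubicSpline(node_e, node_y)(e)  # spline through the nodes
net = spectrum - est_background                  # background-subtracted spectrum
```

    The quality of the subtraction hinges entirely on the nodes landing in peak-free regions, which is why the paper devotes its method to locating them automatically.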

  13. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

    The robust algorithm OPED for the reconstruction of images from Radon data has been recently developed. This reconstructs an image from parallel data within a special scanning geometry that does not need rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, there appear empty cells in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most recommendable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved to be advantageous only for small sinograms (below 50 fan views).

  14. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
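
    A CPU-only sketch of the adaptive-power idea, with brute-force kNN standing in for the paper's even-grid search and an illustrative density-to-power mapping (the paper's exact mapping is not reproduced):

```python
import numpy as np

def aidw(pts, vals, query, k=8):
    """Adaptive IDW sketch: the power parameter is raised where the
    k nearest neighbors are sparse and lowered where they are dense."""
    d = np.linalg.norm(pts - query, axis=-1)
    nn = np.argpartition(d, k)[:k]          # brute-force kNN (even-grid search in the paper)
    dn = d[nn]
    if dn.min() < 1e-12:                    # query coincides with a data point
        return vals[nn[np.argmin(dn)]]
    # Heuristic, illustrative mapping of local density to a power in [1, 4]
    alpha = 1.0 + 3.0 * dn.mean() / dn.max()
    w = 1.0 / dn**alpha
    return (w @ vals[nn]) / w.sum()

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(500, 2))
vals = pts[:, 0] + pts[:, 1]                # smooth synthetic field z = x + y
print(aidw(pts, vals, np.array([5.0, 5.0])))  # ≈ 10 for this smooth field
```

    On a GPU, the kNN stage dominates; replacing brute force with a grid lookup is what yields the reported speedups.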

  15. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  16. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
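
    The TSVD idea compared in the paper — discard singular values below a threshold before forming the pseudoinverse — is easy to sketch. A minimal example on a deliberately ill-conditioned toy system; the matrix and tolerance are illustrative, not an LS_NUFFT interpolator design:

```python
import numpy as np

def tsvd_solve(A, b, rtol=1e-3):
    """Truncated-SVD regularised solve of A x = b.

    Singular values below rtol * s_max are discarded, which stabilises
    the pseudoinverse when A is ill-conditioned (as the interpolator
    design problem can become for large kernel sizes).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rtol * s[0]
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Nearly rank-deficient toy system: plain inversion would be unstable,
# but the truncated pseudoinverse returns the minimum-norm solution.
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-12]])
b = np.array([2.0, 2.0])
x = tsvd_solve(A, b)
```

    Here the second singular value (about 5e-13) is truncated, and the solver returns the minimum-norm solution [1, 1] of the rank-1 system instead of amplifying noise along the near-null direction.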

  17. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare them to the best known CPU implementations as well as to a well-known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators using SSE vector operations that are faster than any previously published solution.
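
    Tetrahedral interpolation itself is compact: the cube containing the query point is split into six tetrahedra according to the ordering of the fractional coordinates, and the result is a four-point barycentric blend. A scalar CPU sketch for a 3D LUT (the GPU memory-layout optimizations discussed in the paper are out of scope here):

```python
import numpy as np

def tetra_interp(lut, p):
    """Tetrahedral interpolation in a 3D lookup table.

    lut -- array of shape (N, N, N, C), sampled on a uniform grid over [0,1]^3
    p   -- query point in [0,1]^3
    """
    n = lut.shape[0] - 1
    q = np.clip(np.asarray(p, float) * n, 0, n - 1e-9)
    base = q.astype(int)
    f = q - base                        # fractional position inside the cube
    order = np.argsort(-f)              # axes sorted by descending fraction
    fs = f[order]
    # Barycentric weights for the tetrahedron selected by the ordering:
    # [1 - f_a, f_a - f_b, f_b - f_c, f_c] for fs = [f_a, f_b, f_c]
    weights = np.empty(4)
    weights[0] = 1.0 - fs[0]
    weights[1:3] = fs[:2] - fs[1:3]
    weights[3] = fs[2]
    # Walk from corner (0,0,0) to (1,1,1), adding one axis at a time.
    corner = base.copy()
    out = weights[0] * lut[tuple(corner)]
    for i in range(3):
        corner[order[i]] += 1
        out = out + weights[i + 1] * lut[tuple(corner)]
    return out

# Sanity check on a LUT that stores the identity mapping: tetrahedral
# interpolation is exact for linear data, so the output equals the input.
N = 5
g = np.linspace(0.0, 1.0, N)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
res = tetra_interp(lut, [0.3, 0.7, 0.2])
```

    Because the four weights sum to one and the blend is affine, any linear mapping stored in the LUT is reproduced exactly, which makes the identity LUT a convenient correctness test.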

  18. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, between which linear interpolation is later applied in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. 
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
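
    The integral-preserving idea — add supporting points inside each interval and choose their values so that piecewise-linear interpolation reproduces the cell integrals — can be sketched in 1D. This is an illustrative scheme only; the published algorithm additionally enforces non-negativity and applies the monotonicity filter mentioned above:

```python
import numpy as np

def conservative_linear_remap(cell_means):
    """Continuous, piecewise-linear reconstruction from unit-interval means.

    Each interval k gets three nodes (left edge, midpoint, right edge).
    Edge values are averages of the adjacent cell means; the midpoint is
    solved so that the integral over the interval equals the cell mean:
        (a + 2*mid + b) / 4 == mean  =>  mid = (4*mean - a - b) / 2
    """
    m = np.asarray(cell_means, float)
    edges = np.empty(m.size + 1)
    edges[1:-1] = 0.5 * (m[:-1] + m[1:])
    edges[0], edges[-1] = m[0], m[-1]
    mids = (4.0 * m - edges[:-1] - edges[1:]) / 2.0
    # Interleave node positions (edges at integers, midpoints at k + 0.5)
    x = np.sort(np.concatenate([np.arange(m.size + 1.0),
                                np.arange(m.size) + 0.5]))
    y = np.empty_like(x)
    y[0::2] = edges
    y[1::2] = mids
    return x, y

x, y = conservative_linear_remap([1.0, 3.0, 2.0])
```

    The trapezoidal integral of the reconstruction over each interval equals that interval's original mean, while the shared edge values keep the curve continuous across interval boundaries.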

  19. Computation of viscous transonic flow about a lifting airfoil

    NASA Technical Reports Server (NTRS)

    Walitt, L.; Liu, C. Y.

    1976-01-01

    The viscous transonic flow about a stationary body in free air was numerically investigated. The geometry chosen was a symmetric NACA 64A010 airfoil at a freestream Mach number of 0.8, a Reynolds number of 4 million based on chord, and angles of attack of 0 and 2 degrees. These conditions were such that, at 2 degrees incidence, unsteady periodic motion was calculated along the aft portion of the airfoil and in its wake. Although no unsteady measurements were made for the NACA 64A010 airfoil at these flow conditions, interpolated steady measurements of lift, drag, and surface static pressures compared favorably with corresponding computed time-averaged lift, drag, and surface static pressures.

  20. A Near-Surface Burst EMP Driver Package for Neutron-Induced Sources.

    DTIC Science & Technology

    1980-09-01


  1. Approximating basins of attraction for dynamical systems via stable radial bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavoretto, R.; De Rossi, A.; Perracchione, E.

    2016-06-08

    In applied sciences it is often required to model and supervise temporal evolution of populations via dynamical systems. In this paper, we focus on the problem of approximating the basins of attraction of such models for each stable equilibrium point. We propose to reconstruct the basins via an implicit interpolant using stable radial bases, obtaining the surfaces by partitioning the phase space into disjoint regions. An application to a competition model presenting jointly three stable equilibria is considered.
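
    Reconstructing a surface from scattered samples with radial bases can be sketched with SciPy's RBF interpolator. A toy stand-in, assuming a smooth indicator function whose zero level set plays the role of the basin boundary; the stable bases and phase-space partitioning of the paper are not reproduced:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered samples of a smooth "indicator" f over the phase plane; the
# zero level set of the RBF interpolant approximates the region boundary.
# (Toy stand-in: f(x, y) = x^2 + y^2 - 1, whose zero set is the unit circle.)
rng = np.random.default_rng(1)
pts = rng.uniform(-2.0, 2.0, size=(300, 2))
f = pts[:, 0] ** 2 + pts[:, 1] ** 2 - 1.0

interp = RBFInterpolator(pts, f, kernel="thin_plate_spline")
inside = interp([[0.0, 0.0]])[0]   # well inside the region: negative
outside = interp([[1.8, 0.0]])[0]  # outside the region: positive
```

    The sign of the interpolant then classifies phase-space points as inside or outside the region, which is the implicit-surface idea behind the basin reconstruction.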

  2. High Temperature Surface Interactions

    DTIC Science & Technology

    1989-11-01

    and importance. To predict the parabolic scaling rate for a pure metal forming a dense, adherent one-phase scale, Wagner (1) needed only to assume that...coefficient for the acidic Ca(V03) 4 solute. Thus, limited measurements of CeO2 in the 0.7 Na2SO4-0.3NaVO3 solution can be interpolated to predict the...solutions can be predicted , as shown in Fig. 13. Obviously, these assumptions about the solution thermodynamics should receive experimental testing. In any

  3. Effects of Stencil Width on Surface Ocean Geostrophic Velocity and Vorticity Estimation from Gridded Satellite Altimeter Data

    DTIC Science & Technology

    2012-03-17

    Texas at Austin, Austin, Texas, USA. g dq ’Departement de Physique and LPO, Universite de Bretagne V _ /" r5r’ Occidental, Brest ...grid points are used in the calculation, so that the grid spacing is 8 times larger than on the original grid. The 3-point stencil differences are sig...that the difference between narrow and wide stencil estimates increases over that found on the original higher resolution grid. Interpolation of the

  4. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771

  5. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin ,Tmax ]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [ 300 K , 3000 K ] with only 9 reference temperatures.
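
    The L2-optimal coefficients are the solution of the Gram (normal-equation) system described above. A toy sketch using Gaussians of growing variance as stand-ins for Doppler kernels; the temperatures, grid and kernel shape are illustrative, not the actual broadening kernels:

```python
import numpy as np

def l2_coeffs(K_ref, k_target, dx):
    """L2-optimal coefficients a with sum_j a_j * K_ref[j] ~ k_target.

    Solves the Gram system G a = b, where G_ij = <k_i, k_j> and
    b_i = <k_i, k_target>, inner products taken by simple quadrature.
    """
    G = K_ref @ K_ref.T * dx
    b = K_ref @ k_target * dx
    return np.linalg.solve(G, b)

# Toy stand-in: Gaussian kernels whose variance grows with "temperature" T
# (arbitrary units; real Doppler kernels are more involved).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def kernel(T):
    return np.exp(-x**2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)

T_refs = [1.0, 2.0, 3.0, 4.0]
K = np.stack([kernel(T) for T in T_refs])
target = kernel(2.5)                       # kernel at an intermediate T
a = l2_coeffs(K, target, dx)
rel_err = np.max(np.abs(a @ K - target)) / np.max(target)
```

    Even in this crude setting, the linear combination of four reference kernels reconstructs the intermediate-temperature kernel to within a few percent, which is the mechanism the paper then optimises over the choice of reference temperatures.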

  6. Interpolated twitches in fatiguing single mouse muscle fibres: implications for the assessment of central fatigue

    PubMed Central

    Place, Nicolas; Yamada, Takashi; Bruton, Joseph D; Westerblad, Håkan

    2008-01-01

    An electrically evoked twitch during a maximal voluntary contraction (twitch interpolation) is frequently used to assess central fatigue. In this study we used intact single muscle fibres to determine if intramuscular mechanisms could affect the force increase with the twitch interpolation technique. Intact single fibres from flexor digitorum brevis of NMRI mice were dissected and mounted in a chamber equipped with a force transducer. Free myoplasmic [Ca2+] ([Ca2+]i) was measured with the fluorescent Ca2+ indicator indo-1. Seven fibres were fatigued with repeated 70 Hz tetani until 40% initial force with an interpolated pulse evoked every fifth tetanus. Results showed that the force generated by the interpolated twitch increased throughout fatigue, being 9 ± 1% of tetanic force at the start and 19 ± 1% at the end (P < 0.001). This was not due to a larger increase in [Ca2+]i induced by the interpolated twitch during fatigue but rather to the fact that the force–[Ca2+]i relationship is sigmoidal and fibres entered a steeper part of the relationship during fatigue. In another set of experiments, we observed that repeated tetani evoked at 150 Hz resulted in more rapid fatigue development than at 70 Hz and there was a decrease in force (‘sag’) during contractions, which was not observed at 70 Hz. In conclusion, the extent of central fatigue is difficult to assess and it may be overestimated when using the twitch interpolation technique. PMID:18403421

  7. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution Remote Sensing (RS) data provide detailed information which ensures high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, the Super Resolution Reconstruction (SRR) technique, which focuses on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches, namely Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC), were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures, including peak signal-to-noise ratio and structural similarity, are used for the evaluation of image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify both the SRR data and data resampled to 6 m spatial resolution using bicubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA shows that the SR-derived classified data yield a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.

  8. Surface models of the male urogenital organs built from the Visible Korean using popular software

    PubMed Central

    Shin, Dong Sun; Park, Jin Seo; Shin, Byeong-Seok

    2011-01-01

    Unlike volume models, surface models, which are empty three-dimensional images, have a small file size, so they can be displayed, rotated, and modified in real time. Thus, surface models of male urogenital organs can be effectively applied to an interactive computer simulation and contribute to the clinical practice of urologists. To create high-quality surface models, the urogenital organs and other neighboring structures were outlined in 464 sectioned images of the Visible Korean male using Adobe Photoshop; the outlines were interpolated on Discreet Combustion; then an almost automatic volume reconstruction followed by surface reconstruction was performed on 3D-DOCTOR. The surface models were refined and assembled in their proper positions on Maya, and a surface model was coated with actual surface texture acquired from the volume model of the structure on specially programmed software. In total, 95 surface models were prepared, particularly complete models of the urinary and genital tracts. These surface models will be distributed to encourage other investigators to develop various kinds of medical training simulations. Increasingly automated surface reconstruction technology using commercial software will enable other researchers to produce their own surface models more effectively. PMID:21829759

  9. On computational experiments in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The results of mathematical modeling of effective heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of compressible gas are taken into account. Algorithms for control restoration are proposed for the interpolation and approximation statements of the heat and mass transfer inverse problems. The differences between the solution methods applied for these two statements are discussed. Both algorithms have been implemented as programs, and numerous computational experiments were performed with them. The boundary layer parameters obtained by means of A. A. Dorodnicyn's generalized integral relations method from solving the direct problems were used to obtain the inverse problem solutions. Two examples of blowing-law restoration for the inverse problem in the interpolation statement are presented, and the influence of the temperature factor on the restored blowing is investigated. The differing sensitivity of the controlled parameters (the local heat flux and the local tangential friction) to a step (discrete) change of the control (the blowing) and to the position of the switching point is also studied.

  10. Numerical solution of the Hele-Shaw equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, N.

    1987-04-01

    An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.

  11. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  12. Level 4 Global and European Chl-a Daily Analyses for End Users and Data Assimilation in the Frame of the Copernicus-Marine Environment Monitoring Service

    NASA Astrophysics Data System (ADS)

    Saulquin, Bertrand; Gohin, Francis; Garnesson, Philippe; Demaria, Julien; Mangin, Antoine; Fanton d'Andon, Odile

    2016-08-01

    The level-4 daily chl-a products are a combination of a water-type merged chl-a estimate and an optimal interpolation based on the kriging method with regional anisotropic models [1, 2]. The Level 4 products provide a global, continuous (cloud-free) estimate of the surface chl-a concentration at 4 km resolution over the world and 1 km resolution over Europe. The level-4 products gather MODIS, MERIS, SeaWiFS, VIIRS and OLCI daily observations from 1998 to the present. The Level 4 product spares end users from dealing with the typical lack of data observed during cloudy conditions and with the historical multiplicity of available algorithms, such as those arising from the case 1 (oligotrophic) and case 2 (turbid) water issues in ocean colour [3, 4]. A total product uncertainty, i.e. a combination of the interpolation and estimation errors, is provided for each daily product. The L4 products are freely distributed in the frame of the Copernicus Marine Environment Monitoring Service.

  13. Visualizing 3-D microscopic specimens

    NASA Astrophysics Data System (ADS)

    Forsgren, Per-Ola; Majlof, Lars L.

    1992-06-01

    The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared to other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, maximum intensity and other techniques, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three- dimensional imaging devices give recordings where one dimension has different resolution and sampling than the other two which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance. We present results of some ways to simplify the calculations.
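
    The interpolation needed to equalise the coarser axis can be sketched with SciPy. A minimal example, assuming a confocal stack sampled four times more coarsely along z than in-plane (array sizes and the zoom factor are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

# A confocal stack is often sampled more coarsely along z than in x and y.
# Resampling z by interpolation restores isotropic geometry before
# projection or surface rendering.
stack = np.random.default_rng(2).random((8, 64, 64))  # (z, y, x), coarse z
iso = zoom(stack, zoom=(4, 1, 1), order=1)            # linear interpolation along z
```

    Interpolating in object space like this, before any projection, is one of the two options the abstract compares; the alternative is to interpolate in projection space after rendering each view.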

  14. Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.

    PubMed

    Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S

    1996-03-01

    An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGer. The same points of maximum spike and voltage distributions at the 29 electrodes were mapped by computerized spline interpolation and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with similar maxima and distributions as the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.

  15. Assimilation of altimeter data into a quasigeostrophic ocean model using optimal interpolation and eofs

    NASA Astrophysics Data System (ADS)

    Rienecker, M. M.; Adamec, D.

    1995-01-01

    An ensemble of fraternal-twin experiments is used to assess the utility of optimal interpolation and model-based vertical empirical orthogonal functions (eofs) of streamfunction variability to assimilate satellite altimeter data into ocean models. Simulated altimeter data are assimilated into a basin-wide 3-layer quasi-geostrophic model with a horizontal grid spacing of 15 km. The effects of bottom topography are included and the model is forced by a wind stress curl distribution which is constant in time. The simulated data are extracted, along altimeter tracks with spatial and temporal characteristics of Geosat, from a reference model ocean with a slightly different climatology from that generated by the model used for assimilation. The use of vertical eofs determined from the model-generated streamfunction variability is shown to be effective in aiding the model's dynamical extrapolation of the surface information throughout the rest of the water column. After a single repeat cycle (17 days), the analysis errors are reduced markedly from the initial level, by 52% in the surface layer, 41% in the second layer and 11% in the bottom layer. The largest differences between the assimilation analysis and the reference ocean are found in the nonlinear regime of the mid-latitude jet in all layers. After 100 days of assimilation, the error in the upper two layers has been reduced by over 50% and that in the bottom layer by 38%. The essence of the method is that the eofs capture the statistics of the dynamical balances in the model and ensure that this balance is not inappropriately disturbed during the assimilation process. This statistical balance includes any potential vorticity homogeneity which may be associated with the eddy stirring by mid-latitude surface jets.

  16. Modeling the uncertainty of estimating forest carbon stocks in China

    NASA Astrophysics Data System (ADS)

    Yue, T. X.; Wang, Y. F.; Du, Z. P.; Zhao, M. W.; Zhang, L. L.; Zhao, N.; Lu, M.; Larocque, G. R.; Wilson, J. P.

    2015-12-01

    Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both the local and global components. The system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with the required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote sensing can supply spatially continuous information about forest carbon stocks, which is impossible from ground-based investigations, but its estimates carry considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the kriging method for spatial interpolation of ground sample plots and a satellite-observation-based approach, as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. The data fusion approach, which used an existing method for high-accuracy surface modeling to fuse the ground sample plots with the satellite observations (HASM-SOA), had the lowest uncertainty. The estimates produced with HASM-SOA were 26.1 and 28.4 % more accurate than the satellite-based approach and spatial interpolation of the sample plots, respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-SOA method.

  17. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
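    The comparison is easy to reproduce numerically. The sketch below (the degree, nodes, and interval are our own choices, not the article's) pits a degree-4 polynomial interpolating e^x at equally spaced nodes on [0, 1] against the degree-4 Taylor polynomial of e^x about 0.

    ```python
    import math
    import numpy as np

    def taylor_exp(x, n):
        # Degree-n Taylor polynomial of e^x centered at 0
        return sum(x ** k / math.factorial(k) for k in range(n + 1))

    def interp_exp(x, n):
        # Degree-n polynomial interpolating e^x at n+1 equally spaced nodes on [0, 1]
        nodes = np.linspace(0.0, 1.0, n + 1)
        coeffs = np.polyfit(nodes, np.exp(nodes), n)
        return np.polyval(coeffs, x)

    xs = np.linspace(0.0, 1.0, 201)
    err_taylor = np.max(np.abs(np.exp(xs) - taylor_exp(xs, 4)))
    err_interp = np.max(np.abs(np.exp(xs) - interp_exp(xs, 4)))
    ```

    On this interval the interpolant's worst-case error is markedly smaller than the Taylor polynomial's, whose error is concentrated at x = 1, far from the expansion point.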

  18. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
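    The first step of the procedure, distance-weighted interpolation of unevenly spaced stations to a grid, can be sketched as follows (the weighting exponent and function names are our own; the parabolic leapfrog correction itself is not reproduced here):

    ```python
    import numpy as np

    def idw_to_grid(sx, sy, svals, grid_x, grid_y, power=2.0):
        # Inverse-distance weighting of unevenly spaced samples onto a regular grid.
        gx, gy = np.meshgrid(grid_x, grid_y)
        grid = np.empty(gx.shape)
        for i in range(gx.shape[0]):
            for j in range(gx.shape[1]):
                d = np.hypot(sx - gx[i, j], sy - gy[i, j])
                nearest = np.argmin(d)
                if d[nearest] < 1e-12:       # grid node coincides with a station
                    grid[i, j] = svals[nearest]
                else:
                    w = d ** -power
                    grid[i, j] = np.sum(w * svals) / np.sum(w)
        return grid

    # Three unevenly spaced stations interpolated to a 5x5 grid on [0, 1]^2.
    sx = np.array([0.0, 0.9, 0.4])
    sy = np.array([0.0, 0.8, 0.9])
    svals = np.array([1.0, 3.0, 2.0])
    grid = idw_to_grid(sx, sy, svals, np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    ```

    Because IDW is a convex combination of the station values, the initial guess field always stays within the range of the observations, which the subsequent correction pass then refines.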

  19. Novel Digital Signal Processing and Detection Techniques.

    DTIC Science & Technology

    1980-09-01

    Decimation and interpolation [11, 12]. Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University ... on the use of recursive filters for decimation and interpolation. ... A filter structure for realizing low-pass filters is developed [6, 7]. By employing decimation and interpolation, the filter uses only coefficients 0, +1, and ...
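    The report's snippet is fragmentary, but the underlying decimation/interpolation idea is easy to sketch: decimation is low-pass filtering followed by discarding samples, and interpolation is zero-stuffing followed by low-pass filtering. The 3-tap linear-interpolation kernel below is a plain stand-in for the multiplier-free filter structures the report develops, not the report's own design.

    ```python
    import numpy as np

    def decimate2(x, taps):
        # Low-pass filter, then keep every other sample (factor-2 decimation).
        return np.convolve(x, taps, mode="same")[::2]

    def interpolate2(x, taps):
        # Insert a zero between samples, then low-pass filter (factor-2 interpolation).
        up = np.zeros(2 * len(x))
        up[::2] = x
        return 2.0 * np.convolve(up, taps, mode="same")

    # Simple 3-tap linear-interpolation filter (an assumption for illustration).
    taps = np.array([0.25, 0.5, 0.25])

    x = np.sin(2 * np.pi * 0.02 * np.arange(64))
    xi = interpolate2(x, taps)      # 128 samples; originals preserved at even indices
    xd = decimate2(x, taps)         # 32 samples
    ```

    With this particular kernel the interpolator passes the original samples through unchanged and fills each gap with the average of its two neighbors.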

  20. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks occur in many different types, formed over very long periods. Close-range photogrammetry is a technique that is widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which in turn is the main input for a 3D model and offers analysts the possibility of automation. Owing to the irregular and complex structure of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs; objects such as small trees, shrubs and weeds on and around the surface also contribute. These differences and properties are important for the pixel-matching algorithms to operate efficiently and generate an adequate point cloud from the photographs. In this study, the possibility of using temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock, 3 m x 1 m in size, located at the ITU Ayazaga Campus was selected as the study object. Two different methods were used. The first was the production of a choropleth map by interpolation, using the temperature values of control points marked on the object that were also used in the 3D model. The 3D object model was created from terrestrial photographs and 12 control points marked on the object and coordinated. The temperature values of the control points were measured with an infrared thermometer and used as the basic data for creating the choropleth map by interpolation; they range from 32 to 37.2 degrees. In the second method, a 3D object model was produced from terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in the two applications are almost identical in position. Areas of the rock surface whose roughness values are higher than their surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness of the surface increases, the temperature increases.
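    The first method, spreading point temperatures into a continuous map, can be sketched with a small radial-basis-function interpolator. The paper does not name its interpolation scheme; the Gaussian kernel, shape parameter, and the control-point coordinates below are our own assumptions for illustration.

    ```python
    import numpy as np

    def rbf_interpolate(pts, vals, query, eps=1.0):
        # Gaussian RBF interpolation of scattered temperature readings (deg C).
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        coef = np.linalg.solve(np.exp(-(eps * d) ** 2), vals)
        dq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
        return np.exp(-(eps * dq) ** 2) @ coef

    # Hypothetical control points on a 3 m x 1 m surface, temperatures in [32, 37.2].
    pts = np.array([[0.0, 0.0], [1.5, 0.5], [3.0, 1.0], [0.5, 0.9], [2.2, 0.1]])
    temps = np.array([32.0, 34.5, 37.2, 33.1, 36.0])
    recovered = rbf_interpolate(pts, temps, pts)
    ```

    Evaluating the interpolant back at the control points reproduces the measured temperatures, the defining property of an interpolating (rather than smoothing) surface; a dense query grid would yield the choropleth map.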
