NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging interpolation algorithm produces an unbiased prediction as well as the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in a directional Kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, making the technique feasible on almost any computer processor.
A comparison between Kriging and other standard interpolation methods demonstrated more accurate estimates for sparser data files.
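The ordinary Kriging system described above can be sketched compactly: pairwise semivariances are assembled into a linear system whose solution gives the weights and the Kriging variance. A minimal NumPy sketch follows; the spherical variogram and its sill/range parameters are illustrative assumptions, not values from the paper, and none of the grid-structure optimizations the paper proposes are included.

```python
import numpy as np

def ordinary_kriging(xy_known, z_known, xy_query, sill=1.0, rng=10.0, nugget=0.0):
    """Ordinary Kriging of one query point with a spherical variogram."""
    def variogram(h):
        h = np.asarray(h, dtype=float)
        g = np.where(h < rng,
                     nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                     sill)
        return np.where(h == 0.0, 0.0, g)

    n = len(xy_known)
    # Kriging system: pairwise semivariances bordered by a Lagrange-multiplier
    # row/column that enforces weights summing to one (unbiasedness).
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy_known - xy_query, axis=1)
    b = np.append(variogram(d0), 1.0)
    w = np.linalg.solve(A, b)
    estimate = float(w[:n] @ z_known)
    variance = float(w @ b)   # Kriging variance at the query point
    return estimate, variance
```

With a zero nugget, Kriging is an exact interpolator: querying at a data point returns that point's value with zero variance.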
Multiprocessor computer overset grid method and apparatus
Barnette, Daniel W.; Ober, Curtis C.
2003-01-01
A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.
Arc Length Based Grid Distribution For Surface and Volume Grids
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1996-01-01
Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
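The core operation above, placing grid points at specified fractions of a curve's arc length, can be sketched in a few lines. This is an illustrative NumPy version using chord-length approximation and linear interpolation, not the paper's transfinite-interpolation machinery.

```python
import numpy as np

def redistribute_by_arclength(points, s_new):
    """Redistribute points along a polyline so their normalized arc lengths
    match s_new.

    points: (n, dim) ordered samples of the curve; s_new: target normalized
    arc-length fractions in [0, 1] (e.g. the output of a stretching function).
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths
    s = np.concatenate(([0.0], np.cumsum(seg)))
    s /= s[-1]                                           # normalize to [0, 1]
    # Interpolate each coordinate at the requested arc-length fractions.
    return np.column_stack([np.interp(s_new, s, pts[:, k])
                            for k in range(pts.shape[1])])
```

For example, a non-uniformly sampled straight segment becomes uniform when s_new is uniform.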
An objective isobaric/isentropic technique for upper air analysis
NASA Technical Reports Server (NTRS)
Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.
1981-01-01
An objective meteorological analysis technique is presented whereby both horizontal and vertical upper air analyses are performed. The same process is used to interpolate grid-point values from the upper-air station data for grid points both on an isobaric surface and on a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation for a grid-point potential temperature is performed isobarically, whereas wind, mixing-ratio, and pressure height values are interpolated from data that lie on the isentropic surface passing through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective hand-drawn analyses. The objective products of version A generally have fair correspondence with the subjective analyses and with the station data, and depict the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. 
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
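The Monte Carlo step described above, perturbing the kriged inputs with correlated errors scaled by the kriging SDs and pushing each draw through the point model, can be sketched as follows. The PET formula in the usage below is a hypothetical stand-in, not the paper's model, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_interpolation_error(kriged, sds, corr, model, n_sims=100):
    """Monte Carlo propagation of interpolation errors through a point model.

    kriged: dict of interpolated input values at one grid point;
    sds: matching kriging standard deviations; corr: correlation matrix of
    the interpolation errors among the inputs; model: f(dict) -> scalar.
    Returns the mean and standard deviation of the model output.
    """
    names = list(kriged)
    mu = np.array([kriged[k] for k in names])
    sd = np.array([sds[k] for k in names])
    cov = np.outer(sd, sd) * corr            # covariance of the input errors
    draws = rng.multivariate_normal(mu, cov, size=n_sims)
    outs = np.array([model(dict(zip(names, d))) for d in draws])
    return outs.mean(), outs.std()

# Hypothetical PET-like model for illustration only.
pet = lambda v: 0.1 * v["temp"] * (1.0 - v["rh"] / 100.0) + 0.05 * v["wind"]
```

Repeating this at every grid cell yields the maps of output means and CVs the study describes.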
NASA Astrophysics Data System (ADS)
Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao
2018-07-01
This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering the days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than low latitudes. For the China region, the performance of OrK and UnK is relatively better than the planar fit and IDW model for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
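Of the interpolators compared, IDW is the simplest to state: each known value is weighted by an inverse power of its distance to the query point. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting: weights fall off as distance**-power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0.0):
        return float(z_known[np.argmin(d)])   # query coincides with a station
    w = d ** -power
    return float(w @ z_known / w.sum())
```

Its low cost per query is consistent with the computational-efficiency result reported above, at the price of ignoring the spatial correlation structure that Kriging models.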
NASA Astrophysics Data System (ADS)
Hittmeir, Sabine; Philipp, Anne; Seibert, Petra
2017-04-01
In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smooths the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval with linear interpolation to be applied in FLEXPART later between them. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. 
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
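The idea of subgrid supporting points chosen to conserve each interval's integral can be illustrated in 1D. This minimal sketch adds one midpoint per interval and solves for its value from the conservation constraint; it is not the paper's full algorithm (in particular, the non-negativity and monotonicity filters are omitted, and the endpoint choice here is a simple assumption).

```python
import numpy as np

def conservative_linear_remap(avg, dt=1.0):
    """Continuous piecewise-linear series whose integral over each interval
    equals the gridded (interval-averaged) value times dt.

    avg: per-interval averages (e.g. precipitation rate). Returns subgrid
    times and values with one extra midpoint per interval. Shared interval
    endpoints are set to the mean of adjacent averages (giving continuity);
    the midpoint value then follows from the conservation constraint.
    """
    avg = np.asarray(avg, dtype=float)
    n = len(avg)
    f = np.empty(n + 1)                       # endpoint values, shared
    f[1:-1] = 0.5 * (avg[:-1] + avg[1:])
    f[0], f[-1] = avg[0], avg[-1]
    # Midpoint m solves dt * (f_i + 2*m + f_{i+1}) / 4 = avg_i * dt.
    m = 2.0 * avg - 0.5 * (f[:-1] + f[1:])
    t = np.empty(2 * n + 1)
    v = np.empty(2 * n + 1)
    t[0::2] = np.arange(n + 1) * dt
    t[1::2] = (np.arange(n) + 0.5) * dt
    v[0::2] = f
    v[1::2] = m
    return t, v
```

Trapezoidal integration of the returned series over any interval reproduces the original gridded integral exactly, which is the conservation property the abstract requires.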
On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids
NASA Astrophysics Data System (ADS)
Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.
2017-03-01
The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
NASA Astrophysics Data System (ADS)
Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S.
2018-04-01
The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing with forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data in 2017 obtained from KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31 March, 21 June, 23 September, and 24 December. Of the observation data, 80 percent of the points (478) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique and called TBGG (two boundary grid generation) is also described.
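The Hermite cubic interpolation between the bottom and top boundaries can be sketched directly from the standard Hermite basis. This is an illustrative NumPy version assuming prescribed transverse derivative vectors at each boundary point; the blending with side boundaries and the control-function machinery of TBGG are omitted.

```python
import numpy as np

def two_boundary_grid(bottom, top, t_bottom, t_top, nt):
    """Hermite cubic interpolation between two boundary curves.

    bottom, top: (ns, dim) ordered boundary points; t_bottom, t_top: (ns, dim)
    transverse derivative vectors prescribed at each boundary point; nt: number
    of grid lines between the boundaries. Returns an (nt, ns, dim) grid.
    """
    t = np.linspace(0.0, 1.0, nt)[:, None, None]
    h00 = 2*t**3 - 3*t**2 + 1        # standard cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*bottom + h10*t_bottom + h01*top + h11*t_top
```

The first and last grid lines coincide with the bottom and top boundaries by construction, and the derivative vectors control how interior grid lines leave each boundary.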
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.
1991-01-01
In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise controls over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
Power transformations improve interpolation of grids for molecular mechanics interaction energies.
Minh, David D L
2018-02-18
A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retaining the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
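The transform-interpolate-invert procedure can be sketched around a plain trilinear interpolator. In this minimal version the grid is shifted to be non-negative before the power transformation; the shift strategy and the exponent are illustrative assumptions, not the paper's fitted choices.

```python
import numpy as np

def trilinear(cell, frac):
    """Trilinear interpolation at fractional position frac = (fx, fy, fz)
    within a single 2x2x2 cell of grid values."""
    fx, fy, fz = frac
    c00 = cell[0, 0, 0] * (1 - fx) + cell[1, 0, 0] * fx
    c10 = cell[0, 1, 0] * (1 - fx) + cell[1, 1, 0] * fx
    c01 = cell[0, 0, 1] * (1 - fx) + cell[1, 0, 1] * fx
    c11 = cell[0, 1, 1] * (1 - fx) + cell[1, 1, 1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def smoothed_interpolate(cell, frac, p=4.0):
    """Power-transform smoothing: transform the shifted-positive energies by
    u = e**(1/p), interpolate trilinearly, then invert the transformation."""
    shift = cell.min()                       # make the energies non-negative
    u = (cell - shift) ** (1.0 / p)
    return trilinear(u, frac) ** p + shift
```

Because the transformation and its inverse cancel at the corners, values at grid points are reproduced exactly; only the behavior between grid points is smoothed.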
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
A projection method for coupling two-phase VOF and fluid structure interaction simulations
NASA Astrophysics Data System (ADS)
Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro
2018-02-01
The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.
Spatial interpolation of monthly mean air temperature data for Latvia
NASA Astrophysics Data System (ADS)
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. The surface observation station network in Latvia currently consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values across Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, with candidate parameters including 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the largest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1977-01-01
The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
On the Quality of Velocity Interpolation Schemes for Marker-In-Cell Methods on 3-D Staggered Grids
NASA Astrophysics Data System (ADS)
Kaus, B.; Pusok, A. E.; Popov, A.
2015-12-01
The marker-in-cell method is generally considered to be a flexible and robust method to model advection of heterogeneous non-diffusive properties (i.e. rock type or composition) in geodynamic problems or incompressible Stokes problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an immobile, Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without preserving the zero divergence of the velocity field at the interpolated locations (i.e. non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Jenny et al., 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. Solutions to this problem include using larger mesh resolutions and/or marker densities, or repeatedly controlling the marker distribution (i.e. inject/delete), though the latter lacks an established physical basis. To remedy this at low computational costs, Jenny et al. (2001) and Meyer and Jenny (2004) proposed a simple, conservative velocity interpolation (CVI) scheme for 2-D staggered grids, while Wang et al. (2015) extended the formulation to 3-D finite element methods. Here, we follow up on these studies and report on the quality of velocity interpolation methods for 2-D and 3-D staggered grids. We adapt the formulations from both Jenny et al. (2001) and Wang et al. (2015) for use on 3-D staggered grids, where the velocity components have different node locations, as opposed to finite elements, where they share the same node location. We test the different interpolation schemes (CVI and non-CVI) in combination with different advection schemes (Euler, RK2 and RK4) and with/without marker control on Stokes problems with strong velocity gradients, which are discretized using a finite difference method. 
We show that a conservative formulation reduces the dispersion or clustering of markers and that the density of markers remains steady over time without the need for additional marker control. References: Jenny et al. (2001), J. Comput. Phys., 166, 218-252; Meyer and Jenny (2004), Proc. Appl. Math. Mech., 4, 466-467; Wang et al. (2015), Geochem. Geophys. Geosyst., 16(6). Funding was provided by the ERC Starting Grant #258830.
A Comparative Study of Interferometric Regridding Algorithms
NASA Technical Reports Server (NTRS)
Hensley, Scott; Safaeinili, Ali
1999-01-01
The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor: fast and easy, but shows some artifacts in shaded-relief images. (b) Simplicial interpolator: uses the plane going through the three points containing the point where interpolation is required; reasonably fast and accurate. (c) Convolutional: uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting: uses the height data centered in a box about a given point and does a weighted least-squares surface fit.
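The simplicial interpolator in option (b), evaluating the plane through three data points at the query location, is equivalent to barycentric interpolation over a triangle. A minimal sketch (illustrative; the paper's implementation details are not given in the abstract):

```python
import numpy as np

def simplicial_interpolate(tri_xy, tri_z, p):
    """Interpolate at point p using the plane through the three triangle
    vertices, via barycentric coordinates."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri_xy)
    T = np.column_stack([b - a, c - a])           # 2x2 barycentric transform
    lam_bc = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    lam = np.array([1.0 - lam_bc.sum(), *lam_bc])  # weights sum to one
    return float(lam @ np.asarray(tri_z, dtype=float))
```

The enclosing triangle would typically come from a Delaunay triangulation of the scattered samples; that search step is omitted here.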
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
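The key quantity above, the Gaussian process posterior variance that captures how interpolation uncertainty varies with distance from the base grid, can be sketched in 1D. This is a generic GP regression sketch with a squared-exponential kernel, not the paper's registration pipeline; kernel hyperparameters are illustrative.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query, length=1.0, sigma=1.0, noise=1e-8):
    """Gaussian-process interpolation with a squared-exponential kernel.
    Returns the posterior mean and variance at each query point."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sigma**2 * np.exp(-0.5 * d2 / length**2)

    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_query, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    # Posterior variance: prior variance minus the part explained by the data.
    var = sigma**2 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var
```

The variance is near zero at grid points and grows between and beyond them, which is exactly the spatially varying interpolation uncertainty the registration model marginalizes over.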
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point cloud based and raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
Identification of reliable gridded reference data for statistical downscaling methods in Alberta
NASA Astrophysics Data System (ADS)
Eum, H. I.; Gupta, A.
2017-12-01
Climate models provide essential information to assess impacts of climate change at regional and global scales. However, statistical downscaling is typically required to prepare climate model data for applications such as hydrologic and ecologic modelling at a watershed scale. As the reliability and (spatial and temporal) resolution of statistically downscaled climate data mainly depend on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables, which are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e. thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis), and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparing with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation, minimum and maximum temperature over the province of Alberta. Based on the performance of climate products at AHCCD stations, we ranked the reliability of these publicly available climate products corresponding to the elevations of stations discretized into several classes. According to the rank of climate products for each elevation class, we identified the most reliable climate products based on the elevation of target points.
A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of the grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections can be significantly improved.
NASA Technical Reports Server (NTRS)
Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. 
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.
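The transfinite interpolation at the heart of GRID2D/3D's algebraic grid generation can be sketched compactly. This is a minimal 2-D illustration of the boolean-sum construction, assuming consistent corner points; the stretching functions and boundary-orthogonality control described in the abstract are omitted.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Transfinite interpolation: fill a structured 2-D grid from its
    four boundary curves (each an (n,2) or (m,2) array of x,y points).
    """
    n, m = len(bottom), len(left)
    s = np.linspace(0.0, 1.0, n)[:, None]   # parameter along bottom/top
    t = np.linspace(0.0, 1.0, m)[None, :]   # parameter along left/right
    grid = np.zeros((n, m, 2))
    for k in range(2):
        B, T = bottom[:, k][:, None], top[:, k][:, None]
        L, R = left[:, k][None, :], right[:, k][None, :]
        # Boolean sum: two linear blends minus the bilinear corner term
        # that would otherwise be counted twice.
        corners = ((1 - s) * (1 - t) * bottom[0, k]
                   + s * (1 - t) * bottom[-1, k]
                   + (1 - s) * t * top[0, k]
                   + s * t * top[-1, k])
        grid[..., k] = (1 - t) * B + t * T + (1 - s) * L + s * R - corners
    return grid

# Unit-square boundaries: TFI must reproduce a uniform Cartesian grid.
u = np.linspace(0.0, 1.0, 5)
bottom = np.stack([u, np.zeros(5)], axis=1)
top    = np.stack([u, np.ones(5)], axis=1)
left   = np.stack([np.zeros(5), u], axis=1)
right  = np.stack([np.ones(5), u], axis=1)
g = tfi_grid(bottom, top, left, right)
```

With curved boundaries the same formula produces a boundary-conforming interior, which is why the abstract notes that algebraic grids can fold (overlap) and need visual inspection.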
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. 
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
TIGGERC: Turbomachinery Interactive Grid Generator for 2-D Grid Applications and Users Guide
NASA Technical Reports Server (NTRS)
Miller, David P.
1994-01-01
A two-dimensional multi-block grid generator has been developed for a new design and analysis system for studying multiple blade-row turbomachinery problems. TIGGERC is a mouse driven, interactive grid generation program which can be used to modify boundary coordinates and grid packing and generates surface grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries. The interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC can generate a blocked axisymmetric H-grid, C-grid, I-grid or O-grid for studying turbomachinery flow problems. TIGGERC was developed for operation on Silicon Graphics workstations. Detailed discussion of the grid generation methodology, menu options, operational features and sample grid geometries are presented.
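The hyperbolic tangent distribution of boundary grid points mentioned above can be illustrated with a short sketch. The parameterization below is a standard one-sided tanh stretching, not necessarily TIGGERC's exact input convention, and the `beta` parameter name is illustrative.

```python
import math

def tanh_distribution(n, beta=2.0):
    """One-sided hyperbolic-tangent clustering of n points on [0, 1].

    Larger beta packs points more tightly toward x = 0, as used to
    resolve boundary layers on blade surfaces.
    """
    pts = []
    for i in range(n):
        eta = i / (n - 1)                      # uniform parameter
        pts.append(1.0 + math.tanh(beta * (eta - 1.0)) / math.tanh(beta))
    return pts

x = tanh_distribution(11, beta=3.0)
# The first spacing is much smaller than the last: points cluster near 0.
```

Interior block points would then be filled from such boundary distributions by transfinite interpolation, as the abstract describes.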
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as for Boyer's method. An FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
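The fractional-shift step described above can be sketched in one dimension: the integer part of a seed's offset is a trivial index shift (the unit-impulse convolution), and only the sub-sample remainder needs the third-order Lagrange filter. This is an illustrative sketch, not the authors' clinical code.

```python
import numpy as np

def lagrange3_fractional_shift(signal, frac):
    """Shift a uniformly sampled signal by a fractional sample offset
    using a 4-tap third-order Lagrange interpolation filter.

    Evaluates x(n + frac), 0 <= frac < 1, from samples at
    n-1, n, n+1, n+2 (Lagrange basis on offsets -1, 0, 1, 2).
    """
    d = frac
    h = np.array([
        -d * (d - 1) * (d - 2) / 6.0,          # tap at n-1
        (d + 1) * (d - 1) * (d - 2) / 2.0,     # tap at n
        -(d + 1) * d * (d - 2) / 2.0,          # tap at n+1
        (d + 1) * d * (d - 1) / 6.0,           # tap at n+2
    ])
    padded = np.pad(signal, 2, mode="edge")
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        out[n] = np.dot(h, padded[n + 1:n + 5])
    return out

ramp = np.arange(6, dtype=float)
shifted = lagrange3_fractional_shift(ramp, 0.5)
# Interior samples land exactly halfway between the originals.
```

In the paper the same idea is applied per axis to the 3-D dose kernel, with the filter realized in the Fourier domain alongside the integer shift.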
TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION
NASA Technical Reports Server (NTRS)
Smith, R. E.
1994-01-01
TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
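The per-triangle step described above (after point location has found the containing triangle) is linear interpolation via barycentric weights. A minimal sketch, with the unit-conversion and table-mixing features omitted:

```python
def barycentric_interpolate(tri, values, p):
    """Linear interpolation inside one triangle via barycentric weights.

    tri:    three (x, y) vertices of the containing triangle
    values: the tabulated n-tuples at those three vertices
    p:      the (x, y) evaluation point
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2                       # weights sum to one
    return tuple(w1 * a + w2 * b + w3 * c
                 for a, b, c in zip(*values))

# Linear interpolation reproduces f(x, y) = x + 2y exactly:
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [(0.0,), (1.0,), (2.0,)]
(f,) = barycentric_interpolate(tri, vals, (0.25, 0.25))
```

A negative weight would signal that the query point lies outside the triangle, which is how point-location searches typically walk toward the containing triangle.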
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
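The two-step structure above can be sketched on synthetic data: a logistic model for wet/dry occurrence, then an amount model fit on wet days only. This is a simplified illustration of that structure, not the paper's locally weighted implementation, and the covariate (elevation) and all parameter names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_two_step(X, precip, occ_iters=2000, lr=0.1):
    """Step 1: logistic regression for wet/dry occurrence (gradient descent).
    Step 2: ordinary least squares for the amount, fit on wet days only."""
    wet = (precip > 0).astype(float)
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(occ_iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (wet - p) / len(X)
    mask = precip > 0
    beta, *_ = np.linalg.lstsq(Xb[mask], precip[mask], rcond=None)
    return w, beta

def predict_two_step(X, w, beta, threshold=0.5):
    Xb = np.column_stack([np.ones(len(X)), X])
    p_wet = 1.0 / (1.0 + np.exp(-Xb @ w))
    amount = Xb @ beta
    # Zero on predicted-dry days; clip negative amounts to zero.
    return np.where(p_wet > threshold, np.maximum(amount, 0.0), 0.0)

# Synthetic example: elevation drives both occurrence and amount.
elev = rng.uniform(0, 1, 500)
is_wet = rng.random(500) < 0.2 + 0.6 * elev
precip = np.where(is_wet, 1.0 + 5.0 * elev + rng.normal(0, 0.3, 500), 0.0)
w, beta = fit_two_step(elev[:, None], precip)
est = predict_two_step(elev[:, None], w, beta)
```

Separating occurrence from amount is what lets the scheme reproduce the intermittency (many exact zeros) that a single regression on all days smears out.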
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard
2014-06-01
The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: the Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole area of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were tested for being local or global, for the possibility to compute errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternately, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. 
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
VizieR Online Data Catalog: 3D correction in 5 photometric systems (Bonifacio+, 2018)
NASA Astrophysics Data System (ADS)
Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kucinskas, A.; Prakapavicius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.
2018-01-01
We have used the CIFIST grid of CO5BOLD models to investigate the effects of granulation on fluxes and colours of stars of spectral type F, G, and K. We publish tables with 3D corrections that can be applied to colours computed from any 1D model atmosphere. For Teff>=5000K, the corrections are smooth enough, as a function of atmospheric parameters, that it is possible to interpolate the corrections between grid points; thus the coarseness of the CIFIST grid should not be a major limitation. However at the cool end there are still far too few models to allow a reliable interpolation. (20 data files).
The Atmospheric Data Acquisition And Interpolation Process For Center-TRACON Automation System
NASA Technical Reports Server (NTRS)
Jardin, M. R.; Erzberger, H.; Denery, Dallas G. (Technical Monitor)
1995-01-01
The Center-TRACON Automation System (CTAS), an advanced new air traffic automation program, requires knowledge of spatial and temporal atmospheric conditions such as the wind speed and direction, the temperature and the pressure in order to accurately predict aircraft trajectories. Real-time atmospheric data is available in a grid format so that CTAS must interpolate between the grid points to estimate the atmospheric parameter values. The atmospheric data grid is generally not in the same coordinate system as that used by CTAS so that coordinate conversions are required. Both the interpolation and coordinate conversion processes can introduce errors into the atmospheric data and reduce interpolation accuracy. More accurate algorithms may be computationally expensive or may require a prohibitively large amount of data storage capacity so that trade-offs must be made between accuracy and the available computational and data storage resources. The atmospheric data acquisition and processing employed by CTAS will be outlined in this report. The effects of atmospheric data processing on CTAS trajectory prediction will also be analyzed, and several examples of the trajectory prediction process will be given.
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally.
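The interpolation of Gaussian PRF coefficients at unmeasured voxels can be sketched as a distance-weighted average of neighboring voxels' coefficients. The inverse-distance weighting below is a simplification of the paper's DW-GIMGPE weighting (which involves projected voxel centroids on the detector plane); the coefficient layout (amplitude, centroid, sigma) and all values are illustrative.

```python
import numpy as np

def idw_gaussian_params(meas_pos, meas_params, query_pos, power=2.0):
    """Distance-weighted interpolation of Gaussian PRF coefficients.

    meas_pos:    (N, 2) positions of voxels with measured PRFs
    meas_params: (N, 4) Gaussian coefficients (amp, cx, cy, sigma)
    query_pos:   position of the voxel whose PRF is missing
    """
    d = np.linalg.norm(meas_pos - query_pos, axis=1)
    if np.any(d < 1e-12):              # exact hit: return measured PRF
        return meas_params[np.argmin(d)]
    w = 1.0 / d ** power
    w /= w.sum()                       # normalized weights
    return w @ meas_params

# Coefficients measured on a coarse 2-mm voxel grid (made-up numbers):
pos = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
par = np.array([[1.0, 10.0, 10.0, 1.0],
                [1.2, 14.0, 10.0, 1.1],
                [0.9, 10.0, 14.0, 1.0],
                [1.1, 14.0, 14.0, 1.2]])
p = idw_gaussian_params(pos, par, np.array([1.0, 1.0]))
# Equidistant from all four neighbors: p is the simple row average.
```

A full H matrix would then be assembled by evaluating the interpolated 2-D Gaussians on the detector grid for every voxel.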
Structured background grids for generation of unstructured grids by advancing front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1991-01-01
A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
MAG3D and its application to internal flowfield analysis
NASA Technical Reports Server (NTRS)
Lee, K. D.; Henderson, T. L.; Choo, Y. K.
1992-01-01
MAG3D (multiblock adaptive grid, 3D) is a 3D solution-adaptive grid generation code which redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. The code is applicable to structured grids with a multiblock topology. It is independent of the original grid generator and the flow solver. The code uses the coordinates of an initial grid and the flow solution interpolated onto the new grid. MAG3D uses a numerical mapping and potential theory to modify the grid distribution based on properties of the flow solution on the initial grid. The adaptation technique is discussed, and the capability of MAG3D is demonstrated with several internal flow examples. Advantages of using solution-adaptive grids are also shown by comparing flow solutions on adaptive grids with those on initial grids.
Personal computer (PC) based image processing applied to fluid mechanics research
NASA Technical Reports Server (NTRS)
Cho, Y.-C.; Mclachlan, B. G.
1987-01-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
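The Gaussian-window convolution used to move scattered streak velocities onto a uniform grid amounts to a normalized Gaussian-weighted average at each grid node. A minimal sketch with a fixed window width; a truly adaptive window, as in the abstract, would vary sigma with the local seeding density.

```python
import numpy as np

def gaussian_window_gridding(pts, vels, grid_x, grid_y, sigma):
    """Interpolate scattered 2-D velocity vectors onto a regular grid
    by convolution with a Gaussian window of width sigma."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    out = np.zeros(gx.shape + (2,))
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d2 = (pts[:, 0] - gx[i, j]) ** 2 + (pts[:, 1] - gy[i, j]) ** 2
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            out[i, j] = w @ vels / w.sum()   # normalized weighted mean
    return out

# Sanity check: a uniform flow must be reproduced exactly at every node.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (200, 2))
vels = np.tile([1.0, 0.5], (200, 1))
u = gaussian_window_gridding(pts, vels, np.linspace(0, 1, 5),
                             np.linspace(0, 1, 5), sigma=0.2)
```

The normalization by `w.sum()` is what makes the scheme exact for constant fields regardless of how the particles happen to be scattered.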
Personal Computer (PC) based image processing applied to fluid mechanics
NASA Technical Reports Server (NTRS)
Cho, Y.-C.; Mclachlan, B. G.
1987-01-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
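The Steffen spline named above has a compact closed form (Steffen 1990): secant slopes are limited at each node so the cubic Hermite segments cannot overshoot. A Python re-sketch follows (the paper's implementation is C++); for ELF data the log-log transform would be applied to (x, y) before calling this and inverted afterward. The end-slope choice here is a simple one-sided secant, which is an assumption.

```python
import numpy as np

def steffen_spline(x, y, xq):
    """Steffen's monotonicity-preserving cubic spline.

    C1 piecewise cubic through all data points with no overshoot
    between adjacent points: local extrema occur only at the data.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    s = np.diff(y) / h                      # secant slopes
    d = np.zeros_like(x)                    # limited nodal derivatives
    for i in range(1, len(x) - 1):
        p = (s[i - 1] * h[i] + s[i] * h[i - 1]) / (h[i - 1] + h[i])
        d[i] = (np.sign(s[i - 1]) + np.sign(s[i])) * min(
            abs(s[i - 1]), abs(s[i]), 0.5 * abs(p))
    d[0], d[-1] = s[0], s[-1]               # simple one-sided end slopes
    # Evaluate the cubic Hermite segments at the query points.
    idx = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[idx]) / h[idx]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * y[idx] + h10 * h[idx] * d[idx]
            + h01 * y[idx + 1] + h11 * h[idx] * d[idx + 1])

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.5, 2.0, 2.1])          # monotone but uneven data
yq = steffen_spline(x, y, np.array([0.5, 1.5, 2.5]))
```

The slope limiter is what suppresses the spurious oscillations that an unconstrained cubic spline would produce near the discontinuities and outliers typical of sampled ELFs.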
Gridless, pattern-driven point cloud completion and extension
NASA Astrophysics Data System (ADS)
Gravey, Mathieu; Mariethoz, Gregoire
2016-04-01
While satellites offer Earth observation with a wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data on an area that is limited in extension and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data on a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data is available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling, and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need to use a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and the coverage is extended to a larger area (data extension).
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
Elliptic surface grid generation on minimal and parametrized surfaces
NASA Technical Reports Server (NTRS)
Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.
1995-01-01
An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
A new solution-adaptive grid generation method for transonic airfoil flow calculations
NASA Technical Reports Server (NTRS)
Nakamura, S.; Holst, T. L.
1981-01-01
The clustering algorithm is controlled by a second-order, ordinary differential equation which uses the airfoil surface density gradient as a forcing function. The solution to this differential equation produces a surface grid distribution which is automatically clustered in regions with large gradients. The interior grid points are established from this surface distribution by using an interpolation scheme which is fast and retains the desirable properties of the original grid generated from the standard elliptic equation approach.
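The second-order ODE clustering described above can be sketched with a one-dimensional finite-difference solve: a forcing function (standing in for the airfoil surface density gradient) perturbs an otherwise uniform point distribution between fixed endpoints. The Gaussian forcing below is invented for illustration.

```python
import numpy as np

def ode_clustered_points(force, n):
    """Solve x''(xi) = -force(xi), x(0) = 0, x(1) = 1, by central
    differences on a uniform parameter xi; returns the n grid points.

    With zero forcing the distribution is uniform; nonzero forcing
    redistributes the spacing while keeping the endpoints fixed.
    """
    xi = np.linspace(0.0, 1.0, n)
    h = xi[1] - xi[0]
    # Tridiagonal system for the interior unknowns x[1..n-2].
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    rhs = -h * h * force(xi[1:-1])
    rhs[-1] -= 1.0                      # fold in boundary value x(1) = 1
    x = np.empty(n)
    x[0], x[-1] = 0.0, 1.0
    x[1:-1] = np.linalg.solve(A, rhs)
    return x

# Zero forcing recovers uniform spacing; a forcing bump contracts the
# spacing downstream of it while the points stay monotone.
xi_uniform = ode_clustered_points(lambda t: np.zeros_like(t), 9)
clustered = ode_clustered_points(
    lambda t: 5.0 * np.exp(-((t - 0.5) / 0.1) ** 2), 41)
```

In the paper the surface distribution produced this way then seeds a fast interpolation scheme for the interior grid points.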
Gridding Cloud and Irradiance to Quantify Variability at the ARM Southern Great Plains Site
NASA Astrophysics Data System (ADS)
Riihimaki, L.; Long, C. N.; Gaustad, K.
2017-12-01
Ground-based radiometers provide the most accurate measurements of surface irradiance. However, geometry differences between surface point measurements and large area climate model grid boxes or satellite-based footprints can cause systematic differences in surface irradiance comparisons. In this work, irradiance measurements from a network of ground stations around Kansas and Oklahoma at the US Department of Energy Atmospheric Radiation Measurement (ARM) Southern Great Plains facility are examined. Upwelling and downwelling broadband shortwave and longwave radiometer measurements are available at each site as well as surface meteorological measurements. In addition to the measured irradiances, clear sky irradiance and cloud fraction estimates are analyzed using well established methods based on empirical fits to measured clear sky irradiances. Measurements are interpolated onto a 0.25 degree latitude and longitude grid using a Gaussian weight scheme in order to provide a more accurate statistical comparison between ground measurements and a larger area such as that used in climate models, plane parallel radiative transfer calculations, and other statistical and climatological research. Validation of the gridded product will be shown, as well as analysis that quantifies the impact of site location, cloud type, and other factors on the resulting surface irradiance estimates. The results of this work are being incorporated into the Surface Cloud Grid operational data product produced by ARM, and will be made publicly available for use by others.
The Interpolation Theory of Radial Basis Functions
NASA Astrophysics Data System (ADS)
Baxter, Brad
2010-06-01
In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
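The p-norm interpolation described in the opening of this abstract can be sketched directly: build the matrix of pairwise p-norm distances and solve for the coefficients. This naive dense solve is only an illustration (the dissertation's contribution is the Toeplitz-preconditioned iterative solver); the function name and the choice p = 1.5 are assumptions.

```python
import numpy as np

def pnorm_rbf_interpolate(points, values, query, p=1.5):
    """RBF interpolation with basis phi(x, y) = ||x - y||_p.
    For 1 < p < 2 and distinct points the interpolation matrix
    is nonsingular, so the linear solve below is well posed."""
    pts = np.asarray(points, float)
    # interpolation matrix of pairwise p-norm distances
    A = np.linalg.norm(pts[:, None, :] - pts[None, :, :], ord=p, axis=2)
    coeff = np.linalg.solve(A, np.asarray(values, float))
    # evaluation matrix: distances from query points to data points
    B = np.linalg.norm(np.asarray(query, float)[:, None, :] - pts[None, :, :],
                       ord=p, axis=2)
    return B @ coeff
```

Evaluating at the data points reproduces the data exactly, since the evaluation matrix then coincides with the interpolation matrix.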
Application of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.
1990-01-01
A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
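A minimal sketch of transfinite interpolation with linear Lagrange blending functions, the lowest-order case of the approach described above (the paper additionally uses higher-order Lagrangian blending and a monotonic rational quadratic spline for the distributions; the function name and array layout are assumptions):

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Transfinite interpolation with linear Lagrange blending.
    bottom/top: (ni, 2) boundary curves; left/right: (nj, 2).
    The four corner points of the curves must match."""
    bottom, top = np.asarray(bottom, float), np.asarray(top, float)
    left, right = np.asarray(left, float), np.asarray(right, float)
    ni, nj = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]
    # blend opposite boundary curves, then subtract the doubly counted corners
    edges = ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
             + (1 - xi) * left[None, :, :] + xi * right[None, :, :])
    corners = ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1])
    return edges - corners        # (ni, nj, 2) grid of interior points
```

For a unit square with uniformly parametrized edges this reduces to the uniform Cartesian grid, which makes a convenient sanity check.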
Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.
Mei, Gang; Xu, Nengxiong; Xu, Liangliang
2016-01-01
This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on the modern Graphics Processing Unit (GPU). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the uniform (even) grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of two stages: kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
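The two stages of AIDW can be sketched in serial form. This is a simplified CPU illustration, not the paper's GPU kernel: the kNN search here is brute force (the paper replaces it with a uniform-grid search), and the density-to-power mapping is a stand-in assumption rather than the paper's exact formula.

```python
import numpy as np

def aidw(data_xy, data_z, query_xy, k=8, powers=(1.0, 3.5)):
    """Adaptive IDW sketch: find k nearest data points per query point,
    choose a power parameter from the local neighbourhood spread, then
    predict with an inverse-distance-weighted mean."""
    data_xy = np.asarray(data_xy, float)
    data_z = np.asarray(data_z, float)
    out = np.empty(len(query_xy))
    for i, q in enumerate(np.asarray(query_xy, float)):
        d = np.hypot(data_xy[:, 0] - q[0], data_xy[:, 1] - q[1])
        idx = np.argpartition(d, k)[:k]          # brute-force kNN
        dk, zk = d[idx], data_z[idx]
        if dk.min() < 1e-12:                     # query coincides with a sample
            out[i] = zk[dk.argmin()]
            continue
        # sparser neighbourhood -> larger power (more local weighting);
        # this mapping is an illustrative assumption
        t = np.clip(dk.mean() / (dk.mean() + 1.0), 0.0, 1.0)
        alpha = powers[0] + t * (powers[1] - powers[0])
        w = dk ** -alpha
        out[i] = np.sum(w * zk) / np.sum(w)
    return out
```

Because the prediction is a convex combination of neighbour values, it always stays within the range of the sampled data.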
Applications of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.
1990-01-01
A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.
NASA Astrophysics Data System (ADS)
Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.
2014-12-01
Precipitation data in semi-arid and mountainous regions are often spatially and temporally sparse, yet precipitation is a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data, but in the case of Mongolia, may not be constructed from all data available from government data sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over a 10-year period from 2001-2010, with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources; and with point data from the nearest meteorological station to the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe Efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed with smaller differences between the two gridded products as compared to the differences between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. 
Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert-steppe, and in spring in the forest steppe. Choice of precipitation data source in terms of gridded or point-based data directly affects model outcomes with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.
Structural-Thermal-Optical Program (STOP)
NASA Technical Reports Server (NTRS)
Lee, H. P.
1972-01-01
A structural thermal optical computer program is developed which uses a finite element approach and applies the Ritz method for solving heat transfer problems. Temperatures are represented at the vertices of each element and the displacements which yield deformations at any point of the heated surface are interpolated through grid points.
A general method for generating bathymetric data for hydrodynamic computer models
Burau, J.R.; Cheng, R.T.
1989-01-01
To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric data base using the linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
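The linear shape-function interpolation mentioned above amounts to barycentric interpolation within the triangle that bounds the query location. A minimal sketch (the function name and argument layout are assumptions; the report's programs also handle the search for the bounding element):

```python
def depth_at(p, tri_xy, tri_z):
    """Linear (barycentric) shape-function interpolation of depth at
    point p inside a triangle with vertex coordinates tri_xy and
    vertex depths tri_z, as in finite-element bathymetry lookup."""
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    # barycentric coordinates = linear shape functions N1, N2, N3
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * tri_z[0] + l2 * tri_z[1] + l3 * tri_z[2]
```

Linear shape functions reproduce any planar depth field exactly, which is the usual correctness check for this kind of interpolation.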
An integral conservative gridding algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
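The core idea, interpolating the integrated data with a shape-preserving Hermite curve and differencing it on the new bin edges, can be sketched as follows. This uses SciPy's PCHIP interpolant as a stand-in for the paper's parametrized Hermitian curve (whose overshoot is user-controlled), so it is an illustration of the principle, not the published algorithm.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_rebin(edges_in, counts, edges_out):
    """Integral-conserving re-binning sketch: fit a monotone Hermite
    curve to the cumulative integral of the histogram, then difference
    it on the requested output edges."""
    cum = np.concatenate([[0.0], np.cumsum(counts)])   # cumulative integral
    F = PchipInterpolator(edges_in, cum)               # shape-preserving fit
    return np.diff(F(edges_out))
```

Because the curve interpolates the cumulative sums exactly, the total integral is conserved when the output edges span the input range, and monotonicity of the cumulative fit keeps the re-binned values non-negative for positive input data.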
HOMPRA Europe - A gridded precipitation data set from European homogenized time series
NASA Astrophysics Data System (ADS)
Rustemeier, Elke; Kapala, Alice; Meyer-Christoffer, Anja; Finger, Peter; Schneider, Udo; Venema, Victor; Ziese, Markus; Simmer, Clemens; Becker, Andreas
2017-04-01
Reliable monitoring data are essential for robust analyses of climate variability and, in particular, long-term trends. In this regard, a gridded, homogenized data set of monthly precipitation totals - HOMPRA Europe (HOMogenized PRecipitation Analysis of European in-situ data) - is presented. The data base consists of 5373 homogenized monthly time series, a carefully selected subset of the data held by the Global Precipitation Climatology Centre (GPCC). The chosen series cover the period 1951-2005 and contain less than 10% missing values. Due to the large number of series, an automatic algorithm had to be developed for the homogenization of these precipitation series. In principle, the algorithm is based on three steps: * Selection of overlapping station networks in the same precipitation regime, based on rank correlation and Ward's method of minimal variance. Since the underlying time series should be as homogeneous as possible, the station selection is carried out by deterministic first derivation in order to reduce artificial influences. * The natural variability and trends were temporally removed by means of highly correlated neighboring time series to detect artificial break-points in the annual totals. This ensures that only artificial changes can be detected. The method is based on the algorithm of Caussinus and Mestre (2004). * In the last step, the detected breaks are corrected monthly by means of a multiple linear regression (Mestre, 2003). Due to the automation of the homogenization, the validation of the algorithm is essential. Therefore, the method was tested on artificial data sets. Additionally, the sensitivity of the method was tested by varying the neighborhood series. If available in digitized form, the station history was also used to search for systematic errors in the jump detection. 
Finally, the actual HOMPRA Europe product is produced by interpolation of the homogenized series onto a 1° grid using one of the interpolation schemes used operationally at GPCC (Becker et al., 2013; Schamm et al., 2014).
References:
Caussinus, H., and O. Mestre, 2004: Detection and correction of artificial shifts in climate series. Journal of the Royal Statistical Society, Series C (Applied Statistics), 53(3), 405-425.
Mestre, O., 2003: Correcting climate series using ANOVA technique. Proceedings of the fourth seminar.
Willmott, C.; Rowe, C. & Philpot, W., 1985: Small-scale climate maps: A sensitivity analysis of some common assumptions associated with grid-point interpolation and contouring. The American Cartographer, 12, 5-16.
Becker, A.; Finger, P.; Meyer-Christoffer, A.; Rudolf, B.; Schamm, K.; Schneider, U. & Ziese, M., 2013: A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901-present. Earth System Science Data, 5, 71-99.
Schamm, K.; Ziese, M.; Becker, A.; Finger, P.; Meyer-Christoffer, A.; Schneider, U.; Schröder, M. & Stender, P., 2014: Global gridded precipitation over land: a description of the new GPCC First Guess Daily product. Earth System Science Data, 6, 49-60.
Digital terrain tapes: user guide
1980-01-01
DMATC's digital terrain tapes are a by-product of the agency's efforts to streamline the production of raised-relief maps. In the early 1960's DMATC developed the Digital Graphics Recorder (DGR) system that introduced new digitizing techniques and processing methods into the field of three-dimensional mapping. The DGR system consisted of an automatic digitizing table and a computer system that recorded a grid of terrain elevations from traces of the contour lines on standard topographic maps. A sequence of computer accuracy checks was performed and then the elevations of grid points not intersected by contour lines were interpolated. The DGR system produced computer magnetic tapes which controlled the carving of plaster forms used to mold raised-relief maps. It was realized almost immediately that this relatively simple tool for carving plaster molds had enormous potential for storing, manipulating, and selectively displaying (either graphically or numerically) a vast number of terrain elevations. As the demand for the digital terrain tapes increased, DMATC began developing increasingly advanced digitizing systems and now operates the Digital Topographic Data Collection System (DTDCS). With DTDCS, two types of data (elevations as contour lines and points, and stream and ridge lines) are sorted, matched, and resorted to obtain a grid of elevation values for every 0.01 inch on each map (approximately 200 feet on the ground). Undefined points on the grid are found by either linear or planar interpolation.
A fast dynamic grid adaption scheme for meteorological flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, B.H.; Trapp, R.J.
1993-10-01
The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 as compared with those of a companion model with a fixed, uniform Cartesian grid.
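The coarse-grid strategy above, solving for adapted gridpoint coordinates on a coarse grid and interpolating the fine grid from it, can be illustrated in one dimension. This sketch equidistributes a weight function by Gauss-Seidel sweeps of a discretized elliptic grid equation and then fills the fine grid by linear interpolation; the weight function, point counts, and iteration count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def adapt_grid(weight, n_coarse=9, n_fine=65, iters=500):
    """1D two-level grid adaption sketch: equidistribute `weight` on a
    coarse grid (elliptic grid equation, Gauss-Seidel), then interpolate
    the full fine grid from the coarse solution."""
    x = np.linspace(0.0, 1.0, n_coarse)
    for _ in range(iters):
        for i in range(1, n_coarse - 1):
            # weights at the midpoints of the two adjacent cells
            wl = weight(0.5 * (x[i - 1] + x[i]))
            wr = weight(0.5 * (x[i] + x[i + 1]))
            x[i] = (wl * x[i - 1] + wr * x[i + 1]) / (wl + wr)
    s_coarse = np.linspace(0.0, 1.0, n_coarse)
    s_fine = np.linspace(0.0, 1.0, n_fine)
    return np.interp(s_fine, s_coarse, x)   # cheap fine-grid fill
```

With a weight that peaks in the middle of the domain, the adapted grid clusters points there while the fine-grid fill costs only an interpolation, which is the cost-limiting idea the abstract describes.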
NASA Astrophysics Data System (ADS)
Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.
2015-12-01
In recent years, wireless sensor networks (WSNs) have emerged to collect Earth observation data at relatively low cost and labor load, but their observations are still point data. To learn the spatial distribution of a land surface parameter, the point data must be interpolated. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km x 5 km study area located in the middle reaches of the Heihe River, western China. Because SM is related to many factors, such as topography, soil type, and vegetation, even the WSN observation grid is not dense enough to reflect the SM distribution pattern. Our idea is to revise the traditional Kriging algorithm, introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. The new Extended-Kriging algorithm thus operates on the combined spatial & spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from a SM map derived from the airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the variation of land SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy. Although reasonable accuracy is achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in WSN measurements may also need to be controlled in our further work.
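The idea of kriging on a combined spatial & spectral space can be sketched with ordinary kriging whose distances mix geographic coordinates with scaled satellite covariates. The exponential variogram, the scaling of the spectral axes, and the function name are assumptions for illustration; the paper fits its own variance function from the airborne SM map.

```python
import numpy as np

def extended_kriging(coords, covars, z, q_coord, q_covar,
                     sill=1.0, rng=1.0, scale=1.0):
    """Ordinary-kriging sketch in a combined spatial-spectral space:
    covariates (e.g. VI, albedo) are appended to the coordinates,
    scaled by `scale`, before distances are computed."""
    X = np.hstack([np.asarray(coords, float), scale * np.asarray(covars, float)])
    q = np.concatenate([np.asarray(q_coord, float),
                        scale * np.asarray(q_covar, float)])
    n = len(X)
    def gamma(h):                                   # exponential semivariogram
        return sill * (1.0 - np.exp(-h / rng))
    H = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[n, :n] = A[:n, n] = 1.0                       # unbiasedness constraint
    b = np.append(gamma(np.linalg.norm(X - q, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                   # kriging weights
    return float(w @ np.asarray(z, float))
```

The unbiasedness constraint forces the weights to sum to one, so a constant field is reproduced exactly, and a query that coincides with a sample (in the combined space) returns that sample's value.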
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging systems employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
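The spatial adaptive sampling idea, running the expensive fine-scale model only where interpolation over the spatial domain fails a tolerance check, can be sketched in 1D. The recursive bisection and midpoint error test below are a simple stand-in for the paper's criterion; function names and the tolerance are assumptions.

```python
import numpy as np

def adaptive_sample(f, x, tol=1e-3):
    """Evaluate expensive model f only where linear interpolation of
    already-computed neighbours is inadequate; return values at all
    grid points plus the number of f evaluations."""
    y = np.empty(len(x))
    calls = [0]
    def ev(i):
        calls[0] += 1
        return f(x[i])
    y[0], y[-1] = ev(0), ev(len(x) - 1)
    def fill(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        t = (x[mid] - x[lo]) / (x[hi] - x[lo])
        guess = (1 - t) * y[lo] + t * y[hi]
        exact = ev(mid)
        y[mid] = exact
        if abs(guess - exact) > tol:
            fill(lo, mid); fill(mid, hi)      # refine where interpolation fails
        else:
            for j in range(lo + 1, hi):       # accept interpolation in between
                if j != mid:
                    s = (x[j] - x[lo]) / (x[hi] - x[lo])
                    y[j] = (1 - s) * y[lo] + s * y[hi]
    fill(0, len(x) - 1)
    return y, calls[0]
```

For a field that is smooth except near a localized front, most evaluations concentrate at the front and the smooth regions are filled by interpolation, which is the source of the sublinear scaling the abstract reports.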
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fazio, A.; Henry, B.; Hood, D.
1966-01-01
Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
Parallel Grid Manipulations in Earth Science Calculations
NASA Technical Reports Server (NTRS)
Sawyer, W.; Lucchesi, R.; daSilva, A.; Takacs, L. L.
1999-01-01
The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center is moving its data assimilation system to massively parallel computing platforms. This parallel implementation of GEOS DAS will be used in the DAO's normal activities, which include reanalysis of data, and operational support for flight missions. Key components of GEOS DAS, including the gridpoint-based general circulation model and a data analysis system, are currently being parallelized. The parallelization of GEOS DAS is also one of the HPCC Grand Challenge Projects. The GEOS-DAS software employs several distinct grids. Some examples are: an observation grid, an unstructured set of points with which observed or measured physical quantities from instruments or satellites are associated; a highly structured latitude-longitude grid of points spanning the earth at given latitude-longitude coordinates, at which prognostic quantities are determined; and a computational lat-lon grid in which the pole has been moved to a different location to avoid computational instabilities. Each of these grids has a different structure and number of constituent points. In spite of that, there are numerous interactions between the grids, e.g., values on one grid must be interpolated to another, or, in other cases, grids need to be redistributed on the underlying parallel platform. The DAO has designed a parallel integrated library for grid manipulations (PILGRIM) to support the needed grid interactions with maximum efficiency. It offers a flexible interface to generate new grids, define transformations between grids and apply them. Basic communication is currently MPI, however the interfaces defined here could conceivably be implemented with other message-passing libraries, e.g., Cray SHMEM, or with shared-memory constructs. The library is written in Fortran 90. 
First performance results indicate that even difficult problems, such as the above-mentioned pole rotation (a sparse interpolation with little data locality between the physical lat-lon grid and a pole-rotated computational grid), can be solved efficiently and at the GFlop/s rates needed to solve tomorrow's high resolution earth science models. In the subsequent presentation we will discuss the design and implementation of PILGRIM as well as a number of the problems it is required to solve. Some conclusions will be drawn about the potential performance of the overall earth science models on the supercomputer platforms foreseen for these problems.
Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014
NASA Astrophysics Data System (ADS)
Junod, R.; Christy, J. R.
2016-12-01
Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early-twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging interpolation technique. This study will present results which quantify the variability and trends and compare to current trends of other related datasets that include HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).
Amini, A A; Chen, Y; Curwen, R W; Mani, V; Sun, J
1998-06-01
Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization and create tagged patterns within a deforming body such as the heart muscle. The resulting patterns define a time-varying curvilinear coordinate system on the tissue, which we track with coupled B-snake grids. B-spline bases provide local control of shape, compact representation, and parametric continuity. Efficient spline warps are proposed which warp an area in the plane such that two embedded snake grids obtained from two tagged frames are brought into registration, interpolating a dense displacement vector field. The reconstructed vector field adheres to the known displacement information at the intersections, forces corresponding snakes to be warped into one another, and for all other points in the plane, where no information is available, a C1 continuous vector field is interpolated. The implementation proposed in this paper improves on our previous variational-based implementation and generalizes warp methods to include biologically relevant contiguous open curves, in addition to standard landmark points. The methods are validated with a cardiac motion simulator, in addition to in-vivo tagging data sets.
NASA Astrophysics Data System (ADS)
Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.
2014-12-01
Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three most well known datasets are HadCRUT4 produced by the Climatic Research Unit and the Met Office Hadley Centre in UK, one produced by NASA GISS, and one produced by NOAA. Recently Berkeley Earth has also developed a gridded dataset. All these four will be compared in our analysis. The statistical properties we will focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast, and that the autocovariance function will have a stronger tail. Hence the Hurst exponent gives us information about the persistence or memory of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is more weighted towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some differences observed between the variances and memory exponents computed from the various datasets.
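A simple way to estimate the Hurst exponent described above is the aggregated-variance method: for a long-range dependent series the variance of block means scales as m^(2H-2) with block size m. This sketch is one standard estimator, not necessarily the method used in the study; the block sizes are assumptions.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent H:
    fit log(var of block means) against log(block size); the slope
    equals 2H - 2 for a long-range dependent process."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        nb = len(x) // m
        means = x[:nb * m].reshape(nb, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1.0 + slope / 2.0
```

For uncorrelated noise the block-mean variance falls as 1/m, giving H ≈ 0.5; persistent series decay more slowly and yield H > 0.5, which is the signature of the long-term variability discussed in the abstract.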
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). 
In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
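The gridding step described above, IDW with the local support of the five closest neighbours, can be sketched as follows. This is a minimal illustration under assumptions: the function name, the brute-force nearest-neighbour search, and the power-2 weighting are all illustrative choices, not the authors' implementation.

```python
import math

def idw_nearest(points, values, query, k=5, power=2.0):
    """Inverse distance weighting using the k closest neighbours.

    points : list of (x, y) sample locations
    values : sample values at those locations
    query  : (x, y) location to interpolate
    """
    # Pair each sample with its distance to the query, keep the k nearest.
    nearest = sorted(
        (math.dist(p, query), v) for p, v in zip(points, values)
    )[:k]
    # A sample coincident with the query is returned exactly.
    if nearest[0][0] == 0.0:
        return nearest[0][1]
    weights = [1.0 / d**power for d, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)
```

In a production setting the linear scan would be replaced by a spatial index (e.g. a k-d tree), but the weighting arithmetic is the same.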
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
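The off-grid spike events mentioned above are found by interpolating the membrane potential between grid points to locate the threshold crossing. A minimal sketch of the linear-interpolation case (the paper also treats higher-order interpolation; the function name and signature here are illustrative):

```python
def spike_time_linear(t0, h, v0, v1, theta):
    """Estimate the off-grid spike time of a membrane potential that
    rises from v0 (at grid time t0) to v1 (at t0 + h), crossing the
    threshold theta somewhere inside the step."""
    assert v0 < theta <= v1, "threshold must be crossed within the step"
    # Linear interpolation of the crossing point between the two grid times.
    return t0 + h * (theta - v0) / (v1 - v0)
```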
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
NASA Astrophysics Data System (ADS)
Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective
Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan
2015-01-01
Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, the traditional point-based performance evaluation strategy for these methods remains stagnant, which could cause unreasonable mapping results. To address this challenge, this study employs ‘information entropy’, an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of the LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite the similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibit more detailed variations than those interpolated by the OK method (i.e. information entropy, 7.79 vs. 3.63). Results suggest that LUR modeling could better refine the spatial distribution of PM2.5 concentrations compared to OK interpolation. The significance of this study primarily lies in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that is not defined on certain subsets of the grid. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, which makes the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
GENIE(++): A Multi-Block Structured Grid System
NASA Technical Reports Server (NTRS)
Williams, Tonya; Nadenthiran, Naren; Thornburg, Hugh; Soni, Bharat K.
1996-01-01
The computer code GENIE++ is a continuously evolving grid system containing a multitude of proven geometry/grid techniques. The generation process in GENIE++ is based on an earlier version. The process uses several techniques either separately or in combination to quickly and economically generate sculptured geometry descriptions and grids for arbitrary geometries. The computational mesh is formed by using an appropriate algebraic method. Grid clustering is accomplished with either exponential or hyperbolic tangent routines which allow the user to specify a desired point distribution. Grid smoothing can be accomplished by using an elliptic solver with proper forcing functions. B-spline and Non-Uniform Rational B-spline (NURBS) algorithms are used for surface definition and redistribution. The built-in sculptured geometry definition with desired distribution of points, automatic Bézier curve/surface generation for interior boundaries/surfaces, and surface redistribution is based on NURBS. Weighted Lagrange/Hermite transfinite interpolation methods, interactive geometry/grid manipulation modules, and on-line graphical visualization of the generation process are salient features of this system which result in significant time savings for a given geometry/grid application.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
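The transfer operators named above can be illustrated in one dimension: full weighting for restriction of residuals (with direct injection at coincident endpoints) and linear interpolation for prolongation. This is a hedged sketch; FMG3D's actual operators act on 2-D/3-D structured grids and use the bilinear/bicubic variants described in the abstract.

```python
def restrict_full_weighting(fine):
    """Full weighting onto the coarse grid:
    coarse[i] = 0.25*f[2i-1] + 0.5*f[2i] + 0.25*f[2i+1]."""
    n = (len(fine) - 1) // 2
    coarse = [fine[0]]  # boundary value copied (direct injection)
    for i in range(1, n):
        coarse.append(0.25*fine[2*i-1] + 0.5*fine[2*i] + 0.25*fine[2*i+1])
    coarse.append(fine[-1])
    return coarse

def prolong_linear(coarse):
    """Linear prolongation: copy coincident points, average midpoints."""
    fine = []
    for i in range(len(coarse) - 1):
        fine.append(coarse[i])
        fine.append(0.5*(coarse[i] + coarse[i+1]))
    fine.append(coarse[-1])
    return fine
```

Note that restricting linear data and then prolonging it reproduces the data exactly, which is why these low-order transfers suffice for second-order discretizations.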
NASA Astrophysics Data System (ADS)
Feng, Wenqiang; Guo, Zhenlin; Lowengrub, John S.; Wise, Steven M.
2018-01-01
We present a mass-conservative full approximation storage (FAS) multigrid solver for cell-centered finite difference methods on block-structured, locally Cartesian grids. The algorithm is essentially a standard adaptive FAS (AFAS) scheme, but with a simple modification that comes in the form of a mass-conservative correction to the coarse-level force. This correction is facilitated by the creation of a zombie variable, analogous to a ghost variable, but defined on the coarse grid and lying under the fine grid refinement patch. We show that a number of different types of fine-level ghost cell interpolation strategies could be used in our framework, including low-order linear interpolation. In our approach, the smoother, prolongation, and restriction operations need never be aware of the mass conservation conditions at the coarse-fine interface. To maintain global mass conservation, we need only modify the usual FAS algorithm by correcting the coarse-level force function at points adjacent to the coarse-fine interface. We demonstrate through simulations that the solver converges geometrically, at a rate that is h-independent, and we show the generality of the solver, applying it to several nonlinear, time-dependent, and multi-dimensional problems. In several tests, we show that second-order asymptotic (h → 0) convergence is observed for the discretizations, provided that (1) at least linear interpolation of the ghost variables is employed, and (2) the mass conservation corrections are applied to the coarse-level force term.
Ground Magnetic Data for West-Central Colorado
Richard Zehner
2012-03-08
Modeled ground magnetic data was extracted from the Pan American Center for Earth and Environmental Studies database at http://irpsrvgis08.utep.edu/viewers/Flex/GravityMagnetic/GravityMagnetic_CyberShare/ on 2/29/2012. The downloaded text file was then imported into an Excel spreadsheet. This spreadsheet data was converted into an ESRI point shapefile in UTM Zone 13 NAD27 projection, showing location and magnetic field strength in nano-Teslas. This point shapefile was then interpolated to an ESRI grid using an inverse-distance weighting method, using ESRI Spatial Analyst. The grid was used to create a contour map of magnetic field strength.
NASA Technical Reports Server (NTRS)
Bereketab, Semere; Wang, Hong-Wei; Mish, Patrick; Devenport, William J.
2000-01-01
Two grids have been developed for the Virginia Tech 6 ft x 6 ft Stability wind tunnel for the purpose of generating homogeneous isotropic turbulent flows for the study of unsteady airfoil response. The first, a square bi-planar grid with a 12" mesh size and an open area ratio of 69.4%, was mounted in the wind tunnel contraction. The second grid, a metal weave with a 1.2 in. mesh size and an open area ratio of 68.2%, was mounted in the tunnel test section. Detailed statistical and spectral measurements of the turbulence generated by the two grids are presented for wind tunnel free stream speeds of 10, 20, 30 and 40 m/s. These measurements show the flows to be closely homogeneous and isotropic. Both grids produce flows with a turbulence intensity of about 4% at the location planned for the airfoil leading edge. Turbulence produced by the large grid has an integral scale of some 3.2 inches here. Turbulence produced by the small grid is an order of magnitude smaller. For wavenumbers below the upper limit of the inertial subrange, the spectra and correlations measured with both grids at all speeds can be represented using the von Kármán interpolation formula with a single velocity and length scale. The spectra may be accurately represented over the entire wavenumber range by a modification of the von Kármán interpolation formula that includes the effects of dissipation. These models are most accurate at the higher speeds (30 and 40 m/s).
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, for example, registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. 
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
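For reference, one common construction of the Padua points takes pairs of Chebyshev-Lobatto coordinates whose index sum is even, yielding (n+1)(n+2)/2 nodes for total degree n. This is a sketch of one of the four standard Padua families; the convention used in the paper may differ.

```python
import math

def padua_points(n):
    """Padua-like nodes for total degree n >= 1 on [-1, 1]^2: pairs of
    Chebyshev-Lobatto points with even index sum. One of the standard
    Padua families; the node count is (n+1)*(n+2)//2."""
    return [
        (math.cos(j * math.pi / n), math.cos(k * math.pi / (n + 1)))
        for j in range(n + 1)
        for k in range(n + 2)
        if (j + k) % 2 == 0
    ]
```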
An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Lessard, Victor R.
1990-01-01
The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multi-grid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multi-grid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multi-gridding.
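The trilinear transfer between mesh interfaces follows the standard trilinear interpolation formula; a sketch for a unit cell is shown below. This is illustrative only, not MaGGiE's actual routine.

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation inside a unit cell.

    c[i][j][k] holds the corner values (i, j, k in {0, 1});
    x, y, z are fractional coordinates in [0, 1]."""
    # Interpolate along x on the four cell edges...
    c00 = c[0][0][0]*(1-x) + c[1][0][0]*x
    c01 = c[0][0][1]*(1-x) + c[1][0][1]*x
    c10 = c[0][1][0]*(1-x) + c[1][1][0]*x
    c11 = c[0][1][1]*(1-x) + c[1][1][1]*x
    # ...then along y on the two faces, then along z.
    c0 = c00*(1-y) + c10*y
    c1 = c01*(1-y) + c11*y
    return c0*(1-z) + c1*z
```

Because the scheme is exact for any multilinear function, fluxes that vary linearly across the interface are transferred without loss, which is consistent with the small conservation errors reported.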
NASA Astrophysics Data System (ADS)
Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping
2017-10-01
A novel method for daily temperature and precipitation downscaling is proposed in this study which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of high resolution temperature of the NCEP climate forecast system reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomaly relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large and local-scale processes represented by the gain matrix will change accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high resolution details without changing large scale characteristics. It results in much lower absolute errors in local scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. 
The downscaled ensemble projects significant warming with amplitudes of 3.9 and 6.5 °C for 2050s and 2080s relative to 1990s in Ontario, respectively; Cooling degree days and hot days will significantly increase over southern Ontario and heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario and heavy precipitation events will increase as well. These results are consistent with conclusions in many other studies in the literature.
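The gain matrix at the heart of EnOI plays the same role as the optimal-interpolation (Kalman) gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A toy sketch for one analysis point and two observations follows; the matrix values, function names, and the hand-inverted 2x2 are illustrative assumptions, not the authors' code.

```python
def oi_gain(b_xy, b_yy, r):
    """Optimal-interpolation gain for one analysis point and two
    observations: K = B_xy (B_yy + R)^-1, with R diagonal and the
    2x2 matrix inverted by hand."""
    a = b_yy[0][0] + r[0]
    b = b_yy[0][1]
    c = b_yy[1][0]
    d = b_yy[1][1] + r[1]
    det = a*d - b*c
    inv = [[d/det, -b/det], [-c/det, a/det]]
    return [b_xy[0]*inv[0][0] + b_xy[1]*inv[1][0],
            b_xy[0]*inv[0][1] + b_xy[1]*inv[1][1]]

def oi_analysis(xb, yo, hxb, gain):
    """Analysis update: xb + K (y - H xb)."""
    return xb + sum(g*(o - h) for g, o, h in zip(gain, yo, hxb))
```

In the paper's setting B is estimated from the enlarged daily-anomaly ensemble, so the weights encode the realistic spatial correlations the abstract describes.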
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into the climatic mean and anomaly component. A spatial interpolation is developed which combines the 3D thin-plate spline scheme for the climatological mean and the 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the relationship between the decrease in Min/Max temperature with elevation is robust and reliable on a long time-scale. The characteristics of the anomaly field tend to be related to elevation variation only weakly, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset that covers the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations, which are the national reference climatological stations, the basic meteorological observing stations and the ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error estimation of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only achieves a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. 
Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has a relatively high precision in plains and flatlands and a relatively low precision in mountainous areas.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes for imposing the divergence-free condition on the magnetic field in an MHD equation, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, in the smooth regions a fixed stencil approximation without computing the non-linear WENO weights is used, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution of the corresponding fine grid.
Capacitive touch sensing : signal and image processing algorithms
NASA Astrophysics Data System (ADS)
Baharav, Zachi; Kakarala, Ramakrishna
2011-03-01
Capacitive touch sensors have been in use for many years, and recently gained center stage with their ubiquitous use in smartphones. In this work we analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding the finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms, including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up tables, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
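Parabolic curve fitting, one of the interpolation algorithms listed, refines the finger position between electrodes from three adjacent readings. A standard three-point peak-interpolation sketch (illustrative; the authors' implementation is not given in the abstract):

```python
def parabolic_peak(y_prev, y_peak, y_next):
    """Sub-electrode offset of the maximum of a parabola fitted through
    three adjacent sensor readings, where y_peak is the largest.
    Returns an offset in (-0.5, 0.5) relative to the peak electrode."""
    denom = y_prev - 2.0*y_peak + y_next
    if denom == 0.0:
        return 0.0  # flat top: no sub-electrode information
    return 0.5 * (y_prev - y_next) / denom
```

The final touch coordinate is then the peak electrode's pitch position plus this fractional offset times the electrode pitch.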
NASA Astrophysics Data System (ADS)
Yuval; Rimon, Y.; Graber, E. R.; Furman, A.
2013-07-01
A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. That inherent trade-off between interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross testing is used to assess and compare the performance of the interpolations. 
The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli shoreline.
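The accuracy/coverage trade-off described above can be sketched with a circular inclusion zone: grid points with no observation inside the zone are simply left unassigned. This is a minimal illustration under assumptions (function name, power-2 weighting, and the `None` convention are illustrative, not the authors' code).

```python
import math

def idw_with_inclusion(points, values, query, radius, power=2.0):
    """Inverse distance weighting restricted to a circular inclusion
    zone: only observations within `radius` of the query contribute.
    Returns None when no observation falls inside the zone, leaving
    that grid point unassigned."""
    num = den = 0.0
    for p, v in zip(points, values):
        d = math.dist(p, query)
        if d > radius:
            continue  # outside the inclusion zone: not relevant
        if d == 0.0:
            return v  # coincident observation is returned exactly
        w = 1.0 / d**power
        num += w * v
        den += w
    return num / den if den > 0.0 else None
```

Shrinking `radius` excludes distant, less relevant observations (better accuracy) but yields more `None` grid points (worse coverage), which is exactly the trade-off the abstract demonstrates with circular and elliptical zones.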
Flow solution on a dual-block grid around an airplane
NASA Technical Reports Server (NTRS)
Eriksson, Lars-Erik
1987-01-01
The compressible flow around a complex fighter-aircraft configuration (fuselage, cranked delta wing, canard, and inlet) is simulated numerically using a novel grid scheme and a finite-volume Euler solver. The patched dual-block grid is generated by an algebraic procedure based on transfinite interpolation, and the explicit Runge-Kutta time-stepping Euler solver is implemented with a high degree of vectorization on a Cyber 205 processor. Results are presented in extensive graphs and diagrams and characterized in detail. The concentration of grid points near the wing apex in the present scheme is shown to facilitate capture of the vortex generated by the leading edge at high angles of attack and modeling of its interaction with the canard wake.
GENIE - Generation of computational geometry-grids for internal-external flow configurations
NASA Technical Reports Server (NTRS)
Soni, B. K.
1988-01-01
Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
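Several records in this collection rely on transfinite interpolation to fill a grid's interior from its boundary curves. A minimal bilinear (Coons-patch) sketch of the idea, with illustrative unit-square boundaries rather than GENIE's weighted formulation:

```python
import numpy as np

def transfinite_grid(bottom, top, left, right):
    """Bilinear transfinite (Coons) interpolation of interior grid points
    from four boundary curves, given as (n, 2) and (m, 2) point arrays.
    Assumes the corner points of adjacent curves agree."""
    n, m = bottom.shape[0], left.shape[0]
    s = np.linspace(0.0, 1.0, n)[:, None, None]   # parameter along bottom/top
    t = np.linspace(0.0, 1.0, m)[None, :, None]   # parameter along left/right
    B, T = bottom[:, None, :], top[:, None, :]
    L, R = left[None, :, :], right[None, :, :]
    # sum of the two ruled surfaces minus the bilinear corner correction
    grid = (1 - t) * B + t * T + (1 - s) * L + s * R
    corners = ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
               + (1 - s) * t * top[0] + s * t * top[-1])
    return grid - corners

# unit-square boundaries: interior points come out on a uniform grid
u = np.linspace(0, 1, 5)
bottom = np.stack([u, np.zeros_like(u)], axis=1)
top = np.stack([u, np.ones_like(u)], axis=1)
left = np.stack([np.zeros_like(u), u], axis=1)
right = np.stack([np.ones_like(u), u], axis=1)
g = transfinite_grid(bottom, top, left, right)
print(g[2, 2])   # centre of the square -> [0.5 0.5]
```

Weighted transfinite interpolation, as used in GENIE, replaces the linear blending functions (1 - s), s, (1 - t), t with stretching-controlled weights.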
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
Coastal bathymetry data collected in 2011 from the Chandeleur Islands, Louisiana
DeWitt, Nancy T.; Pfeiffer, William R.; Bernier, Julie C.; Buster, Noreen A.; Miselis, Jennifer L.; Flocks, James G.; Reynolds, Billy J.; Wiese, Dana S.; Kelso, Kyle W.
2014-01-01
This report serves as an archive of processed interferometric swath and single-beam bathymetry data. Geographic Information System data products include a 50-meter cell-size interpolated bathymetry grid surface, trackline maps, and point data files. Additional files include error analysis maps, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data at 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are taken from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database and analyzed. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty introduced by the interpolation technique. The nine statistical interpolation techniques selected are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd- and 3rd-degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are downsampled to 6-hourly values (i.e., wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e., the wind speeds at each hour within the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. 
A penalty point system based on the coefficient of variation of the root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
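The downsample-then-reconstruct experiment described above is easy to reproduce in miniature. The sketch below uses a synthetic diurnal wind series (not the CFSR data) and compares three of the nine candidate interpolants via RMSE:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

rng = np.random.default_rng(0)
t = np.arange(0, 24 * 30 + 1)                  # 30 days of hourly samples
wind = 8 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

t6, w6 = t[::6], wind[::6]                     # downsample to 6-hourly

candidates = {
    "linear": np.interp(t, t6, w6),
    "cubic spline": CubicSpline(t6, w6)(t),
    "PCHIP": PchipInterpolator(t6, w6)(t),
}
rmses = {}
for name, rec in candidates.items():
    rmses[name] = float(np.sqrt(np.mean((rec - wind) ** 2)))
    print(f"{name:12s} RMSE = {rmses[name]:.3f} m/s")
```

The study's penalty point system additionally scores normalized mean absolute error and prediction skill before ranking; RMSE alone is shown here for brevity.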
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well suited to interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
NASA Astrophysics Data System (ADS)
Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao
2014-08-01
In the generation of non-rotationally symmetric microstructure surfaces by turning with a Fast Tool Servo (FTS), non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of the traditional interpolation rule of equal angle, and adds tool radius compensation. All the interpolation points are equidistant in the radial distribution because of the constant feed speed of the X slider; the high-frequency tool radius compensation components lie in both the X and Z directions, which makes the X slider difficult to follow the input orders due to its large mass. Newton's iterative method is used to calculate the neighboring contour tangent point coordinates, with the interpolation point's X position as the initial value; in this way the new Z coordinate value is obtained, and the high-frequency motion component in the X direction is decomposed into the Z direction. Taking as a test a typical microstructure with a 4 μm PV value, mixed from two sine waves of 70 μm wavelength, the maximum profile error at an angle of fifteen degrees is less than 0.01 μm when turning with a diamond tool with a large radius of 80 μm. The sinusoidal grid is machined successfully on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points generated by filtering out the first five harmonics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Austin, Anthony P.; Trefethen, Lloyd N.
The trigonometric interpolants to a periodic function f in equispaced points converge if f is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if f is continuous. What if the points are perturbed? With equispaced grid spacing h, let each point be perturbed by an arbitrary amount <= alpha*h, where alpha in [0, 1/2) is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for alpha >= 1/4. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all alpha < 1/2 if f is twice continuously differentiable, with the convergence rate depending on the smoothness of f. More precisely, it is enough for f to have 4*alpha derivatives in a certain sense, and we conjecture that 2*alpha derivatives are enough. Connections with the Fejér-Kalmár theorem are discussed.
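The perturbed-node setting is simple to reproduce numerically. The sketch below (a toy experiment, not the paper's analysis) builds the trigonometric interpolant through perturbed nodes by solving a small linear system, and forms the generalized trapezoidal quadrature estimate as 2*pi times the constant-mode coefficient:

```python
import numpy as np

def trig_interp_coeffs(x, f):
    """Coefficients c_k of the trigonometric interpolant sum_k c_k e^{ikx}
    through N = len(x) (possibly perturbed) nodes on [0, 2*pi)."""
    N = x.size                                 # assumed odd
    k = np.arange(-(N // 2), N // 2 + 1)
    A = np.exp(1j * np.outer(x, k))
    return k, np.linalg.solve(A, f.astype(complex))

rng = np.random.default_rng(1)
N, alpha = 21, 0.3                             # alpha < 1/2
h = 2 * np.pi / N
x = np.arange(N) * h + rng.uniform(-alpha * h, alpha * h, N)
f = np.exp(np.sin(x))                          # smooth periodic test function

k, c = trig_interp_coeffs(x, f)
quad = 2 * np.pi * c[k == 0].real[0]           # quadrature estimate = 2*pi*c_0

# reference value from a fine equispaced trapezoidal rule
xe = np.linspace(0.0, 2 * np.pi, 20001)
fe = np.exp(np.sin(xe))
exact = float(np.sum((fe[1:] + fe[:-1]) / 2) * (xe[1] - xe[0]))
print(abs(quad - exact))                       # tiny for this smooth f
```

With alpha = 0.3 < 1/2 and an analytic f, the quadrature error is negligible, consistent with the convergence result stated above.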
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.
1994-05-01
Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations is located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, the Arabian, Gobi, and Australian deserts; likewise the density is quite sparse in cold regions such as Antarctica, Northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
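The scattered data problem stated above can be illustrated with synthetic "stations" (the locations, field, and interpolation method below are all stand-ins, not the report's data or algorithm). Note how grid points outside the convex hull of the stations, e.g. near the poles, come back undefined, mirroring the coverage gaps the authors describe:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# stand-in for the airport stations: (lon, lat) points and a temperature field
lon = rng.uniform(-180, 180, 500)
lat = rng.uniform(-60, 75, 500)                # few "stations" near the poles
temp = 30 * np.cos(np.radians(lat)) + rng.normal(0, 1, lat.size)

# one-degree target grid, as in the problem statement above
glon, glat = np.meshgrid(np.arange(-180.0, 181.0), np.arange(-90.0, 91.0))
gridded = griddata((lon, lat), temp, (glon, glat), method="linear")

frac_missing = float(np.isnan(gridded).mean())
print(frac_missing)            # fraction of grid points outside the data hull
```

Methods such as radial basis functions can extrapolate into the unsampled regions, at the cost of less controlled behavior far from the stations.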
Development of a cross-section based stream package for MODFLOW
NASA Astrophysics Data System (ADS)
Ou, G.; Chen, X.; Irmak, A.
2012-12-01
Accurate simulation of stream-aquifer interactions for wide rivers using the streamflow routing package in MODFLOW is very challenging. To better represent a wide river spanning multiple model grid cells, a Cross-Section based streamflow Routing (CSR) package is developed and incorporated into MODFLOW to simulate the interaction between streams and aquifers. In the CSR package, a stream segment is represented as a four-point polygon instead of the polyline traditionally used in streamflow routing simulation. Each stream segment is composed of upstream and downstream cross-sections. A cross-section consists of a number of streambed points possessing coordinates, streambed thicknesses and streambed hydraulic conductivities to describe the streambed geometry and hydraulic properties. The left and right end points are used to determine the locations of the stream segments. According to the cross-section geometry and hydraulic properties, CSR calculates the new stream stage at the cross-section using Brent's method to solve Manning's equation. A module is developed to automatically compute the area of the stream segment polygon on each intersected MODFLOW grid cell as the upstream and downstream stages change. The stream stage and streambed hydraulic properties of model grids are interpolated based on the streambed points. Streambed leakage is computed as a function of streambed conductance and the difference between the groundwater level and stream stage. The Muskingum-Cunge flow routing scheme with variable parameters is used to simulate the streamflow, with groundwater discharge or recharge contributing as lateral flow. An example is used to illustrate the capabilities of the CSR package. The result shows that the CSR is applicable to describing the spatial and temporal variation in the interaction between streams and aquifers. 
Preparation of the input data is simple because the program automatically interpolates the cross-section data to each model grid cell.
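The stage computation step, solving Manning's equation with Brent's method, can be sketched for an assumed rectangular cross-section (CSR uses the actual surveyed cross-section geometry; the channel width, roughness, and slope values below are illustrative):

```python
from scipy.optimize import brentq

def manning_depth(Q, b, n, S):
    """Flow depth h in a rectangular channel of width b from Manning's
    equation Q = (1/n) * A * R^(2/3) * sqrt(S), solved with Brent's method."""
    def residual(h):
        A = b * h                      # flow area
        R = A / (b + 2 * h)            # hydraulic radius = area / wetted perim.
        return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5 - Q
    # bracket: residual < 0 at a vanishing depth, > 0 at a very large depth
    return brentq(residual, 1e-6, 100.0)

h = manning_depth(Q=25.0, b=10.0, n=0.035, S=0.001)
print(round(h, 3))                     # depth in metres for this channel
```

For a natural cross-section, the area and wetted perimeter become piecewise functions of stage built from the streambed points, but the root-finding structure is the same.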
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation (around 3 TECU) than the others. 
The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area.
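For reference, the ordinary Kriging baseline that the proposed method is compared against can be sketched as follows. This is a minimal illustration with an assumed exponential semivariogram and made-up station TEC values; it does not include the paper's variance component estimation:

```python
import numpy as np

def ordinary_kriging(xy, z, xi, sill=1.0, vrange=50.0, nugget=0.0):
    """Ordinary Kriging with an assumed exponential semivariogram.
    Returns the prediction and the kriging variance at point xi."""
    def gamma(hdist):
        return nugget + sill * (1.0 - np.exp(-3.0 * hdist / vrange))

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)               # semivariogram matrix
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xi, axis=-1))
    w = np.linalg.solve(A, b)          # weights plus Lagrange multiplier
    pred = float(w[:n] @ z)
    var = float(b @ w)                 # kriging variance (incl. multiplier)
    return pred, var

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tec = np.array([20.0, 25.0, 30.0, 35.0])       # illustrative TECU values
pred, var = ordinary_kriging(xy, tec, np.array([5.0, 5.0]))
print(pred, var)   # symmetric layout -> equal weights, prediction 27.5
```

The paper's contribution replaces the fixed sill/nugget assumption with variance components estimated jointly for the ionospheric signal and the measurement errors.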
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. Errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or other uncertainty methods to approximate errors.
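The three-grid Richardson extrapolation referred to above can be sketched generically. The grid-study values below are synthetic (a second-order solution f(h) = 1 + 0.5*h², not CFD output):

```python
import math

def richardson(f1, f2, f3, r=2.0):
    """Richardson extrapolation from solutions on three systematically
    refined grids (f1 finest, f3 coarsest) with constant refinement ratio r."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)  # observed order
    f_extrap = f1 + (f1 - f2) / (r ** p - 1.0)               # error-corrected
    return p, f_extrap

# synthetic grid study: f(h) = 1 + 0.5*h**2 sampled at h = 1/4, 1/2, 1
p, f_extrap = richardson(1.03125, 1.125, 1.5)
print(p, f_extrap)     # observed order ~2, extrapolated value ~1
```

The difference f1 - f_extrap then serves as the discretization error estimate for the finest grid; the interpolation scheme studied in the paper is what supplies f1, f2, f3 at common locations when the grids do not share nodes.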
Optimization of pressure probe placement and data analysis of engine-inlet distortion
NASA Astrophysics Data System (ADS)
Walter, S. F.
The purpose of this research is to examine methods by which quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize sampling locations in the flow, and the sensitivity of the results to the number of sample locations. The main parameters indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on the total pressure distortion, which describes the amount of non-uniformity in the flow as it enters the engine. All engines must tolerate some level of distortion; however, too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data are discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern, yet there is no guidance on how best to manipulate discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40-point subsets and interpolating back to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods in an attempt to establish the best method for interpolating small sets of data into an accurate, continuous contour map. The interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space. 
Spline interpolation methods should be used, as they result in the most accurate, precise, and visually correct predictions when compared with results achieved from the full data sets. Researchers were interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and experimental results from an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes, and a probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations brought the parameters of interest within less than 10% of the exact solution for almost all cases. For the two-dimensional inlet, the results were not as clear: 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40. The number of points falling within a 1% tolerance band of the exact solution was counted as good points. The results were normalized for each data set, and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work. 
The sensitivity results can also be used directly, by comparing the number of good points obtained with various numbers of probes. The sensitivity is higher when fewer probes are used and gradually tapers off near 40 probes: there is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
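The Monte Carlo probe-count experiment can be reproduced in miniature. The sketch below uses a synthetic 320-value pressure map and a simple (max - min)/mean index standing in for the actual distortion index, with a 10% tolerance band:

```python
import numpy as np

rng = np.random.default_rng(3)
full = 1.0 - 0.05 * rng.random(320)            # stand-in 320-probe pressure map

def distortion(p):                             # simple (max - min)/mean index
    return (p.max() - p.min()) / p.mean()

exact = distortion(full)
fractions = {}
for n_probes in (20, 30, 40):
    trials = np.array([distortion(rng.choice(full, n_probes, replace=False))
                       for _ in range(2000)])
    fractions[n_probes] = float(np.mean(np.abs(trials - exact) / exact <= 0.10))
    print(n_probes, "probes:", round(fractions[n_probes], 3), "within 10%")
```

Because a subset rarely captures both pressure extremes, the subset index is biased low, and the fraction of "good" trials rises with probe count, the same diminishing-returns behavior described above.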
An efficient HZETRN (a galactic cosmic ray transport code)
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.
1992-01-01
An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm(exp 2) is found when a 45-point energy grid is used. The propagation step size, which is related to perturbation theory, is also reevaluated.
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation-diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation-diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
NASA Astrophysics Data System (ADS)
Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.
2017-12-01
Large point sources in the Chungnam area have received nationwide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevailing summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including aircraft measurements. In general, the horizontal grid resolution of Eulerian photochemical models has profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models; that is, emissions in a grid cell are assumed to be well mixed under the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.
Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft
NASA Technical Reports Server (NTRS)
Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter
2016-01-01
This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced-order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together ROMs at grid points to build a global LPV ASE ROM applicable to arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19); hence, it holds great promise for robust ASE controller synthesis and novel vehicle design.
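One of the sequential reduction steps named above, modal reduction, can be sketched in its simplest form: project the state-space model onto its slowest eigenmodes. This toy example (a diagonal system, not the X-56A model) only illustrates the projection mechanics:

```python
import numpy as np

def modal_truncate(A, B, C, k):
    """Modal reduction: project x' = Ax + Bu, y = Cx onto its k
    slowest-decaying eigenmodes."""
    lam, V = np.linalg.eig(A)
    keep = np.argsort(np.abs(lam.real))[:k]    # slowest-decaying modes
    Vk = V[:, keep]
    W = np.linalg.pinv(Vk)                     # left projector, W @ Vk = I
    return W @ A @ Vk, W @ B, C @ Vk

# stable toy system: two slow modes kept, two fast modes discarded
A = np.diag([-1.0, -2.0, -100.0, -200.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr = modal_truncate(A, B, C, 2)
print(Ar.shape, np.sort(np.linalg.eigvals(Ar).real))
```

Balanced truncation, also part of the suite, instead ranks states by Hankel singular values rather than eigenvalue speed, and the common-subspace reprojection step is what makes the resulting local ROMs interpolable across the flight envelope.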
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one carrying the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database, constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extensible to high-dimensional flight parameter spaces, and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
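The greedy sampling loop described above can be sketched on a 1D toy problem, with linear interpolation standing in for the Kriging surrogate and a scalar function standing in for the frequency-domain model error across channels:

```python
import numpy as np

def greedy_sample(f, grid, tol=1e-2, max_iter=200):
    """Greedily add the grid point where the current surrogate is worst
    (largest relative error) until the tolerance is met."""
    chosen = [grid[0], grid[-1]]               # seed with the corner points
    truth = f(grid)
    for _ in range(max_iter):
        xs = np.sort(np.array(chosen))
        approx = np.interp(grid, xs, f(xs))    # surrogate = interpolation
        rel = np.abs(approx - truth) / np.maximum(np.abs(truth), 1e-12)
        worst = int(np.argmax(rel))
        if rel[worst] <= tol:
            break
        chosen.append(grid[worst])             # sample the worst-error point
    return np.sort(np.array(chosen)), float(rel.max())

grid = np.linspace(0.0, 1.0, 201)
pts, err = greedy_sample(lambda x: np.exp(3 * x) * np.sin(5 * x) + 25.0, grid)
print(len(pts), err)       # far fewer samples than the 201-point grid
```

As in the paper, the selected points cluster where the underlying behavior varies fastest, which is the argument for adaptive rather than uniform sampling of the flight parameter space.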
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of a DEM is largely a function of the accuracy of individual survey points, the field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) and point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when a TIN or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a given survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
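The local-variation predictor described above (standard deviation of elevations in a moving window) can be sketched with the standard variance identity var = E[z²] − E[z]². The synthetic step surface below stands in for a bank edge and is not the River Nent data; the 5-cell window is likewise only a stand-in for the 0.2-m diameter window of the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(dem, size=5):
    """Standard deviation of elevation in a size x size moving window,
    i.e. the local topographic variation used to predict DEM error."""
    mean = uniform_filter(dem, size)
    mean_sq = uniform_filter(dem * dem, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# Synthetic surface: a flat bar with a sharp 2 m step down the middle,
# mimicking a bank edge where DEM errors concentrate.
dem = np.zeros((40, 40))
dem[:, 20:] = 2.0
sigma = local_std(dem, size=5)
print(sigma.max())   # largest variation sits on the slope break
```

Mapping this statistic over a DEM gives the standard-deviation grid to which the error curves of the study could be applied.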
Progress in Grid Generation: From Chimera to DRAGON Grids
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Kao, Kai-Hsiung
1994-01-01
Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach has evolved. A chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high-quality grid readily accessible for efficient solution schemes. The chimera method has been shown to generate grids efficiently about complex geometries and has been demonstrated to deliver accurate aerodynamic predictions of complex flows. While its geometrical flexibility is attractive, the interpolation of data in the overlapped regions, which in today's 3D practice is done in a nonconservative fashion, is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adopts the strengths of the unstructured grid while keeping its weaknesses minimal. Like the chimera method, we first divide the physical domain into a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation that does not necessarily follow the governing equations, we use unstructured grids only to directly replace the regions of arbitrary grid overlap. This new adaptation of the chimera approach is coined the DRAGON grid. The unstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) it preserves the strengths of the chimera grid; (2) it eliminates difficulties sometimes encountered in the chimera scheme, such as orphan points and poor-quality interpolation stencils; and (3) it makes grid communication fully conservative and consistent insofar as the governing equations are concerned.
To demonstrate its use, the governing equations are discretized using the newly proposed flux scheme, AUSM+, which will be briefly described herein. Numerical tests on representative 2D inviscid flows are given for demonstration. Finally, extension to 3D is underway, only paced by the availability of the 3D unstructured grid generator.
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
Integrating bathymetric and topographic data
NASA Astrophysics Data System (ADS)
Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat
2017-11-01
The resolution of bathymetric and topographic data significantly affects the accuracy of tsunami run-up and inundation simulations. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have a seamless integration of high-resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to a Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
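A minimal version of such an RMSE comparison might look as follows. Kriging is omitted here because it requires fitting a variogram model (e.g. via an external package such as PyKrige); TPS and MQ use SciPy's RBFInterpolator, and IDP is written out directly. The synthetic surface and the train/test split are illustrative, not the Penang Island data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic "bathymetry": scattered depth samples of a smooth surface.
pts = rng.uniform(0, 1, size=(200, 2))
depth = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])
train, test = pts[:150], pts[150:]
z_train, z_test = depth[:150], depth[150:]

def idw(xy, xy_obs, z_obs, power=2.0):
    # Inverse Distance to a Power: weights ~ 1 / d**power.
    d = np.linalg.norm(xy[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ z_obs) / w.sum(axis=1)

methods = {
    "TPS": RBFInterpolator(train, z_train, kernel="thin_plate_spline"),
    "MQ": RBFInterpolator(train, z_train, kernel="multiquadric", epsilon=1.0),
    "IDP": lambda xy: idw(xy, train, z_train),
}
for name, f in methods.items():
    rmse = np.sqrt(np.mean((f(test) - z_test) ** 2))
    print(f"{name}: RMSE = {rmse:.4f}")
```

Holding out part of the data, as done here, is one simple alternative to the visual comparison against the nautical chart used in the study.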
NASA Astrophysics Data System (ADS)
Re, B.; Dobrzynski, C.; Guardone, A.
2017-07-01
A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes, and they are taken into account by adding fictitious numerical fluxes to the governing equations. This peculiar interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities such that the Geometric Conservation Law is automatically fulfilled even for connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around translating infinite- and finite-span NACA 0012 wings moving through the domain at the flight speed. The proposed adaptive scheme is also applied to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to account simultaneously for the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
Climate Signal Detection in Wine Quality Using Gridded vs. Station Data in North-East Hungary
NASA Astrophysics Data System (ADS)
Mika, Janos; Razsi, Andras; Gal, Lajos
2017-04-01
The grapevine is one of the oldest cultivated plants. Today's viticultural regions for quality wine production are located in relatively narrow geographical, and therefore climatic, niches. Our target area, the Matra Region in NE Hungary, is fairly close to the edge of optimal wine production in terms of its climate conditions. Fifty years (1961-2010) of wine quality data (natural sugar content, in weight % of must) are analysed and compared to parallel climate variables. Two sets of station-based monthly temperature, sunshine duration and precipitation data, taken from the neighbouring stations Eger-Kőlyuktető (1961-2010) and Kompolt (1976-2006), are used in 132 combinations, together with daily grid-point data provided by the CarpatClim Project (www.carpatclim-eu.org/pages/home). By now it is clear that (1) wine quality is in significant negative correlation with annual precipitation and in positive correlation with temperature and sunshine duration. (2) Applying a wide combination of monthly data, we obtain even stronger correlations (higher significance according to t-tests) even from the station-based data, but it is difficult to select an optimum model from the many proper combinations differing only slightly in performance over the test sample. (3) The interpolated site-specific areal averages from the grid-point data provide even better results and stronger differences between the best models and the few other candidates. (4) Further improvement of the statistical signal detection capacity of the above climate variables by using 5-day averages points to the strong sensitivity of wine quality to climate anomalies in some key phenological phases of the investigated grapevine mixes. Enhanced spatial and temporal resolution provides a much better fit to the observed wine quality data. The study has been supported by the OTKA-113209 national project.
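The correlation screening described in point (1) can be sketched with synthetic series; the precipitation and sugar-content numbers below are invented for illustration and are not the Eger or Kompolt records, and the built-in significance test of scipy.stats.pearsonr stands in for the t-tests mentioned above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented 50-year series standing in for the station records: annual
# precipitation (mm) and must sugar content (weight %), negatively linked.
years = 50
precip = rng.normal(600.0, 80.0, years)
sugar = 20.0 - 0.01 * (precip - 600.0) + rng.normal(0.0, 0.5, years)

r, p = stats.pearsonr(precip, sugar)
print(f"r = {r:.2f}, p = {p:.2g}")   # a significant negative correlation
```

The same screening, repeated over the 132 station-based variable combinations or over gridded areal averages, is what the model-selection step of the study automates.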
Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An
2018-05-01
In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region, and the spatial distribution of PM10 concentration over the whole region was derived through the IDW interpolation method and a spatial grid statistical method using GIS. The spatial distribution variation across the districts of Beijing was analysed using a gridding model (1.5 km × 1.5 km cell resolution), and the 3-year spatial analysis of the PM10 concentration data, including its variation and spatial overlay, showed that the total PM10 concentration frequently exceeded the standard. Combining the concentration distribution of air pollutants with the spatial region using GIS is thus very important for optimizing the layout of the existing air monitoring stations.
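The grid statistical step, averaging station values into 1.5 km × 1.5 km cells, might be sketched as follows. The station coordinates and PM10 values are synthetic, and the 70 µg/m³ threshold is only a placeholder for the applicable standard.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly-mean PM10 observations at scattered monitoring sites.
x = rng.uniform(0, 15_000, 300)          # metres, toy city-scale extent
y = rng.uniform(0, 15_000, 300)
pm10 = rng.gamma(shape=4.0, scale=30.0, size=300)   # ug/m3

cell = 1_500.0                           # 1.5 km x 1.5 km grid cells
ix = (x // cell).astype(int)
iy = (y // cell).astype(int)
nx = ny = 10

# Grid statistics: mean concentration per cell via bincount accumulation.
flat = iy * nx + ix
sums = np.bincount(flat, weights=pm10, minlength=nx * ny)
counts = np.bincount(flat, minlength=nx * ny)
mean_pm10 = np.divide(sums, counts, out=np.full(nx * ny, np.nan),
                      where=counts > 0).reshape(ny, nx)

occupied = counts.reshape(ny, nx) > 0
exceed = np.mean(mean_pm10[occupied] > 70.0)   # fraction of cells over the limit
print(f"{exceed:.0%} of occupied cells exceed the placeholder standard")
```

In a GIS workflow the same per-cell means would be overlaid with district boundaries to study exceedance patterns year by year.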
NASA Astrophysics Data System (ADS)
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2015-04-01
Many problems in geodynamic applications may be described as viscous flow of chemically heterogeneous materials. Examples include subduction of compositionally stratified lithospheric plates, folding of rheologically layered rocks, and thermochemical convection of the Earth's mantle. The associated time scales are significantly shorter than that of chemical diffusion, which justifies a feature commonly found in geodynamic flow models: contact discontinuities. These are spatially sharp interfaces separating regions of different material properties. Numerical modelling of the advection of fields with sharp interfaces is challenging. Typical errors include numerical diffusion, which arises due to the repeated action of numerical interpolation. Mathematically, a material field can be represented by discrete indicator functions, whose values are interpreted as logical statements (e.g. whether or not a location is occupied by a given material). Interpolation of a discrete function boils down to determining where, among the intermediate node positions, one material ends and the other begins. The numerical diffusion error thus manifests itself as an erroneous location of the material interface. Lagrangian advection schemes are known to be less prone to numerical diffusion errors than their Eulerian counterparts. The tracer-ratio method, where Lagrangian markers are used to discretize the bulk of the materials filling the entire domain, is a popular example of such methods. The Stokes equation in this case is solved on a separate, static grid, and in order to do so, material properties must be interpolated from the markers to the grid. This involves the difficulty related to the interpolation of discrete fields. The material distribution, and thus material properties like viscosity and density, seen by the grid is polluted by the interpolation error, which enters the solution of the momentum equation.
Errors due to the uncertainty of the interface location can be avoided by using interface tracking methods for advection. The marker-chain method is one such approach: rather than discretizing the volume of each material, only their interface is discretized, by a connected set of markers. Together with the boundary of the domain, the marker chain constitutes closed polygon boundaries which enclose the regions spanned by each material. Communicating material properties to the static grid can then be done by determining which polygon each grid node (or integration point) falls into, eliminating the need for interpolation. In our chosen implementation, an efficient parallelized algorithm for point-in-polygon location is used, so this part of the code takes up only a small fraction of the CPU time spent on each time step, and allows for spatial resolution of the compositional field beyond what is practical with markers-in-bulk methods. An additional advantage of using marker chains for material advection is the possibility of using some of the chain's markers, or even edges, to generate a FEM grid. One can tailor a grid for obtaining a Stokes solution with optimal accuracy, while controlling the quality and size of its elements. Where the geometry of the interface allows, element edges may be aligned with it, which is known to significantly improve the quality of the Stokes solution compared to when the interface cuts through the elements (Moresi et al., 1996; Deubelbeiss and Kaus, 2008). In more geometrically complex interface regions, the grid may simply be refined to reduce the error. As materials deform in the course of a simulation, the interface may become stretched and entangled. The addition of new markers along the chain may be required in order to properly resolve the increasingly complicated geometry. Conversely, some markers may be removed from regions where they become clustered.
Such resampling of the interface requires additional computational effort (although small compared to other parts of the code), and introduces an error in the interface location (similar to numerical diffusion). Our implementation of this procedure, which utilizes an auxiliary high-resolution structured grid, allows a high degree of control over the magnitude of this error, although it cannot eliminate it completely. We will present our chosen numerical implementation of the markers-in-bulk and markers-in-chain methods outlined above, together with simulation results from specially designed benchmarks that demonstrate the relative successes and limitations of these methods.
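The point-in-polygon location at the heart of the marker-chain lookup can be done with even-odd ray casting. This is a minimal serial sketch, not the parallelized algorithm used by the authors; the square chain below is a stand-in for an actual material boundary.

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray-casting test: count crossings of a ray to +x infinity
    with the polygon edges; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                      # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:                            # crossing to the right
                inside = not inside
    return inside

# Material region bounded by a closed marker chain (here a unit square).
chain = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_in_polygon(0.5, 0.5, chain))   # grid node takes this material
print(point_in_polygon(1.5, 0.5, chain))   # grid node lies outside
```

Running this test for every grid node (or integration point) against each material polygon assigns properties to the static grid without any interpolation, as described above.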
GMI-IPS: Python Processing Software for Aircraft Campaigns
NASA Technical Reports Server (NTRS)
Damon, M. R.; Strode, S. A.; Steenrod, S. D.; Prather, M. J.
2018-01-01
NASA's Atmospheric Tomography Mission (ATom) seeks to understand the impact of anthropogenic air pollution on gases in the Earth's atmosphere. Four flight campaigns are being deployed on a seasonal basis to establish a continuous global-scale data set intended to improve the representation of chemically reactive gases in global atmospheric chemistry models. The Global Modeling Initiative (GMI) is creating chemical transport simulations on a global scale for each of the ATom flight campaigns. To meet the computational demands required to translate the GMI simulation data to grids associated with the flights from the ATom campaigns, the GMI ICARTT Processing Software (GMI-IPS) has been developed and is providing key functionality for data processing and analysis in this ongoing effort. The GMI-IPS is written in Python and provides computational kernels for data interpolation and visualization tasks on GMI simulation data. A key feature of the GMI-IPS is its ability to read ICARTT files, a text-based file format for airborne instrument data, and extract the required flight information that defines the regional and temporal grid parameters associated with an ATom flight. Perhaps most importantly, the GMI-IPS creates ICARTT files containing GMI simulated data, which are used in collaboration with ATom instrument teams and other modeling groups. The initial main task of the GMI-IPS is to interpolate GMI model data to the finer temporal resolution (1-10 seconds) of a given flight. The model data includes basic fields such as temperature and pressure, but the main focus of this effort is to provide species concentrations of chemical gases for ATom flights. The software, which uses parallel computation techniques for data-intensive tasks, linearly interpolates each of the model fields to the time resolution of the flight. The temporally interpolated data is then saved to disk, and is used to create additional derived quantities.
In order to translate the GMI model data to the spatial grid of the flight path as defined by the pressure, latitude, and longitude points at each flight time record, a weighted average is then calculated from the nearest neighbors in two dimensions (latitude, longitude). Using SciPy's RegularGridInterpolator, interpolation functions are generated for the GMI model grid and the calculated weighted averages. The flight path points are then extracted from the ATom ICARTT instrument file, and are sent to the multi-dimensional interpolating functions to generate GMI field quantities along the spatial path of the flight. The interpolated field quantities are then written to an ICARTT data file, which is stored for further manipulation. The GMI-IPS is aware of a generic ATom ICARTT header format, containing basic information for all flight campaigns. The GMI-IPS includes logic to edit metadata for the derived field quantities, as well as to modify the generic header data such as processing dates and associated instrument files. The ICARTT interpolated data is then appended to the modified header data, and the ICARTT processing is complete for the given flight and ready for collaboration. The output ICARTT data adheres to the ICARTT file format standards V1.1. The visualization component of the GMI-IPS uses Matplotlib extensively and has several functions ranging in complexity. First, it creates a model background curtain for the flight (time versus model eta levels) with the interpolated flight data superimposed on the curtain. Secondly, it creates a time-series plot of the interpolated flight data. Lastly, the visualization component creates averaged 2D model slices (longitude versus latitude) with overlaid flight track circles at key pressure levels. The GMI-IPS consists of a handful of classes and supporting functionality that have been generalized to be compatible with any ICARTT file that adheres to the base class definition.
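The along-track sampling step might be sketched as follows, using SciPy's RegularGridInterpolator on a toy two-dimensional (lat, lon) field in place of the GMI model data; the vertical (pressure) dimension and the preliminary weighted-average step are omitted, and the flight-path points are invented rather than read from an ICARTT file.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy model field on a regular (lat, lon) grid standing in for GMI output.
lat = np.linspace(-90, 90, 91)           # 2 degree spacing
lon = np.linspace(0, 360, 145)           # 2.5 degree spacing
field = np.sin(np.deg2rad(lat))[:, None] * np.cos(np.deg2rad(lon))[None, :]

interp = RegularGridInterpolator((lat, lon), field)

# Hypothetical flight-path points (one per ICARTT time record).
path = np.column_stack([np.linspace(10, 40, 5), np.linspace(120, 150, 5)])
along_track = interp(path)               # model values along the flight
print(along_track)
```

The real pipeline would evaluate one such interpolating function per model field and per time record before writing the results back out in ICARTT format.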
The base class represents a generic ICARTT entry, defining only a single time entry and 3D spatial positioning parameters. Other classes inherit from this base class: several classes for input ICARTT instrument files, which contain the necessary flight positioning information as a basis for data processing, as well as other classes for output ICARTT files, which contain the interpolated model data. Utility classes provide functionality for routine procedures such as comparing field names among ICARTT files, reading ICARTT entries from a data file and storing them in data structures, and returning a reduced spatial grid based on a collection of ICARTT entries. Although the GMI-IPS is compatible with GMI model data, it can be adapted with reasonable effort to any simulation that creates Hierarchical Data Format (HDF) files. The same can be said of its adaptability to ICARTT files outside of the context of the ATom mission. The GMI-IPS contains just under 30,000 lines of code, eight classes, and a dozen drivers and utility programs. It is maintained with Git source code management and has been used to deliver processed GMI model data for the ATom campaigns that have taken place to date.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and direction of the "virtual" horizontal stratification, is defined for each 1D data set. For EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position.
B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoint of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values at the centres of the mesh cells. The new definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by defining the geophysical model space on a voxel (hydro)geological grid. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, like resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points. In fact, the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council for Strategic Research under grant number DSF 11-116763.
NASA Astrophysics Data System (ADS)
Wrona, Elizabeth; Rowlandson, Tracy L.; Nambiar, Manoj; Berg, Aaron A.; Colliander, Andreas; Marsh, Philip
2017-05-01
This study examines the Soil Moisture Active Passive soil moisture product on the Equal Area Scalable Earth-2 (EASE-2) 36 km Global cylindrical and North Polar azimuthal grids relative to two in situ soil moisture monitoring networks that were installed in 2015 and 2016. Results indicate that there is no relationship between the Soil Moisture Active Passive (SMAP) Level-2 passive soil moisture product and the upscaled in situ measurements. Additionally, there is very low correlation between brightness temperature modeled using the Community Microwave Emission Model and the Level-1C SMAP brightness temperature interpolated to the EASE-2 Global grid; however, there is a much stronger relationship for the brightness temperature measurements interpolated to the North Polar grid, suggesting that the soil moisture product could be improved by interpolation on the North Polar grid.
Compact cell-centered discretization stencils at fine-coarse block structured grid interfaces
NASA Astrophysics Data System (ADS)
Pletzer, Alexander; Jamroz, Ben; Crockett, Robert; Sides, Scott
2014-03-01
Different strategies for coupling fine-coarse grid patches are explored in the context of the adaptive mesh refinement (AMR) method. We show that applying linear interpolation to fill in the fine grid ghost values can produce a finite volume stencil of comparable accuracy to quadratic interpolation provided the cell volumes are adjusted. The volume of fine cells expands whereas the volume of neighboring coarse cells contracts. The amount by which the cells contract/expand depends on whether the interface is a face, an edge, or a corner. It is shown that quadratic or better interpolation is required when the conductivity is spatially varying, anisotropic, the refinement ratio is other than two, or when the fine-coarse interface is concave.
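A one-dimensional sketch of filling a fine ghost value by linear interpolation from the coarse side is shown below. The cell spacings are illustrative, and the volume-adjustment step that the paper pairs with linear interpolation is not reproduced here; the sketch only demonstrates that a linear fill reproduces linear fields exactly, which is the baseline the volume correction builds on.

```python
import numpy as np

# 1D sketch of a fine-coarse AMR interface with refinement ratio 2:
# coarse cells (h = 0.2) to the left of x = 0, fine cells (h = 0.1) right.
h_c = 0.2
coarse_centers = np.array([-0.3, -0.1])   # last two coarse cell centres
ghost_center = -0.05                      # fine ghost cell centre, coarse side

def linear_ghost(u_coarse):
    """Fill the fine ghost value by linear interpolation of the coarse data."""
    slope = (u_coarse[1] - u_coarse[0]) / h_c
    return u_coarse[1] + slope * (ghost_center - coarse_centers[1])

f = lambda x: 3.0 * x + 1.0               # any linear field is reproduced exactly
print(linear_ghost(f(coarse_centers)), f(ghost_center))
```

For a quadratic field the same fill leaves an O(h²) residual at the ghost centre, which is the error the cell-volume contraction/expansion described above compensates for in the flux stencil.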
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the features used and to account for neighborhood contextual information. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retaining DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark of the ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large LiDAR datasets.
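A plausible version of a normal-variance point feature, not necessarily the authors' exact formulation, estimates per-point normals by PCA over the k nearest neighbours and compares the spread of the normals: a planar roof yields coherent normals (low variance), while vegetation-like scatter yields incoherent ones (high variance). The point clouds and the choice k=10 below are illustrative.

```python
import numpy as np

def normals_pca(points, k=10):
    """Per-point unit normals from PCA of the k nearest neighbours:
    the singular vector of the smallest singular value of the
    centred neighbourhood is the local plane normal."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        q = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n   # orient upward consistently
    return normals

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, (200, 2))
roof = np.c_[xy, 0.1 * xy[:, 0]]              # planar roof: coherent normals
tree = np.c_[xy, rng.uniform(0, 3, 200)]      # vegetation-like: scattered normals

for name, pts in [("roof", roof), ("tree", tree)]:
    print(name, float(np.var(normals_pca(pts), axis=0).sum()))
```

Thresholding this per-point variance (or feeding it into the graph-cuts energy) is one way such a feature could separate buildings from vegetation.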
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
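The hyperbolic tangent stretching mentioned above can be written compactly. This sketch clusters points symmetrically toward both ends of a unit interval (one-sided variants are analogous), and the parameter name beta is our choice, not TDIGG's.

```python
import numpy as np

def tanh_stretch(n, beta=2.0):
    """Hyperbolic-tangent point clustering on [0, 1]: larger beta packs
    more grid points toward both ends of the interval."""
    s = np.linspace(-1.0, 1.0, n)
    return 0.5 * (1.0 + np.tanh(beta * s) / np.tanh(beta))

x = tanh_stretch(11)
print(np.round(x, 3))   # end spacings are finer than the middle spacing
```

The resulting non-uniform distribution would then serve as the parameterization for the algebraic (Lagrangian/Hermite/Bezier) interpolation or as the initial grid for elliptic smoothing.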
Visualizing geoelectric - Hydrogeological parameters of Fadak farm at Najaf Ashraf by using 2D spa
NASA Astrophysics Data System (ADS)
Al-Khafaji, Wadhah Mahmood Shakir; Al-Dabbagh, Hayder Abdul Zahra
2016-12-01
A geophysical survey was carried out to produce gridded electrical resistivity data from 23 Schlumberger Vertical Electrical Sounding (VES) points distributed across the area of Fadak farm in Najaf Ashraf province, Iraq. The current research deals with the application of six interpolation methods used to delineate subsurface groundwater aquifer properties, one example being the delineation of zones of high and low groundwater hydraulic conductivity (K). Such methods could be useful in predicting high-(K) zones and groundwater flow directions within the studied aquifer. The interpolation methods were helpful in predicting some hydrogeological and structural characteristics of the aquifer, and the results yielded some important conclusions for future groundwater development.
Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex
2014-08-01
A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants into groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of the mapping in a way suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations.
The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli shoreline. The implications for aquifer management are discussed.
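The local inverse distance weighting with circular inclusion zones described above can be sketched as follows (an illustrative Python sketch, not the authors' implementation; `radius` and `power` are assumed parameter names):

```python
import math

def idw_local(x, y, pts, radius, power=2.0):
    """Inverse distance weighting restricted to a circular inclusion
    zone: only observations within `radius` of the grid point
    contribute.  Returns None when no observation falls inside the
    zone, leaving the grid point unassigned -- the accuracy/coverage
    trade-off noted in the abstract.  pts is a list of
    (xi, yi, value) tuples."""
    num = den = 0.0
    for xi, yi, v in pts:
        d = math.hypot(x - xi, y - yi)
        if d > radius:
            continue            # outside the inclusion zone
        if d < 1e-12:
            return v            # grid point coincides with an observation
        w = d ** -power
        num += w * v
        den += w
    return num / den if den else None

obs = [(0, 0, 1.0), (1, 0, 3.0), (5, 5, 100.0)]
idw_local(0.5, 0.0, obs, radius=2.0)   # distant outlier excluded
idw_local(9.0, 9.0, obs, radius=2.0)   # no data in zone -> None
```

An elliptical zone would replace the distance test with an anisotropic norm; shrinking `radius` raises accuracy but leaves more grid points without a value.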
Integrating TITAN2D Geophysical Mass Flow Model with GIS
NASA Astrophysics Data System (ADS)
Namikawa, L. M.; Renschler, C.
2005-12-01
TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, using a GIS to implement simulations of complex, physically based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. A GIS developed for general use therefore cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work describes the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every computed cell. Simulation results are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensing image. The data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing the data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) as well as slope and curvature derived from the elevation data.
Among several existing methods for estimating slope and curvature from elevation, the selected one is a third-order finite difference method, which has been shown to perform better than, or differ minimally from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells), confirming the better performance. TITAN2D uses an adaptive mesh, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions by applying bilinear interpolation when elevation, slope, and curvature are required at a resolution higher (smaller cell size) than the original, and a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of visualization allows use of the nearest-neighbor method for texture as well. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers; its performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
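The bilinear resampling described in the last sentences can be sketched as follows (a minimal Python sketch under the stated regular-grid assumption; function and parameter names are illustrative):

```python
def bilinear(x, y, x0, y0, dx, dy, grid):
    """Bilinear interpolation on a regular rectangular grid: the value
    at (x, y) is the area-weighted average of the four surrounding
    cell-centre values, as used when resampling elevation, slope and
    curvature to a finer resolution.  grid[j][i] holds the value at
    (x0 + i*dx, y0 + j*dy)."""
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    i = min(int(fx), len(grid[0]) - 2)   # clamp so i+1 stays in range
    j = min(int(fy), len(grid) - 2)
    tx, ty = fx - i, fy - j              # fractional position in the cell
    return ((1 - tx) * (1 - ty) * grid[j][i]
            + tx * (1 - ty) * grid[j][i + 1]
            + (1 - tx) * ty * grid[j + 1][i]
            + tx * ty * grid[j + 1][i + 1])

g = [[0.0, 1.0],
     [2.0, 3.0]]
value = bilinear(0.5, 0.5, 0.0, 0.0, 1.0, 1.0, g)   # 1.5, the cell centre
```

For categorical material codes the same lookup would simply return the nearest `grid[j][i]` value, since averaging category labels is meaningless.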
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1991-01-01
A procedure is studied for generating three-dimensional grids for advanced turbofan engine fan section geometries. The procedure constructs a discrete mesh about engine sections containing the fan stage, an arbitrary number of axisymmetric radial flow splitters, a booster stage, and a bifurcated core/bypass flow duct with guide vanes. The mesh is an H-type grid system, with points distributed by a transfinite interpolation scheme and user-specified axial and radial spacing. Elliptic smoothing of the grid in the meridional plane is a post-process option. The grid generation scheme is consistent with aerodynamic analyses utilizing the average-passage equation system developed by Dr. John Adamczyk of NASA Lewis. This flow solution scheme requires a series of blade-specific grids, each having a common axisymmetric mesh but varying in the circumferential direction according to the geometry of the specific blade row.
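Transfinite interpolation, the distribution scheme named above, blends the four boundary curves of a block into an interior mesh. A minimal two-dimensional Python sketch (the boundary-curve parameterization and names are illustrative, not from this procedure) is:

```python
def tfi(bottom, top, left, right, n, m):
    """2-D transfinite interpolation: blend four boundary curves into
    an (n+1) x (m+1) structured mesh -- the algebraic step applied
    before optional elliptic smoothing.  Each curve maps t in [0, 1]
    to an (x, y) pair; corners must match where curves meet."""
    grid = []
    for j in range(m + 1):
        eta = j / m
        row = []
        for i in range(n + 1):
            xi = i / n
            pt = []
            for k in (0, 1):    # x and y components
                u = ((1 - eta) * bottom(xi)[k] + eta * top(xi)[k]
                     + (1 - xi) * left(eta)[k] + xi * right(eta)[k]
                     - (1 - xi) * (1 - eta) * bottom(0)[k]
                     - xi * (1 - eta) * bottom(1)[k]
                     - (1 - xi) * eta * top(0)[k]
                     - xi * eta * top(1)[k])
                pt.append(u)
            row.append(tuple(pt))
        grid.append(row)
    return grid

# unit-square boundaries reproduce a uniform Cartesian mesh
bottom = lambda t: (t, 0.0)
top = lambda t: (t, 1.0)
left = lambda t: (0.0, t)
right = lambda t: (1.0, t)
mesh = tfi(bottom, top, left, right, 4, 4)
```

Non-uniform spacing enters by replacing the uniform `xi`, `eta` parameters with a stretching function; curved splitter or duct surfaces enter as curved boundary functions.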
NASA Technical Reports Server (NTRS)
Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald
1990-01-01
A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method. In this method, point and element generation is accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step, Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprised of the Space Shuttle orbiter, the solid rocket boosters, and the external tank. Flow solutions were obtained on various configurations in subsonic, transonic, and supersonic flow regimes.
Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition
NASA Technical Reports Server (NTRS)
Kenwright, David; Lane, David
1995-01-01
An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
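The analytic point location enabled by tetrahedral decomposition rests on barycentric coordinates, which are ratios of determinants and therefore need no Newton-Raphson iteration. A minimal Python sketch (not the authors' code; names are illustrative) is:

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def barycentric(p, tet):
    """Barycentric coordinates of point p in tetrahedron tet (four
    vertices).  All four weights lying in [0, 1] means p is inside --
    the analytic containment test that replaces iterative point
    location in hexahedral cells."""
    a, b, c, d = tet
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    vol = det3(sub(b, a), sub(c, a), sub(d, a))
    w1 = det3(sub(p, a), sub(c, a), sub(d, a)) / vol
    w2 = det3(sub(b, a), sub(p, a), sub(d, a)) / vol
    w3 = det3(sub(b, a), sub(c, a), sub(p, a)) / vol
    return (1.0 - w1 - w2 - w3, w1, w2, w3)

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
w = barycentric((0.25, 0.25, 0.25), tet)   # (0.25, 0.25, 0.25, 0.25)
```

Velocity interpolation then weighs the four vertex velocities with these same coordinates, so location and interpolation come from one computation.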
Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.
The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking-triangle search for the containing triangle, and finally the NNI interpolation.
The Canadian Precipitation Analysis (CaPA): Evaluation of the statistical interpolation scheme
NASA Astrophysics Data System (ADS)
Evans, Andrea; Rasmussen, Peter; Fortin, Vincent
2013-04-01
CaPA (Canadian Precipitation Analysis) is a data assimilation system which employs statistical interpolation to combine observed precipitation with gridded precipitation fields produced by Environment Canada's Global Environmental Multiscale (GEM) climate model into a final gridded precipitation analysis. Precipitation is important in many fields and applications, including agricultural water management projects, flood control programs, and hydroelectric power generation planning. Precipitation is a key input to hydrological models, and there is a desire to have access to the best available information about precipitation in time and space. The principal goal of CaPA is to produce this type of information. In order to perform the necessary statistical interpolation, CaPA requires the estimation of a semi-variogram. This semi-variogram is used to describe the spatial correlations between precipitation innovations, defined as the observed precipitation amounts minus the amounts forecast by GEM at the observation locations. Currently, CaPA uses a single isotropic variogram across the entire analysis domain. The present project investigates the implications of this choice by first conducting a basic variographic analysis of precipitation innovation data across the Canadian prairies, with specific interest in identifying and quantifying potential anisotropy within the domain. This focus is further expanded by identifying the effect of storm type on the variogram. The ultimate goal of the variographic analysis is to develop improved semi-variograms for CaPA that better capture the spatial complexities of precipitation over the Canadian prairies. CaPA presently applies a Box-Cox data transformation to both the observations and the GEM data, prior to the calculation of the innovations. The data transformation is necessary to satisfy the normal distribution assumption, but introduces a significant bias.
The second part of the investigation aims at devising a bias correction scheme based on a moving-window averaging technique. For both the variogram and bias correction components of this investigation, a series of trial runs are conducted to evaluate the impact of these changes on the resulting CaPA precipitation analyses.
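The variographic analysis described above starts from an empirical semi-variogram of the precipitation innovations. A minimal Python sketch of that computation (the binning scheme and names are illustrative, not CaPA's) is:

```python
import math
from collections import defaultdict

def empirical_semivariogram(pts, lag_width):
    """Empirical semi-variogram: for each distance bin,
    gamma(h) = mean of 0.5 * (z_i - z_j)**2 over all point pairs whose
    separation falls in the bin.  pts is a list of (x, y, innovation)
    tuples; lag_width is the bin size.  Returns {bin centre: gamma}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            xi, yi, zi = pts[i]
            xj, yj, zj = pts[j]
            h = math.hypot(xi - xj, yi - yj)
            b = int(h / lag_width)
            sums[b] += 0.5 * (zi - zj) ** 2
            counts[b] += 1
    return {(b + 0.5) * lag_width: sums[b] / counts[b]
            for b in sorted(counts)}

pts = [(0, 0, 1.0), (1, 0, 2.0), (2, 0, 4.0)]
gamma = empirical_semivariogram(pts, 1.0)   # {1.5: 1.25, 2.5: 4.5}
```

Anisotropy is probed by binning pairs by direction as well as distance and comparing the resulting directional semi-variograms; a single isotropic model is the special case where all directions share one curve.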
New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda
2014-05-01
The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptations in the software tool (data format, parameter determinations, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in coastal areas.
These currents can be constructed from the bathymetry or extracted from a HF radar located in the Balearic Sea.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the interpolation methods were not found to produce statistically different relative peak areas from each other; nevertheless, their performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data.
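The first of the five methods, linear interpolation followed by cross correlation, can be sketched as follows (an illustrative Python sketch, not the authors' implementation; integer-lag scoring on the densified signal is a simplification):

```python
def upsample_linear(y, factor):
    """Linearly interpolate factor - 1 points between each pair of
    samples, densifying the under-sampled first dimension."""
    out = []
    for i in range(len(y) - 1):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * y[i] + t * y[i + 1])
    out.append(y[-1])
    return out

def best_shift(a, b, max_lag):
    """Integer lag of b that maximizes the (unnormalized) cross
    correlation with a -- the shift used to align retention times
    between injections."""
    def score(lag):
        return sum(a[i] * b[i - lag]
                   for i in range(max(lag, 0), min(len(a), len(b) + lag)))
    return max(range(-max_lag, max_lag + 1), key=score)

dense = upsample_linear([0.0, 2.0, 0.0], 2)   # [0.0, 1.0, 2.0, 1.0, 0.0]
```

The other methods differ only in how the dense signal is produced (Hermite, spline, zero-filling in the Fourier domain, or a fitted Gaussian peak shape); the alignment step is shared.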
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) for time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
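The structure that all three approaches exploit, that each grid node's estimate is independent of every other node, can be sketched as follows (a Python sketch using threads as a stand-in for MPI ranks, MapReduce tasks or GPU threads, and simple inverse distance weighting as a stand-in for the kriging solve):

```python
from concurrent.futures import ThreadPoolExecutor

OBS = [(0.0, 0.0, 1.0), (4.0, 0.0, 5.0), (0.0, 4.0, 9.0)]

def idw_point(p, power=2.0):
    """Estimate at one grid node (a simplified stand-in for the
    per-node kriging system; each node depends only on OBS)."""
    x, y = p
    num = den = 0.0
    for xi, yi, v in OBS:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def grid_parallel(nodes, workers=4):
    """Embarrassingly parallel gridding: the node list is split
    across workers, the same decomposition the MPI, MapReduce and
    GPGPU implementations use at different granularities."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(idw_point, nodes))

nodes = [(x * 0.5, y * 0.5) for x in range(9) for y in range(9)]
surface = grid_parallel(nodes)
```

Because the tasks share no state, the parallel result is bit-identical to the sequential one; the approaches differ mainly in communication cost and how the observation data are distributed to workers.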
A finite volume Fokker-Planck collision operator in constants-of-motion coordinates
NASA Astrophysics Data System (ADS)
Xiong, Z.; Xu, X. Q.; Cohen, B. I.; Cohen, R.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G.; Nevins, W. M.; Rognlien, T.
2006-04-01
TEMPEST is a 5D gyrokinetic continuum code for edge plasmas. Constants of motion, namely, the total energy E and the magnetic moment μ, are chosen as coordinates because of their advantage in minimizing numerical diffusion in advection operators. Most existing collision operators are written in other coordinates; using them by interpolating is shown to be less satisfactory in maintaining overall numerical accuracy and conservation. Here we develop a Fokker-Planck collision operator directly in (E, μ) space using a finite volume approach. The (E, μ) grid is Cartesian, and the turning point boundary represents a straight line cutting through the grid that separates the physical and non-physical zones. The resulting cut-cells are treated by a cell-merging technique to ensure complete particle conservation. A two-dimensional fourth-order reconstruction scheme is devised to achieve good numerical accuracy with a modest number of grid points. The new collision operator will be benchmarked by numerical examples.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
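The centroid-and-influence construction that GIE builds on can be sketched as follows (an illustrative Python sketch, not the authors' implementation; using the influence radius as a Gaussian kernel bandwidth is an assumption of this sketch):

```python
import math

def centroid_and_radius(occurrences):
    """Centroid of one species' occurrence points and its area of
    influence: the distance from the centroid to the farthest
    occurrence, following the GIE definition above."""
    n = len(occurrences)
    cx = sum(x for x, _ in occurrences) / n
    cy = sum(y for _, y in occurrences) / n
    r = max(math.hypot(x - cx, y - cy) for x, y in occurrences)
    return (cx, cy), r

def kernel_support(p, species_occurrences):
    """Grid-free overlap surface: one Gaussian kernel per species,
    centred on its centroid with bandwidth equal to its influence
    radius.  High values mark candidate areas of endemism.  Species
    with a single occurrence (radius 0) are skipped for simplicity."""
    s = 0.0
    for occ in species_occurrences:
        (cx, cy), r = centroid_and_radius(occ)
        if r == 0.0:
            continue
        d = math.hypot(p[0] - cx, p[1] - cy)
        s += math.exp(-0.5 * (d / r) ** 2)
    return s
```

Because the surface is evaluated at arbitrary points rather than accumulated into cells, the delimited areas have the fuzzy, grid-independent edges noted above.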
Numerical simulation of aerothermal loads in hypersonic engine inlets due to shock impingement
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.
1992-01-01
The effect of shock impingement on an axial corner simulating the inlet of a hypersonic vehicle engine is modeled using a finite-difference procedure. A three-dimensional dynamic grid adaptation procedure is utilized to move the grids to regions with strong flow gradients. The adaptation procedure uses a grid relocation stencil that is valid at both the interior and boundary points of the finite-difference grid. A linear combination of spatial derivatives of specific flow variables, calculated with finite-element interpolation functions, is used as the adaptation measure. This computational procedure is used to study laminar and turbulent Mach 6 flows in the axial corner. The description of the flow physics and qualitative measures of the heat transfer distributions on cowl and strut surfaces obtained from the analysis are compared with experimental observations. Conclusions are drawn regarding the capability of the numerical scheme for enhanced modeling of high-speed compressible flows.
NASA Astrophysics Data System (ADS)
Hiebl, Johann; Frei, Christoph
2018-04-01
Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back to 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift, utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts in long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation, as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations.
Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
NASA Astrophysics Data System (ADS)
Lussana, Cristian; Saloranta, Tuomo; Skaugen, Thomas; Magnusson, Jan; Tveito, Ole Einar; Andersen, Jess
2018-02-01
Conventional gridded climate datasets based only on observations are widely used in the atmospheric sciences; our focus in this paper is on climate and hydrology. On the Norwegian mainland, seNorge2 provides high-resolution fields of daily total precipitation for applications requiring long-term datasets at regional or national level, where the challenge is to simulate small-scale processes often taking place in complex terrain. The dataset constitutes a valuable meteorological input for snow and hydrological simulations; it is updated daily and presented on a high-resolution grid (1 km grid spacing). The climate archive goes back to 1957. The spatial interpolation scheme builds upon classical methods, such as optimal interpolation and successive-correction schemes. An original approach based on (spatial) scale-separation concepts has been implemented which uses geographical coordinates and elevation as complementary information in the interpolation. seNorge2 daily precipitation fields represent local precipitation features at spatial scales of a few kilometers, depending on the station network density. In the surroundings of a station or in dense station areas, the predictions are quite accurate even for intense precipitation. For most of the grid points, the performances are comparable to or better than a state-of-the-art pan-European dataset (E-OBS), because of the higher effective resolution of seNorge2. However, in very data-sparse areas, such as in the mountainous region of southern Norway, seNorge2 underestimates precipitation because it does not make use of enough geographical information to compensate for the lack of observations. The evaluation of seNorge2 as the meteorological forcing for the seNorge snow model and the DDD (Distance Distribution Dynamics) rainfall-runoff model shows that both models have been able to make profitable use of seNorge2, partly because of the automatic calibration procedure they incorporate for precipitation.
The seNorge2 dataset 1957-2015 is available at https://doi.org/10.5281/zenodo.845733. Daily updates from 2015 onwards are available at http://thredds.met.no/thredds/catalog/metusers/senorge2/seNorge2/provisional_archive/PREC1d/gridded_dataset/catalog.html.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus-based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
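The traditional hierarchical-surplus quantity that the adjoint-based estimates are compared against can be sketched in one dimension as follows (a minimal Python sketch; homogeneous boundary values f(0) = f(1) = 0 are assumed for brevity):

```python
def hat(x, centre, h):
    """Piecewise-linear hat basis function with support [centre-h, centre+h]."""
    return max(0.0, 1.0 - abs(x - centre) / h)

def build_hierarchical(f, max_level):
    """Hierarchical hat-basis interpolant of f on (0, 1).  The surplus
    at each new node is f(node) minus the current interpolant there --
    the quantity classical surplus-based refinement thresholds.
    Returns ([(centre, h, surplus)], interpolant)."""
    terms = []
    def interp(x):
        return sum(s * hat(x, c, h) for c, h, s in terms)
    for level in range(1, max_level + 1):
        h = 2.0 ** -level
        for k in range(1, 2 ** level, 2):      # odd multiples of h
            c = k * h
            terms.append((c, h, f(c) - interp(c)))
    return terms, interp

f = lambda x: 4.0 * x * (1.0 - x)
terms, interp = build_hierarchical(f, 4)
# surpluses shrink by ~4x per level for this smooth f
```

Classical adaptivity refines where the surplus magnitude is large; the paper's point is that adjoint-based a posteriori estimates of the quantity of interest can drive refinement (and correct sampled values) more effectively than the surplus alone.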
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of the studies have adopted interpolation procedures including kriging, moving average or Inverse Distance Weighting (IDW), and nearest point without the necessary recourse to their uncertainties. This study compared the results of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in Ere stream at Ayepe-Olode, southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). The data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill and range) using PAleontological STatistics (PAST3), before the mean values were interpolated in the selected GIS software using each of kriging (simple), moving average and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary over 120.1 - 219.5 µScm-1 with kriging, it varied over 105.6 - 220.0 µScm-1 and 135.0 - 173.9 µScm-1 with nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget value was assumed to be 0.00, when it was rarely so (Figure 3). The study concluded that the choice of interpolation procedure may affect decisions and conclusions based on modelling inferences.
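The core finding, that different point interpolation methods disagree at unsampled locations, can be illustrated with a minimal sketch. The station coordinates and values below are hypothetical, not the Ere stream data; only the IDW and nearest-point formulas are standard.

```python
import numpy as np

# Hypothetical sample: a quality parameter measured at 4 stations (x, y).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([120.1, 219.5, 150.0, 180.0])

def idw(p, pts, vals, power=2):
    """Inverse distance weighting: weighted mean with weights 1/d^power."""
    d = np.linalg.norm(pts - p, axis=1)
    if np.any(d == 0):                      # exact hit on a station
        return vals[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * vals) / np.sum(w)

def nearest(p, pts, vals):
    """Nearest-point interpolation: value of the closest station."""
    return vals[np.argmin(np.linalg.norm(pts - p, axis=1))]

p = np.array([0.2, 0.2])                    # unsampled location
v_idw, v_nn = idw(p, pts, vals), nearest(p, pts, vals)
```

Nearest-point returns the closest station's value unchanged, while IDW blends all stations, so the two methods generally disagree at the same point, which is the effect the study quantifies.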
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. 
The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
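The abstract does not give the direct CVE formula, but a standard leave-one-out identity for covariance-based prediction (a Dubrule-type result, which may differ in detail from the authors' derivation) captures the idea: the whole CVE vector comes from a single system solve instead of n separate deletions. A sketch, with a hypothetical Gaussian signal covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 12)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
sig2 = 0.05**2                                  # noise variance

def cov(a, b, ell=1.5):
    """Gaussian signal covariance, a hypothetical model choice."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = cov(x, x) + sig2 * np.eye(x.size)           # observation covariance
Kinv = np.linalg.inv(K)

# Direct vector of cross-validation errors:
# e_i = [K^{-1} y]_i / [K^{-1}]_{ii}, no per-point refitting needed.
e_direct = (Kinv @ y) / np.diag(Kinv)

# Element-wise check: delete point i, predict the signal at x_i from the rest.
e_loop = np.empty_like(y)
for i in range(x.size):
    m = np.arange(x.size) != i
    Ki = cov(x[m], x[m]) + sig2 * np.eye(int(m.sum()))
    ci = cov(x[i:i+1], x[m])[0]
    e_loop[i] = y[i] - ci @ np.linalg.solve(Ki, y[m])
```

The element-wise loop costs n solves of (n-1)-sized systems; the direct formula costs one inversion, which is the speedup the paper exploits.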
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
NASA Astrophysics Data System (ADS)
Oriani, F.; Stisen, S.
2016-12-01
Rainfall amount is one of the most sensitive inputs to distributed hydrological models. Its spatial representation is of primary importance to correctly study the uncertainty of basin recharge and its propagation to the surface and underground circulation. We consider here the 10-km-grid rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network in recent years (from approximately 500 stations in the period 1996-2006 to 250 in the period 2007-2014), the grid rainfall product, based on the interpolation of these data, is much less reliable. Consequently, the related hydrological model shows significantly lower prediction power. To give a better estimation of spatial rainfall at grid points far from ground measurements, we use the direct sampling technique (DS) [1], belonging to the family of multiple-point geostatistics. DS, already applied to rainfall and spatial variable estimation [2, 3], simulates a grid value by sampling a training data set where a similar data neighborhood occurs. In this way, complex statistical relations are preserved by generating spatial patterns similar to the ones found in the training data set. Using the reliable grid product from the period 1996-2006 as the training data set, we first test the technique by simulating part of this data set; we then apply the technique to the grid product of the period 2007-2014 and subsequently analyze the uncertainty propagation to the hydrological model. We show that DS can improve the reliability of the rainfall product by generating more realistic rainfall patterns, with a significant repercussion on the hydrological model. The reduction of rain gauge networks is a global phenomenon which has huge implications for hydrological model performance and the uncertainty assessment of water resources. 
Therefore, the presented methodology can potentially be used in many regions where historical records can act as training data. [1] G. Mariethoz et al. (2010), Water Resour. Res., 10.1029/2008WR007621. [2] F. Oriani et al. (2014), Hydrol. Earth Syst. Sc., 10.5194/hessd-11-3213-2014. [3] G. Mariethoz et al. (2012), Water Resour. Res., 10.1029/2012WR012115.
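The DS idea of sampling a training data set where a similar neighborhood occurs can be shown in a deliberately minimal 1-D sketch. This is an illustration of the principle only, not the Mariethoz et al. algorithm: the neighborhood shape, acceptance threshold, and scan strategy below are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def direct_sample(ti, series, n_tries=200, n_neigh=4, tol=0.05):
    """Fill NaN gaps in `series` by direct sampling: randomly scan the
    training image `ti` and copy the value whose preceding neighbourhood
    best matches the local conditioning data (1-D sketch of the DS idea)."""
    out = series.copy()
    for i in range(len(out)):
        if not np.isnan(out[i]):
            continue
        lo = max(0, i - n_neigh)
        pat = out[lo:i]                          # conditioning neighbourhood
        best_j = int(rng.integers(n_neigh, len(ti)))  # fallback location
        best_d = np.inf
        for _ in range(n_tries):                 # random scan of the TI
            j = int(rng.integers(len(pat) + 1, len(ti)))
            d = float(np.mean(np.abs(ti[j - len(pat):j] - pat))) if len(pat) else 0.0
            if d < best_d:
                best_j, best_d = j, d
            if best_d <= tol:                    # accept first good match
                break
        out[i] = ti[best_j]
    return out

ti = np.sin(np.linspace(0.0, 20.0, 400))     # dense historical record (TI)
series = np.sin(np.linspace(0.0, 20.0, 400))
series[150:160] = np.nan                     # gap far from "gauges"
filled = direct_sample(ti, series)
```

Because values are copied from the training image rather than averaged, the simulated gap inherits realistic patterns from the historical record, which is the property the paper exploits for rainfall fields.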
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A fast and versatile procedure for algebraically generating boundary-conforming computational grids for use with finite-volume Euler flow solvers is presented. A semi-analytic homotopic procedure is used to generate the grids. Grids generated in two-dimensional planes are stacked to produce quasi-three-dimensional grid systems. The body surface and outer boundary are described in terms of surface parameters. An interpolation scheme is used to blend between the body surface and the outer boundary in order to determine the field points. The method, albeit developed for analytically generated body geometries, is equally applicable to other classes of geometries. The method can be used for both internal and external flow configurations, the only constraint being that the body geometries be specified in two-dimensional cross-sections stationed along the longitudinal axis of the configuration. Techniques for controlling various grid parameters, e.g., clustering and orthogonality, are described. Techniques for treating problems arising in algebraic grid generation for geometries with sharp corners are addressed. A set of representative grid systems generated by this method is included. Results of flow computations using these grids are presented to validate the effectiveness of the method.
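The blending step described above can be sketched with the simplest algebraic homotopy, a straight-line blend between the body contour and the outer boundary at matching surface parameters. The ellipse/circle geometry and the linear blend are illustrative assumptions, not Moitra's actual scheme.

```python
import numpy as np

def algebraic_grid(body, outer, n_radial=5):
    """Blend a body contour into an outer boundary by a linear homotopy:
    r(s, t) = (1 - t) * body(s) + t * outer(s), t in [0, 1].
    `body`, `outer`: (n_pts, 2) arrays sampled at matching surface params."""
    t = np.linspace(0.0, 1.0, n_radial)[:, None, None]
    return (1.0 - t) * body[None] + t * outer[None]   # (n_radial, n_pts, 2)

s = np.linspace(0.0, 2.0 * np.pi, 33)                    # surface parameter
body = np.column_stack([np.cos(s), 0.5 * np.sin(s)])     # body cross-section
outer = np.column_stack([4.0 * np.cos(s), 4.0 * np.sin(s)])  # outer boundary
grid = algebraic_grid(body, outer)
```

Clustering control of the kind the abstract mentions amounts to replacing the uniform parameter t with a stretching function (for example t**2 to cluster radial points near the body) before blending.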
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauge data on their own as input carries great uncertainty regarding runoff estimation, especially when the area is large and the rainfall is measured and recorded at irregularly spaced gauging stations. Hence spatial interpolation is the key to obtaining a continuous and orderly rainfall distribution at unknown points as input to the rainfall-runoff processes for distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river. Thus, a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using the inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as the average rainfall distribution. Sensitivity analysis for the distance and elevation parameters was conducted to see the variation produced. The accuracy of these interpolated datasets was examined using cross-validation assessment.
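The two interpolators can be sketched side by side. The IDW formula is standard; the IDEW variant below, which also penalizes elevation difference, is one plausible formulation and not necessarily the exact one used in the study, and all station data are hypothetical.

```python
import numpy as np

def idw(xy, z, grid_xy, p=2.0):
    """Inverse-distance weighting of station values z onto grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9)**p
    return (w @ z) / w.sum(axis=1)

def idew(xy, elev, z, grid_xy, grid_elev, p=2.0, q=1.0):
    """Inverse-distance *and* elevation weighting: stations close both
    horizontally and in elevation get larger weights (a sketch only;
    operational IDEW formulations vary)."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    dz = np.abs(grid_elev[:, None] - elev[None, :])
    w = 1.0 / (np.maximum(d, 1e-9)**p * np.maximum(dz, 1.0)**q)
    return (w @ z) / w.sum(axis=1)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # gauge coords (km)
elev = np.array([50.0, 400.0, 60.0])                   # gauge elevations (m)
z = np.array([12.0, 30.0, 14.0])                       # daily rainfall (mm)
gp = np.array([[5.0, 5.0]])                            # target grid point
r_idw = idw(xy, z, gp)
r_idew = idew(xy, elev, z, gp, np.array([80.0]))
```

At the target point all three gauges are equidistant, so IDW returns their plain mean; IDEW down-weights the 400 m mountain gauge, illustrating the elevation sensitivity the study analyzes.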
Direct Replacement of Arbitrary Grid-Overlapping by Non-Structured Grid
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing
1994-01-01
A new approach that uses a nonstructured mesh to replace the arbitrarily overlapped structured regions of embedded grids is presented. The present methodology uses the Chimera composite overlapping mesh system, so that the physical domain of the flowfield is subdivided into regions which can accommodate easily generated grids for complex configurations. In addition, a Delaunay triangulation technique generates a nonstructured triangular mesh which wraps over the interconnecting region of the embedded grids. The present approach, termed the DRAGON grid, is designed to offer three important advantages: eliminating some difficulties of the Chimera scheme, such as orphan points and/or poor-quality interpolation stencils; making grid communication fully conservative; and straightforward implementation in three dimensions. A computer code based on a time-accurate, finite-volume, high-resolution scheme for solving the compressible Navier-Stokes equations has been further developed to include both the Chimera overset grid and the nonstructured mesh schemes. For steady-state problems, local time stepping accelerates convergence based on a Courant-Friedrichs-Lewy (CFL) number near the local stability limit. Numerical tests on representative steady and unsteady supersonic inviscid flows with strong shock waves are demonstrated.
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first-guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
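The optimal-interpolation machinery the abstract refers to is, in textbook form, the update x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b), with B the background-error and R the instrument-error covariance. The 1-D sketch below illustrates that update only; it is not the parallel NASA implementation, and the Gaussian covariance and observation operator (np.interp) are illustrative assumptions.

```python
import numpy as np

n_grid, n_obs = 6, 3
xg = np.linspace(0.0, 5.0, n_grid)            # 1-D "grid" coordinates
xo = np.array([0.5, 2.5, 4.5])                # observation locations

def gauss(a, b, L=1.0, var=1.0):
    """Assumed Gaussian background-error covariance model."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / L**2)

B_go = gauss(xg, xo)                          # grid-to-obs covariances
B_oo = gauss(xo, xo)                          # obs-to-obs covariances
R = 0.1 * np.eye(n_obs)                       # instrument error statistics

x_b = np.zeros(n_grid)                        # first guess (background)
y = np.array([1.0, -0.5, 0.8])                # observations
innov = y - np.interp(xo, xg, x_b)            # innovation y - H(x_b)
K = B_go @ np.linalg.inv(B_oo + R)            # gain (analysis weights)
x_a = x_b + K @ innov                         # analysis
```

Each analysis grid point is a weighted blend of nearby innovations, with weights set jointly by the background and instrument error statistics; this is the "best estimate" property the abstract claims.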
NASA Astrophysics Data System (ADS)
van Osnabrugge, B.; Weerts, A. H.; Uijlenhoet, R.
2017-11-01
To enable operational flood forecasting and drought monitoring, reliable and consistent methods for precipitation interpolation are needed. Such methods need to deal with the deficiencies of sparse operational real-time data compared to the quality-controlled offline data sources used in historical analyses. In particular, often only a fraction of the measurement network reports in near real-time. For this purpose, we present an interpolation method, generalized REGNIE (genRE), which makes use of climatological monthly background grids derived from existing gridded precipitation climatology data sets. We show how genRE can be used to mimic and extend climatological precipitation data sets in near real-time using (sparse) real-time measurement networks in the Rhine basin upstream of the Netherlands (approximately 160,000 km2). In the process, we create a 1.2 × 1.2 km transnational gridded hourly precipitation data set for the Rhine basin. Precipitation gauge data are collected, spatially interpolated for the period 1996-2015 with genRE and inverse-distance squared weighting (IDW), and then evaluated on the yearly and daily time scales against the HYRAS and EOBS climatological data sets. Hourly fields are compared qualitatively with RADOLAN radar-based precipitation estimates. Two sources of uncertainty are evaluated: station density and the impact of different background grids (HYRAS versus EOBS). The results show that the genRE method successfully mimics climatological precipitation data sets (HYRAS/EOBS) over daily, monthly, and yearly time frames. We conclude that genRE is a good choice of interpolation method for real-time operational use. genRE has the largest added value over IDW for cases with a low real-time station density and a high-resolution background grid.
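The following sketch shows one way a climatological background grid can anchor a sparse real-time interpolation: spread gauge/background ratios by IDW, then rescale the background. This is a guess at the spirit of genRE, plainly not the operational REGNIE formulation, and all grids and gauge values are made up.

```python
import numpy as np

def background_scaled_interp(bg_grid, gauge_ij, gauge_val, p=2.0):
    """Interpolate sparse gauges with a climatological background grid:
    compute gauge/background ratios, spread them by inverse-distance
    weighting, and multiply the background by the interpolated ratio
    (a sketch in the spirit of genRE, not its exact formulation)."""
    ny, nx = bg_grid.shape
    jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
    ratios = gauge_val / bg_grid[tuple(gauge_ij.T)]
    num = np.zeros_like(bg_grid, dtype=float)
    den = np.zeros_like(bg_grid, dtype=float)
    for (gi, gj), r in zip(gauge_ij, ratios):
        d = np.hypot(ii - gi, jj - gj)
        w = 1.0 / np.maximum(d, 0.5)**p      # cap weight at the gauge cell
        num += w * r
        den += w
    return bg_grid * num / den

bg = np.full((4, 4), 10.0)              # climatological monthly grid (mm)
gauges = np.array([[0, 0], [3, 3]])     # reporting stations (grid indices)
vals = np.array([20.0, 5.0])            # real-time gauge totals (mm)
field = background_scaled_interp(bg, gauges, vals)
```

Because the field is the background times a smoothly interpolated ratio, it inherits the spatial structure of the climatology even where no station reports, which is the stated advantage of genRE over plain IDW.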
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
The shoulder line is a significant line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is imperative. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and a power-function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
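Steps ii-iii of the workflow can be sketched on a synthetic DEM: map slope, split it into two classes, and mark the class boundary as shoulder line candidates. The median threshold below stands in for the Natural Breaks (Jenks) classifier, and the DEM is an artificial flat-top-to-gully profile, not the Madigou data.

```python
import numpy as np

def shoulder_candidates(dem, dx):
    """Map slope from a DEM, split it into two classes at a threshold
    (a stand-in for Natural Breaks), and mark cells on the common boundary
    of the two classes as shoulder line candidates."""
    gy, gx = np.gradient(dem, dx)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    steep = slope > np.median(slope)             # two-class split
    edge = np.zeros_like(steep)                  # boundary between classes
    edge[:-1, :] |= steep[:-1, :] ^ steep[1:, :]
    edge[:, :-1] |= steep[:, :-1] ^ steep[:, 1:]
    return slope, edge

# Synthetic loess-like profile: flat top (positive terrain) dropping to a gully.
x = np.linspace(0.0, 100.0, 101)
dem = np.where(x < 50.0, 200.0, 200.0 - (x - 50.0))[None, :].repeat(20, axis=0)
slope, edge = shoulder_candidates(dem, x[1] - x[0])
```

On this profile the candidate cells line up exactly along the break between the flat top and the 45-degree slope, which is where the shoulder line sits; step iv of the paper then tunes the filter grid size until the candidate matches the real location.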
Time-stable overset grid method for hyperbolic problems using summation-by-parts operators
NASA Astrophysics Data System (ADS)
Sharan, Nek; Pantano, Carlos; Bodony, Daniel J.
2018-05-01
A provably time-stable method for solving hyperbolic partial differential equations arising in fluid dynamics on overset grids is presented in this paper. The method uses interface treatments based on the simultaneous approximation term (SAT) penalty method and derivative approximations that satisfy the summation-by-parts (SBP) property. Time-stability is proven using energy arguments in a norm that naturally relaxes to the standard diagonal norm when the overlap reduces to a traditional multiblock arrangement. The proposed overset interface closures are time-stable for arbitrary overlap arrangements. The information between grids is transferred using Lagrangian interpolation applied to the incoming characteristics, although other interpolation schemes could also be used. The conservation properties of the method are analyzed. Several one-, two-, and three-dimensional, linear and non-linear numerical examples are presented to confirm the stability and accuracy of the method. A performance comparison between the proposed SAT-based interface treatment and the commonly-used approach of injecting the interpolated data onto each grid is performed to highlight the efficacy of the SAT method.
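The transfer step named above, Lagrangian interpolation of donor-grid data onto a receiver point, can be sketched in its numerically stable barycentric form. The four-node stencil and the cubic test function are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def lagrange_interp(xs, fs, xt):
    """Barycentric Lagrange interpolation of donor-grid values fs at nodes
    xs onto a receiver point xt (the overset transfer step, sketched)."""
    n = len(xs)
    w = np.array([1.0 / np.prod([xs[j] - xs[k] for k in range(n) if k != j])
                  for j in range(n)])            # barycentric weights
    d = xt - xs
    if np.any(d == 0.0):                         # receiver coincides with node
        return fs[int(np.argmax(d == 0.0))]
    c = w / d
    return np.sum(c * fs) / np.sum(c)

xs = np.array([0.0, 0.5, 1.0, 1.5])              # donor nodes in the overlap
fs = xs**3 - 2.0 * xs                            # cubic test data
val = lagrange_interp(xs, fs, 0.7)               # exact for cubics, 4 nodes
```

With p+1 nodes the interpolant reproduces polynomials up to degree p exactly, which is why a four-node stencil recovers the cubic test function to machine precision.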
NASA Astrophysics Data System (ADS)
Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun
2018-05-01
A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.
A Quadtree-gridding LBM with Immersed Boundary for Two-dimension Viscous Flows
NASA Astrophysics Data System (ADS)
Yao, Jieke; Feng, Wenliang; Chen, Bin; Zhou, Wei; Cao, Shikun
2017-07-01
A non-uniform quadtree-grid lattice Boltzmann method (LBM) with an immersed boundary is presented in this paper. In the overlap between grids of different levels, temporal and spatial interpolation is necessary to ensure the continuity of physical quantities. To take advantage of the relation between the temporal and spatial steps on grids of the same level, equal-interval interpolation, which is simple to apply to any refined boundary grid in the LBM, is adopted in both the temporal and spatial aspects to obtain second-order accuracy. The velocity correction, which guarantees the no-slip boundary condition better than the direct forcing method and the momentum exchange method in the traditional immersed-boundary LBM, is used for the solid boundary to make the best of the Cartesian grid. In the present quadtree-gridding immersed-boundary LBM, large eddy simulation (LES) is adopted to simulate flows over obstacles at higher Reynolds numbers (Re). Incompressible viscous flows over a circular cylinder are computed, and good agreement is obtained.
NASA Astrophysics Data System (ADS)
Rose, K.; Glosser, D.; Bauer, J. R.; Barkhurst, A.
2015-12-01
The products of spatial analyses that leverage the interpolation of sparse point data to represent continuous phenomena are often presented without clear explanations of the uncertainty associated with the interpolated values. As a result, there is frequently insufficient information provided to effectively support advanced computational analyses and individual research and policy decisions utilizing these results. This highlights the need for a reliable approach capable of quantitatively producing and communicating spatial data analyses and their inherent uncertainties for a broad range of uses. To address this need, we have developed the Variable Grid Method (VGM), and an associated Python tool, which is a flexible approach that can be applied to a variety of analyses and use case scenarios where users need a method to effectively study, evaluate, and analyze spatial trends and patterns while communicating the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization representative of the spatial data analyses and quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, uncertainty calculated from multiple simulations, etc. We will present examples of our research utilizing the VGM to quantify key spatial trends and patterns for subsurface data interpolations and their uncertainties, and leverage these results to evaluate storage estimates and potential impacts associated with underground injection for CO2 storage and unconventional resource production and development. The insights provided by these examples illustrate how the VGM can provide critical information about the relationship between uncertainty and spatial data that is necessary to better support their use in advanced computational analyses and to inform research, management and policy decisions.
Cosmology Constraints from the Weak Lensing Peak Counts and the Power Spectrum in CFHTLenS
Liu, Jia; May, Morgan; Petri, Andrea; ...
2015-03-04
Lensing peaks have been proposed as a useful statistic, containing cosmological information from non-Gaussianities that is inaccessible from traditional two-point statistics such as the power spectrum or two-point correlation functions. Here we examine constraints on cosmological parameters from weak lensing peak counts, using the publicly available data from the 154 deg2 CFHTLenS survey. We utilize a new suite of ray-tracing N-body simulations on a grid of 91 cosmological models, covering broad ranges of the three parameters Ωm, σ8, and w, and replicating the galaxy sky positions, redshifts, and shape noise in the CFHTLenS observations. We then build an emulator that interpolates the power spectrum and the peak counts to an accuracy of ≤ 5%, and compute the likelihood in the three-dimensional parameter space (Ωm, σ8, w) from both observables. We find that constraints from peak counts are comparable to those from the power spectrum, and somewhat tighter when different smoothing scales are combined. Neither observable can constrain w without external data. When the power spectrum and peak counts are combined, the area of the error "banana" in the (Ωm, σ8) plane reduces by a factor of ≈ 2, compared to using the power spectrum alone. For a flat Λ cold dark matter model, combining both statistics, we obtain the constraint σ8(Ωm/0.27)^0.63 = 0.85 ± 0.03.
Development of Three-Dimensional DRAGON Grid Technology
NASA Technical Reports Server (NTRS)
Zheng, Yao; Liou, Meng-Sing; Civinskas, Kestutis C.
1999-01-01
For a typical three-dimensional flow in a practical engineering device, the time spent on grid generation can take 70 percent of the total analysis effort, resulting in a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation of the Chimera approach. The Chimera grid is a composite structured grid, composed of a set of overlapped structured grids which are independently generated and body-fitted. The grid is of high quality and amenable to efficient solution schemes. However, the interpolation used in the overlapped region between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme is capable of completely eliminating the interpolation and preserving the conservation property. It maximizes the advantages of the Chimera scheme and adopts the strengths of the unstructured grid while keeping its weaknesses minimal. In the present paper, we describe progress towards extending the DRAGON grid technology into three dimensions. Essential and programming aspects of the extension, and new challenges for the three-dimensional cases, are addressed.
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
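The Mercer-kernel machinery behind the approach can be sketched with kernel ridge regression standing in for SVR (both share the same kernel formulation; the swap is for brevity). The separable spatio-temporal Gaussian kernel below is an assumed stand-in for the paper's autocorrelation-estimated kernel, and the data are synthetic.

```python
import numpy as np

def st_kernel(A, B, ls=1.0, lt=2.0):
    """Separable spatio-temporal Gaussian kernel over (space, time) pairs,
    an assumed stand-in for the estimated autocorrelation kernel."""
    ds = (A[:, None, 0] - B[None, :, 0])**2 / ls**2
    dt = (A[:, None, 1] - B[None, :, 1])**2 / lt**2
    return np.exp(-0.5 * (ds + dt))

# Synthetic campaign samples: (distance along river, time) and a parameter.
X = np.array([[s, t] for s in (0.0, 2.5, 5.0, 7.5, 10.0)
                     for t in (0.0, 5.0, 10.0)])
y = np.sin(X[:, 0]) * np.cos(0.5 * X[:, 1])

lam = 1e-8                                    # tiny ridge for stability
alpha = np.linalg.solve(st_kernel(X, X) + lam * np.eye(len(X)), y)

Xq = np.array([[3.0, 4.0]])                   # unsampled space-time point
y_hat = st_kernel(Xq, X) @ alpha              # interpolated map value
```

Evaluating y_hat over a dense (space, time) grid yields the statistically interpolated maps the paper uses to compare river segments; swapping st_kernel for an estimated autocorrelation kernel is exactly the substitution the authors advocate.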
C library for topological study of the electronic charge density.
Vega, David; Aray, Yosslen; Rodríguez, Jesús
2012-12-05
The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every point in space, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas are obtained for every point in space. More complex functions, such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths), were also programmed. As the unit cell of some crystals has angles different from 90°, the described library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions.
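The lowest-order case of the interpolation the library builds on, degree-1 Lagrange (trilinear) interpolation on a regular grid, can be sketched as follows. The library differentiates the interpolating polynomial analytically; the finite-difference gradient below is a simplified stand-in, and the linear test density is synthetic.

```python
import numpy as np

def trilinear(field, h, p):
    """Degree-1 Lagrange (trilinear) interpolation of a regular-grid scalar
    field at point p; uniform grid spacing h, origin at index (0, 0, 0)."""
    idx = np.floor(p / h).astype(int)
    f = p / h - idx                              # fractional cell coordinates
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                v += w * field[idx[0] + dx, idx[1] + dy, idx[2] + dz]
    return v

def gradient(field, h, p, eps=1e-4):
    """Central-difference gradient of the interpolated field (a sketch; the
    library instead differentiates the polynomial analytically)."""
    g = np.empty(3)
    for a in range(3):
        e = np.zeros(3); e[a] = eps
        g[a] = (trilinear(field, h, p + e) - trilinear(field, h, p - e)) / (2 * eps)
    return g

h = 0.25
ax = np.arange(0.0, 2.0 + h, h)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
rho = 1.0 + 0.5 * X + 0.25 * Y - 0.1 * Z         # linear test "density"
p = np.array([0.6, 0.9, 0.4])
val = trilinear(rho, h, p)
grad = gradient(rho, h, p)
```

Higher-order Lagrange stencils extend the same pattern to wider neighborhoods, which is what makes second derivatives (Hessian, Laplacian) available at every point.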
NASA Astrophysics Data System (ADS)
Parks, P. B.; Ishizaki, Ryuichi
2000-10-01
In order to clarify the structure of the ablation flow, a 2D simulation is carried out with a fluid code solving the temporal evolution of the MHD equations. The code includes the electrostatic sheath effect at the cloud interface (P.B. Parks et al., Plasma Phys. Contr. Fusion 38, 571 (1996)). An Eulerian cylindrical coordinate system (r, z) is used, with a spherical pellet. The code uses the Cubic-Interpolated Pseudoparticle (CIP) method (H. Takewaki and T. Yabe, J. Comput. Phys. 70, 355 (1987)), which divides the fluid equations into non-advection and advection phases. The most essential element of the CIP method is the calculation of the advection phase. In this phase, a cubic-interpolated spatial profile is shifted in space according to the total derivative equations, similarly to a particle scheme. Since the profile is interpolated using the value and the spatial derivative value at each grid point, there is no numerical oscillation in space, which often appears in conventional spline interpolation. A free boundary condition is used in the code. The possibility of a stationary shock will also be shown in the presentation, because the supersonic ablation flow across the magnetic field is impeded.
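The essence of the CIP advection phase, a cubic profile built from the value and the spatial derivative at two grid points and then shifted semi-Lagrangian-fashion by u·dt, can be sketched for a single point in 1D. This is an illustrative reconstruction of the published scheme (constant u > 0), not the code used in the work:

```python
def cip_advect_point(f_i, g_i, f_up, g_up, dx, u, dt):
    """One CIP update at one grid point for constant advection speed u > 0.
    f_*: values, g_*: spatial derivatives; '_up' is the upstream neighbour.
    Builds the cubic P(X) on X in [-dx, 0] with P(0)=f_i, P'(0)=g_i,
    P(-dx)=f_up, P'(-dx)=g_up, then evaluates it at X = -u*dt."""
    D = -dx
    a = (g_i + g_up) / D**2 + 2.0 * (f_i - f_up) / D**3
    b = 3.0 * (f_up - f_i) / D**2 - (2.0 * g_i + g_up) / D
    X = -u * dt                      # semi-Lagrangian shift of the profile
    f_new = ((a * X + b) * X + g_i) * X + f_i
    g_new = (3.0 * a * X + 2.0 * b) * X + g_i
    return f_new, g_new

# Exact for a linear profile f(x) = 2x advected with u = 0.5, dt = 0.4:
f_new, g_new = cip_advect_point(2.0, 2.0, 0.0, 2.0, 1.0, 0.5, 0.4)
# f_new ≈ 2 * (1 - 0.2) = 1.6, g_new ≈ 2.0
```

Because both the value and the derivative are advected, the interpolant stays monotone-looking between nodes, which is the source of the oscillation-free behaviour mentioned above.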
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, that the maximum degree for spherical harmonic analysis should be determined empirically by several factors, including the model resolution and the degree of non-smoothness in the dataset, and that it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.
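The bilinear interpolation used within each grid cell is the standard tensor-product form; a minimal sketch (not the authors' FORTRAN), with s and t the fractional offsets in longitude and latitude:

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinear value inside one grid cell.
    f00..f11 are the four corner samples; s, t in [0, 1] are the
    fractional offsets along the two grid axes."""
    return (f00 * (1 - s) * (1 - t) + f10 * s * (1 - t)
            + f01 * (1 - s) * t + f11 * s * t)

# Reproduces the corners exactly and averages at the cell centre:
v = bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5)
# → 1.5 (mean of the four corners)
```

The interpolant is continuous across cell edges but its slope is only piecewise continuous, which is exactly the property exploited for non-smooth data.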
Recent Developments in Grid Generation and Force Integration Technology for Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1994-01-01
Recent developments in algorithms and software tools for generating overset grids for complex configurations are described. These include the overset surface grid generation code SURGRD and version 2.0 of the hyperbolic volume grid generation code HYPGEN. The SURGRD code is in beta test; its new features include the capability to march over a collection of panel networks, a variety of ways to control the side boundaries and the marching step sizes and distance, a more robust projection scheme, and an interpolation option. New features in version 2.0 of HYPGEN include a wider range of boundary condition types. The code also allows the user to specify different marching step sizes and distances for each point on the surface grid. A scheme that takes into account the overlapped zones on the body surface for the purpose of force and moment computation is also briefly described. The process involves the following two software modules: MIXSUR, a composite grid generation module that produces a collection of quadrilaterals and triangles on which pressure and viscous stresses are to be integrated, and OVERINT, a forces and moments integration module.
Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera
NASA Astrophysics Data System (ADS)
Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.
2017-05-01
A novel calibration process for the RICOH-THETA, a full-view fisheye camera, is proposed; the camera has numerous applications as a low-cost sensor in different disciplines such as photogrammetry, robotics, and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately, and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network on the central and side images captured by the aforementioned lenses. Accordingly, the designed calibration network is considered as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on the collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from both sets of EOPs. Our experiments show that by applying a 3×3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
A three-dimensional algebraic grid generation scheme for gas turbine combustors with inclined slots
NASA Technical Reports Server (NTRS)
Yang, S. L.; Cline, M. C.; Chen, R.; Chang, Y. L.
1993-01-01
A 3D algebraic grid generation scheme is presented for generating the grid points inside gas turbine combustors with inclined slots. The scheme is based on the 2D transfinite interpolation method. Since the scheme is a 2D approach, it is very efficient and can easily be extended to gas turbine combustors with either dilution-hole or slot configurations. To demonstrate the feasibility and usefulness of the technique, a numerical study of the quick-quench/lean-combustion (QQ/LC) zones of a staged turbine combustor is given. Preliminary results illustrate some of the major features of the flow and temperature fields in the QQ/LC zones. The formation of co- and counter-rotating bulk flow and the shape of the temperature fields can be observed clearly, and the resulting patterns are consistent with experimental observations typical of the confined slanted jet-in-crossflow. Numerical solutions show the method to be an efficient and reliable tool for generating computational grids for analyzing gas turbine combustors with slanted slots.
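Transfinite interpolation blends the four boundary curves of a region into interior grid points. A generic Coons-patch sketch of the 2D case follows (function names and the boundary parameterization are illustrative, not the paper's code):

```python
def transfinite(bottom, top, left, right, s, t):
    """Coons/transfinite interpolation: interior point (x, y) at
    parameters (s, t) in [0, 1]^2 from four boundary-curve functions,
    each returning an (x, y) tuple."""
    def lerp(p, q, w):
        return (p[0] + w * (q[0] - p[0]), p[1] + w * (q[1] - p[1]))
    # Ruled surfaces in each parametric direction
    rs = lerp(left(t), right(t), s)
    rt = lerp(bottom(s), top(s), t)
    # Bilinear correction built from the four corners
    c = lerp(lerp(bottom(0.0), bottom(1.0), s),
             lerp(top(0.0), top(1.0), s), t)
    return (rs[0] + rt[0] - c[0], rs[1] + rt[1] - c[1])

# On a unit square the interior grid reduces to (s, t) exactly:
x, y = transfinite(lambda s: (s, 0.0), lambda s: (s, 1.0),
                   lambda t: (0.0, t), lambda t: (1.0, t), 0.3, 0.7)
# → (0.3, 0.7)
```

Sweeping (s, t) over a uniform lattice produces the interior grid; stacking such 2D sections along the third direction gives the quasi-3D construction the abstract describes.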
A new multigrid formulation for high order finite difference methods on summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Weinerfelt, Per; Nordström, Jan
2018-04-01
Multigrid schemes for high order finite difference methods on summation-by-parts form are studied by comparing the effect of different interpolation operators. With the standard linear prolongation and restriction operators, the Galerkin condition leads to inaccurate coarse grid discretizations. In this paper, an alternative class of interpolation operators that bypasses this issue and preserves the summation-by-parts property on each grid level is considered. Clear improvements in the convergence rate for relevant model problems are achieved.
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2017-02-01
Acquisition of high density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids, and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allowed the detection and isolation of movements directly from the point clouds, yielding accuracies in the subsequent computation of volumes that depend only on the actual registered distance between points. We demonstrate that these values are more conservative than volumes computed with the traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
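The stated size distribution, with the number of subevents of dimension greater than R proportional to R⁻², can be sampled by inverse-CDF draws from a truncated power law. This is a hypothetical sketch of one way to generate such dimensions; the function name, truncation bounds, and seed are not from the paper:

```python
import random

def subevent_dimensions(n, r_min, r_max, seed=0):
    """Draw n characteristic dimensions R with N(>R) proportional to R^-2
    (truncated power law, exponent 2), by inverting the CDF:
    u uniform in [0, 1)  ->  R = (r_min^-2 - u*(r_min^-2 - r_max^-2))^-1/2."""
    rng = random.Random(seed)
    a, b = r_min**-2, r_max**-2
    return [(a - rng.random() * (a - b)) ** -0.5 for _ in range(n)]

dims = subevent_dimensions(1000, 1.0, 10.0)
# all dimensions lie in [r_min, r_max]
```

In the actual model one would additionally enforce the non-overlap constraint and stop adding subevents once their summed area matches the mainshock area.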
Enhancing GIS Capabilities for High Resolution Earth Science Grids
NASA Astrophysics Data System (ADS)
Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.
2017-12-01
Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. 
In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Since only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location, or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach thus statistically combines the in-house developed high resolution analysis and ensemble prediction systems.
The station-based validation of 6-hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
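The standardized-anomaly transform at the core of SAMOS, as described above, is a simple site-wise standardization; a minimal sketch, with the back-transform that restores calibrated anomalies to physical units (function names are illustrative):

```python
def standardized_anomaly(x, clim_mean, clim_sd):
    """SAMOS-style transform: subtract the site-specific climatological
    mean and divide by the climatological standard deviation."""
    return (x - clim_mean) / clim_sd

def from_anomaly(z, clim_mean, clim_sd):
    """Back-transform a (possibly calibrated) anomaly to physical units."""
    return clim_mean + z * clim_sd

# Round trip: 12 mm at a site with climatology (mean 10, sd 4)
z = standardized_anomaly(12.0, 10.0, 4.0)
# → z = 0.5; from_anomaly(z, 10.0, 4.0) recovers 12.0
```

Because the transform removes site-specific climatology, one regression fitted on the anomalies can serve every grid point in the domain, which is what makes the approach computationally inexpensive.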
Dinehart, R.L.; Burau, J.R.
2005-01-01
A strategy of repeated surveys by acoustic Doppler current profiler (ADCP) was applied in a tidal river to map velocity vectors and suspended-sediment indicators. The Sacramento River at the junction with the Delta Cross Channel at Walnut Grove, California, was surveyed over several tidal cycles in the fall of 2000 and 2001 with a vessel-mounted ADCP. Velocity profiles were recorded along flow-defining survey paths, with surveys repeated every 27 min through a diurnal tidal cycle. Velocity vectors along each survey path were interpolated to a three-dimensional Cartesian grid that conformed to local bathymetry. A separate array of vectors was interpolated onto a grid from each survey. By displaying interpolated vector grids sequentially with computer animation, the flow dynamics of the reach could be studied in three dimensions as flow responded to the tidal cycle. Velocity streamtraces in the grid showed the upwelling of flow from the bottom of the Sacramento River channel into the Delta Cross Channel. The sequential display of vector grids showed that water in the canal briefly returned into the Sacramento River after peak flood tides, which had not been known previously. In addition to velocity vectors, ADCP data were processed to derive channel bathymetry and a spatial indicator for suspended-sediment concentration. Individual beam distances to bed, recorded by the ADCP, were transformed to yield bathymetry accurate enough to resolve small bedforms within the study reach. While recording velocity, ADCPs also record the intensity of acoustic backscatter from particles suspended in the flow. Sequential surveys of backscatter intensity were interpolated to grids and animated to indicate the spatial movement of suspended sediment through the study reach.
Calculation of backscatter flux through cross-sectional grids provided a first step toward computation of suspended-sediment discharge, the second step being a calibrated relation between backscatter intensity and sediment concentration. Spatial analyses of the ADCP data showed that a strategy of repeated surveys and flow-field interpolation has the potential to simplify computation of flow and sediment discharge through complex waterways. The use of trade, product, industry, or firm names in this report is for descriptive purposes only and does not constitute endorsement of products by the US Government. © 2005 Elsevier B.V. All rights reserved.
Use of MAGSAT anomaly data for crustal structure and mineral resources in the US Midcontinent
NASA Technical Reports Server (NTRS)
Carmichael, R. S. (Principal Investigator)
1981-01-01
The analysis and preliminary interpretation of investigator-B MAGSAT data are addressed. The data processing included: (1) removal of spurious data points; (2) statistical smoothing along individual data tracks, to reduce the effect of geomagnetic transient disturbances; (3) comparison of data profiles spatially coincident in track location but acquired at different times; (4) reduction of data by weighted averaging to a grid with 1° × 1° latitude/longitude spacing, and with elevations interpolated and weighted to a common datum of 400 km; (5) wavelength filtering; and (6) reduction of the anomaly map to the magnetic pole. Agreement was found between a magnitude data anomaly map and a reduced-to-the-pole map, supporting the general assumption that, on a large scale (long wavelength), it is induced crustal magnetization which is responsible for major anomalies. Anomalous features are identified and explanations are suggested with regard to crustal structure, petrologic characteristics, and Curie temperature isotherms.
NASA Astrophysics Data System (ADS)
Lyu, Baolei; Hu, Yongtao; Chang, Howard; Russell, Armistead; Bai, Yuqi
2016-04-01
Reliable and accurate characterizations of ground-level PM2.5 concentrations are essential to understand pollution sources and evaluate human exposures. A monitoring network can provide only direct point-level observations at limited locations. At locations without monitors, there are generally two ways to estimate PM2.5 pollution levels. One is observations of aerosol properties from satellite-based remote sensing, such as Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD). The other is deterministic atmospheric chemistry models, such as the Community Multi-Scale Air Quality Model (CMAQ). In this study, we used a statistical spatio-temporal downscaler to calibrate the two datasets to monitor observations and derive fine-scale ground-level concentrations of PM2.5 with improved accuracy. We treated both MODIS AOD and CMAQ model predictions as biased proxy estimates of PM2.5 pollution levels. The downscaler uses a Bayesian framework to model the spatially and temporally varying coefficients of the two types of estimates in a linear regression setting, in order to correct biases. In particular, for calibrating MODIS AOD, a city-specific linear model was established to fill in missing AOD values, and a novel interpolation-based variable, the PM2.5 Spatial Interpolator, was introduced to account for the spatial dependence among grid cells. We selected the heavily polluted and populated North China region as our study area, on a grid of 81×81 12-km cells. For the evaluation of calibration performance on retrieved MODIS AOD, R2 was 0.61 for the full model with the PM2.5 Spatial Interpolator included, and 0.48 with it excluded. The constructed AOD values effectively predicted PM2.5 concentrations under our model structure, with R2=0.78. For the evaluation of calibrated CMAQ predictions, R2 was 0.51, slightly less than that of calibrated AOD.
Finally, we obtained two sets of calibrated estimates of ground-level PM2.5 concentrations with complete spatial coverage. Comparing the two datasets, we found that the predictions from AOD have a slightly smoother texture than those from CMAQ. The former also predicted a larger heavy-pollution area in southern Hebei province than the latter, but by a small margin. In general, the two have very similar spatial patterns, indicating the reliability of our data fusion method. In summary, the statistical spatio-temporal downscaler improves on MODIS AOD and CMAQ predictions of PM2.5 pollution levels. Future work will focus on fusing the three datasets, i.e. monitor observations, MODIS AOD, and CMAQ predictions, to derive predictions of ground-level PM2.5 pollution levels with further increased accuracy.
An efficient transport solver for tokamak plasmas
Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...
2017-01-03
A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and second-degree polynomials, respectively, introduce shifts in the magnified images, which are sub-pixel in the case of interpolation by an even factor, the most usual case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
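The half-pixel case can be illustrated with the widely used Keys cubic-convolution kernel: sampling it at the four even-length tap positions ±0.5 and ±1.5 yields the classic [-1, 9, 9, -1]/16 filter, which produces a sample exactly midway between two input pixels with no phase error. This is a generic illustration of the even-length idea, not the authors' implementation:

```python
def keys_cubic(x, a=-0.5):
    """Keys cubic-convolution interpolation kernel (support [-2, 2])."""
    x = abs(x)
    if x < 1.0:
        return ((a + 2) * x - (a + 3)) * x * x + 1.0
    if x < 2.0:
        return (((x - 5) * x + 8) * x - 4) * a
    return 0.0

# Even-length (4-tap) filter for a sample exactly between two input pixels:
w = [keys_cubic(d) for d in (-1.5, -0.5, 0.5, 1.5)]
# → [-1/16, 9/16, 9/16, -1/16]; the weights sum to 1
```

Because the taps are symmetric about the half-pixel position, the filter is linear-phase and the magnified image is not shifted, which is the property the paper exploits for MS grids offset from Pan by half pixels.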
Reissner-Mindlin Legendre Spectral Finite Elements with Mixed Reduced Quadrature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brito, K. D.; Sprague, M. A.
2012-10-01
Legendre spectral finite elements (LSFEs) are examined through numerical experiments for static and dynamic Reissner-Mindlin plate bending, and a mixed-quadrature scheme is proposed. LSFEs are high-order Lagrangian-interpolant finite elements with nodes located at the Gauss-Lobatto-Legendre quadrature points. Solutions on unstructured meshes are examined in terms of accuracy as a function of the number of model nodes and total operations. While nodal-quadrature LSFEs have been shown elsewhere to be free of shear locking on structured grids, locking is demonstrated here on unstructured grids. LSFEs with mixed quadrature are, however, locking free and are significantly more accurate than low-order finite elements for a given model size or total computation time.
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on space-time interpolation and on solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second-order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
NASA Astrophysics Data System (ADS)
Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.
2017-12-01
An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products, which is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated.
We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Andrew W; Leung, Lai R; Sridhar, V
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches comprised three relatively simple statistical downscaling methods, linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD), each applied both to PCM output directly (at T42 spatial resolution) and after dynamical downscaling via a Regional Climate Model (RCM, at ½-degree spatial resolution), to downscale the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and to gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant finding is that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids and compare the sampled-then-interpolated transport grids to those from the original SOSE data, in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
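The core operation described above, linear interpolation in time at each fixed height level between consecutive launches, can be sketched as follows. The two soundings, the 6 h launch spacing, and the three height levels are made-up numbers for illustration; the real VAP uses 332 levels and full state variables:

```python
import numpy as np

# Two hypothetical soundings 6 h apart; rows are launch times, columns are height levels.
t_sondes = np.array([0.0, 360.0])              # launch times (minutes)
T_sondes = np.array([[15.0, 9.0, 2.0],         # temperature profile at launch 1 (deg C)
                     [17.0, 10.0, 2.5]])       # temperature profile at launch 2 (deg C)

t_grid = np.arange(0.0, 361.0, 1.0)            # 1-minute output time grid

# Interpolate each height level independently in time, as the VAP does.
T_grid = np.vstack([np.interp(t_grid, t_sondes, T_sondes[:, k])
                    for k in range(T_sondes.shape[1])]).T
```

Halfway between launches, each level's value is the average of the two bracketing soundings.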
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
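A bare-bones sketch of Kaiser-Bessel gridding in one dimension may help fix ideas: each nonuniform sample is spread onto neighboring Cartesian grid points with KB kernel weights. The kernel width and beta below are illustrative choices, not the parameters discussed in the paper, and the density compensation and deapodization steps of a full gridding reconstruction are omitted:

```python
import numpy as np

def kaiser_bessel(u, width=4.0, beta=13.9):
    """Kaiser-Bessel gridding kernel (illustrative width/beta), normalized to 1 at u=0."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    inside = np.abs(u) <= width / 2.0
    arg = np.zeros_like(u)
    arg[inside] = beta * np.sqrt(1.0 - (2.0 * u[inside] / width) ** 2)
    return np.where(inside, np.i0(arg) / np.i0(beta), 0.0)

def grid_1d(sample_pos, sample_val, n_grid, width=4.0, beta=13.9):
    """Spread nonuniform k-space samples onto a Cartesian grid by kernel convolution."""
    grid = np.zeros(n_grid, dtype=complex)
    for x, v in zip(sample_pos, sample_val):
        ks = np.arange(int(np.floor(x - width / 2.0)),
                       int(np.ceil(x + width / 2.0)) + 1)   # grid points under the kernel
        w = kaiser_bessel(ks - x, width, beta)
        valid = (ks >= 0) & (ks < n_grid)
        np.add.at(grid, ks[valid], v * w[valid])             # unbuffered accumulation
    return grid

g = grid_1d([10.0, 20.5], [1.0, 1.0j], 32)
```

A sample sitting exactly on a grid point deposits its full weight there; off-grid samples are shared among neighbors according to the kernel.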
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor)
1992-01-01
A two-dimensional vernier scale is disclosed utilizing a Cartesian grid on one plate member with a polar grid on an overlying transparent plate member. The polar grid has multiple concentric circles at a fractional spacing of the spacing of the Cartesian grid lines. By locating the center of the polar grid on a location on the Cartesian grid, interpolation can be made of both the X and Y fractional relationship to the Cartesian grid by noting which circles coincide with a Cartesian grid line for the X and Y direction.
Chemistry of Stream Sediments and Surface Waters in New England
Robinson, Gilpin R.; Kapo, Katherine E.; Grossman, Jeffrey N.
2004-01-01
Summary -- This online publication portrays regional data for pH, alkalinity, and specific conductance for stream waters and a multi-element geochemical dataset for stream sediments collected in the New England states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. A series of interpolation grid maps portray the chemistry of the stream waters and sediments in relation to bedrock geology, lithology, drainage basins, and urban areas. A series of box plots portray the statistical variation of the chemical data grouped by lithology and other features.
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, James G.
1999-01-01
A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly as the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first-guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.
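The optimal-interpolation update that such a system performs can be written compactly: the analysis is the background plus a gain matrix applied to the observation-minus-background innovations. A toy numpy sketch with made-up covariances and a five-point state (everything here is illustrative, not the GEOS DAS formulation):

```python
import numpy as np

n, p = 5, 2                       # grid points, observations
xb = np.zeros(n)                  # first-guess (background) state
H = np.zeros((p, n))              # observation operator: obs at grid points 1 and 3
H[0, 1] = 1.0
H[1, 3] = 1.0
y = np.array([1.0, -0.5])         # observations

# Assumed Gaussian background-error covariance and diagonal observation-error covariance.
idx = np.arange(n)
B = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
R = 0.1 * np.eye(p)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
xa = xb + K @ (y - H @ xb)                     # analysis: best estimate of the state
```

Because B is spatially correlated, the increments spread information from the two observed points to neighboring grid points, which is how OI fills the analysis grid between observations.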
Reservoir property grids improve with geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogt, J.
1993-09-01
Visualization software, reservoir simulators and many other E and P software applications need reservoir property grids as input. Using geostatistics, as compared to other gridding methods, to produce these grids leads to the best output from the software programs. For the purpose stated herein, geostatistics is simply two types of gridding methods. Mathematically, these methods are based on minimizing or duplicating certain statistical properties of the input data. One geostatistical method, called kriging, is used when the highest possible point-by-point accuracy is desired. The other method, called conditional simulation, is used when one wants the statistics and texture of the resulting grid to be the same as for the input data. In the following discussion, each method is explained, compared to other gridding methods, and illustrated through example applications. Proper use of geostatistical data in flow simulations, use of geostatistical data for history matching, and situations where geostatistics has no significant advantage over other methods also will be covered.
Introducing MCgrid 2.0: Projecting cross section calculations on grids
NASA Astrophysics Data System (ADS)
Bothmann, Enrico; Hartland, Nathan; Schumann, Steffen
2015-11-01
MCgrid is a software package that provides access to interpolation tools for Monte Carlo event generator codes, allowing for the fast and flexible variation of scales, coupling parameters and PDFs in cutting-edge leading- and next-to-leading-order QCD calculations. We present the upgrade to version 2.0, which has a broader scope of interfaced interpolation tools, now providing access to fastNLO, and features an approximated treatment for the projection of MC@NLO-type calculations onto interpolation grids. MCgrid 2.0 also now supports the extended information provided through the HepMC event record used in the recent SHERPA version 2.2.0. The additional information provided therein allows for the support of multi-jet merged QCD calculations in a future update of MCgrid.
Numerical simulation of supersonic and hypersonic inlet flow fields
NASA Technical Reports Server (NTRS)
Mcrae, D. Scott; Kontinos, Dean A.
1995-01-01
This report summarizes the research performed by North Carolina State University and NASA Ames Research Center under Cooperative Agreement NCA2-719, "Numerical Simulation of Supersonic and Hypersonic Inlet Flow Fields". Four distinct rotated upwind schemes were developed and investigated to determine accuracy and practicality. The scheme found to have the best combination of attributes, including reduction to grid alignment with no rotation, was the cell centered non-orthogonal (CCNO) scheme. In 2D, the CCNO scheme improved rotation when flux interpolation was extended to second order. In 3D, improvements were less dramatic in all cases, with second order flux interpolation showing the least improvement over grid aligned upwinding. The reduction in improvement is attributed to uncertainty in determining optimum rotation angle and difficulty in performing accurate and efficient interpolation of the angle in 3D. The CCNO rotational technique will prove very useful for increasing accuracy when second order interpolation is not appropriate and will materially improve inlet flow solutions.
Importance of interpolation and coincidence errors in data fusion
NASA Astrophysics Data System (ADS)
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiencies greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
NASA Astrophysics Data System (ADS)
Guo, Tongqing; Chen, Hao; Lu, Zhiliang
2018-05-01
To handle extremely large deformations, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, the dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, the Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deforming technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with the traditional methods, the results demonstrate that the present method shows stronger deformation ability and higher dynamic mesh quality.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
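Error statistics of the kind compared in such studies are straightforward to compute from cross-validation residuals. A small sketch with made-up observed and interpolated groundwater levels (the numbers are illustrative only):

```python
import numpy as np

# Hypothetical cross-validation pairs: observed vs interpolated groundwater levels (m).
observed = np.array([102.3, 98.7, 101.1, 99.5, 100.2])
interpolated = np.array([102.0, 99.1, 100.6, 99.9, 100.0])

err = interpolated - observed
me = err.mean()                                  # mean error (bias)
mae = np.abs(err).mean()                         # mean absolute error
rmse = np.sqrt((err ** 2).mean())                # root-mean-square error
r = np.corrcoef(observed, interpolated)[0, 1]    # Pearson R
```

ME reveals systematic over- or under-estimation, while MAE and RMSE quantify the typical error magnitude; comparing several such measures across methods, as the study does, guards against picking an interpolator that happens to score well on a single statistic.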
Generation of real-time mode high-resolution water vapor fields from GPS observations
NASA Astrophysics Data System (ADS)
Yu, Chen; Penna, Nigel T.; Li, Zhenhong
2017-02-01
Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.
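The decoupling idea can be caricatured with a greatly simplified alternating fit: estimate the elevation-dependent component by regression, then model what remains as the turbulent part, and iterate. The linear height model, the constant "turbulence" term, and all station heights and delays below are assumptions for illustration; the paper's actual decomposition is more sophisticated:

```python
import numpy as np

# Synthetic network: 41 station heights (km) and zenith total delays (mm)
# with a built-in height dependence of -150 mm/km plus noise.
rng = np.random.default_rng(0)
h = rng.uniform(0.0, 2.0, size=41)
ztd = 2300.0 - 150.0 * h + rng.normal(0.0, 3.0, size=41)

trend = np.zeros(41)                 # current estimate of the non-elevation component
for _ in range(5):                   # alternate the two fits
    # 1) fit the elevation-dependent component to the de-trended delays
    a, b = np.polyfit(h, ztd - trend, 1)
    elev = a * h + b
    # 2) model the remainder (here crudely, as a spatially constant field;
    #    a real turbulence model would interpolate the residuals in space)
    trend = np.full(41, (ztd - elev).mean())
```

With the elevation term removed, the residuals can be interpolated horizontally without the terrain signal leaking into the turbulent field, which is the point of the decomposition.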
The FORBIO Climate data set for climate analyses
NASA Astrophysics Data System (ADS)
Delvaux, C.; Journée, M.; Bertrand, C.
2015-06-01
In the framework of the interdisciplinary FORBIO Climate research project, the Royal Meteorological Institute of Belgium is in charge of providing high resolution gridded past climate data (i.e. temperature and precipitation). This climate data set will be linked to the measurements on seedlings, saplings and mature trees to assess the effects of climate variation on tree performance. This paper explains how the gridded daily temperature (minimum and maximum) data set was generated from a consistent station network between 1980 and 2013. After station selection, data quality control procedures were developed and applied to the station records to ensure that only valid measurements will be involved in the gridding process. Thereafter, the set of unevenly distributed validated temperature data was interpolated on a 4 km × 4 km regular grid over Belgium. The performance of different interpolation methods has been assessed. The method of kriging with external drift using correlation between temperature and altitude gave the most relevant results.
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-04-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
FANS-3D Users Guide (ESTEP Project ER 201031)
2016-08-01
governing laminar and turbulent flows in body-fitted curvilinear grids. The code employs multi-block overset (chimera) grids, including fully matched... governing incompressible flow in body-fitted grids. The code allows for multi-block overset (chimera) grids, which can be fully matched, arbitrarily... interested reader may consult the Chimera Overset Structured Mesh-Interpolation Code (COSMIC) Users' Manual (Chen, 2009). The input file used for
Eulerian-Lagrangian solution of the convection-dispersion equation in natural coordinates
Cheng, Ralph T.; Casulli, Vincenzo; Milford, S. Nevil
1984-01-01
The vast majority of numerical investigations of transport phenomena use an Eulerian formulation, for the convenience of computational grids that are fixed in space. An Eulerian-Lagrangian method (ELM) of solution for the convection-dispersion equation is discussed and analyzed. The ELM uses the Lagrangian concept in an Eulerian computational grid system. The values of the dependent variable off the grid are calculated by interpolation. When a linear interpolation is used, the method is a slight improvement over the upwind difference method. At this level of approximation, both the ELM and the upwind difference method suffer from large numerical dispersion. However, if second-order Lagrangian polynomials are used in the interpolation, the ELM is proven to be free of artificial numerical dispersion for the convection-dispersion equation. The concept of the ELM is extended to the treatment of anisotropic dispersion in natural coordinates. In this approach the anisotropic properties of dispersion can be conveniently related to the properties of the flow field. Several numerical examples are given to further substantiate the results of the present analysis.
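The second-order interpolation step at the heart of the ELM is ordinary quadratic Lagrange interpolation at the off-grid departure point. A self-contained sketch (the node positions and test function are illustrative; quadratic Lagrange interpolation reproduces any quadratic exactly, which is what eliminates the leading dispersion error):

```python
import numpy as np

def lagrange_quadratic(xs, fs, x):
    """Second-order Lagrange interpolation of f at x from three nodes xs with values fs."""
    x0, x1, x2 = xs
    L0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    L1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    L2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return fs[0] * L0 + fs[1] * L1 + fs[2] * L2

# Exact for any quadratic, e.g. f(x) = x^2 - x + 1, so f(0.4) is recovered exactly.
xs = np.array([0.0, 1.0, 2.0])
fs = xs ** 2 - xs + 1.0
value = lagrange_quadratic(xs, fs, 0.4)
```

In an ELM step the query point x is the departure point of the characteristic traced back from a grid node, and fs are the gridded values at the previous time level.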
Uncertainty in coal property valuation in West Virginia: A case study
Hohn, M.E.; McDowell, R.R.
2001-01-01
Interpolated grids of coal bed thickness are being considered for use in a proposed method for taxation of coal in the state of West Virginia (United States). To assess the origin and magnitude of possible inaccuracies in calculated coal tonnage, we used conditional simulation to generate equiprobable realizations of net coal thickness for two coals on a 7 1/2-minute topographic quadrangle, and a third coal in a second quadrangle. Coals differed in average thickness and proportion of original coal that had been removed by erosion; all three coals crop out in the study area. Coal tonnage was calculated for each realization and for each interpolated grid for actual and artificial property parcels, and differences were summarized as graphs of percent difference between tonnage calculated from the grid and average tonnage from simulations. Coal in individual parcels was considered minable for valuation purposes if average thickness in each parcel exceeded 30 inches. Results of this study show that over 75% of the parcels are classified correctly as minable or unminable based on interpolation grids of coal bed thickness. Although between 80 and 90% of the tonnages differ by less than 20% between interpolated values and simulated values, a nonlinear conditional bias might exist in estimation of coal tonnage from interpolated thickness, such that tonnage is underestimated where coal is thin, and overestimated where coal is thick. The largest percent differences occur for parcels that are small in area, although because of the small quantities of coal in question, bias is small on an absolute scale for these parcels. For a given parcel size, maximum apparent overestimation of coal tonnage occurs in parcels with an average coal bed thickness near the minable cutoff of 30 in. Conditional bias in tonnage for parcels having a coal thickness exceeding the cutoff by 10 in.
or more is constant for two of the three coals studied, and increases slightly with average thickness for the third coal. © 2001 International Association for Mathematical Geology.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations
NASA Technical Reports Server (NTRS)
Moon, Young J.; Liou, Meng-Sing
1989-01-01
Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.
Souza, W.R.
1999-01-01
This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
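The linear barycentric interpolation performed inside each simplex can be shown with a single 2-D triangle (a simplex in two model parameters). The triangle, the vertex values, and the query point below are made-up illustrations, not AIMS internals; AIMS additionally handles finding the containing simplex via the Delaunay tessellation:

```python
import numpy as np

def barycentric_interpolate(verts, vals, p):
    """Linear interpolation of vals at point p inside a 2-D triangle with vertices verts."""
    # Solve for the first two barycentric weights; the third closes the sum to 1.
    T = np.column_stack((verts[0] - verts[2], verts[1] - verts[2]))
    w01 = np.linalg.solve(T, p - verts[2])
    w = np.append(w01, 1.0 - w01.sum())
    return w @ vals

# Vertex values sampled from the linear function f(x, y) = 1 + 2x + 4y,
# which barycentric interpolation reproduces exactly.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 3.0, 5.0])
v = barycentric_interpolate(verts, vals, np.array([0.25, 0.25]))
```

Because the weights are linear in position and sum to one, the interpolant is continuous across shared simplex faces, which is what makes the tessellation-based scheme well suited to irregular model grids.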
The ARM Best Estimate 2-dimensional Gridded Surface
Xie, Shaocheng; Tang, Qi
2015-06-15
The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin
2018-02-01
Although gridded air temperature data sets share many of the same observations, different rates of warming can be detected because of the different approaches used to account for elevation in the interpolation process. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming hiatus period in China during 1951-2015. A suspicious cooling trend is found in the raw interpolated air temperature time series in the 1950s, 91% of which can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference to elevation difference as the bulk lapse rate of -5.6°C/km, which tends to be higher (-8.7°C/km) in dry regions and lower (-2.4°C/km) in wet regions. Compared with independent experimental observations, we find that the estimated monthly bulk lapse rates capture the elevation bias well. Significant improvement is achieved by adjusting the interpolated temperature time series with the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature when interpolating site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
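A minimal sketch of how such a bulk lapse rate correction would be applied (function name and signature ours, not the paper's; the -5.6°C/km default is the bulk value quoted above):

```python
def adjust_for_elevation(t_interp, z_site, z_grid, bulk_lapse_rate=-5.6):
    """Adjust an interpolated temperature (deg C) for the difference
    between the true site elevation and the elevation implied by the
    interpolation, using a bulk lapse rate in deg C per km."""
    dz_km = (z_site - z_grid) / 1000.0  # elevation difference in km
    return t_interp + bulk_lapse_rate * dz_km

# A site 500 m above the interpolated elevation comes out ~2.8 deg C cooler:
print(adjust_for_elevation(20.0, 1500.0, 1000.0))  # -> 17.2
```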
Interpolation of longitudinal shape and image data via optimal mass transport
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen
2014-03-01
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
NASA Astrophysics Data System (ADS)
Ouellette, G., Jr.; DeLong, K. L.
2016-02-01
High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration against SST observations using linear regression, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and the Kaplan SST data set (5°). From these data products, the SST used for proxy calibration is obtained for the single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses a different method of interpolation to produce a globally and temporally complete data product, except for HadSST, which is quality controlled but not interpolated. This study compares SST for a single site from these gridded data products with a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), in situ SST data, and coral Sr/Ca variability for our study site in Haiti, to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3°C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captures seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SSTs are to be used to calibrate proxy records, collecting modern, living samples is desirable.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both in theoretical domains such as computational geometry and in applied fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required of the interpolants. One of the most popular interpolation methods in this field is ordinary kriging. It is popular because it is a best linear unbiased estimator. The price of its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points near the query point. The proper size for this neighborhood is determined by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering, which achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has previously been applied only to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency, and the accuracy, of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
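A toy dense ordinary-kriging solver with an optional spherical covariance taper might look as follows. This is a sketch of the general technique under an assumed covariance model, not the authors' implementation; in the paper the tapered system is stored sparse and solved iteratively rather than densely as here.

```python
import numpy as np

def ordinary_kriging(pts, vals, query, cov, taper_range=None):
    """Ordinary-kriging estimate at `query` from scattered (pts, vals).
    `cov(h)` is a covariance model of distance; if `taper_range` is
    given, the covariance is multiplied by a spherical taper that is
    zero beyond that range, sparsifying the kriging system in the
    spirit of covariance tapering."""
    def c(h):
        k = cov(h)
        if taper_range is not None:
            t = np.clip(h / taper_range, 0.0, 1.0)
            k = k * (1.0 - 1.5 * t + 0.5 * t**3)  # spherical taper
        return k

    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))   # border of ones enforces weights
    A[:n, :n] = c(d)              # summing to 1 (the unbiasedness
    A[n, n] = 0.0                 # constraint of ordinary kriging)
    b = np.ones(n + 1)
    b[:n] = c(np.linalg.norm(pts - query, axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return w @ vals

# Four samples placed symmetrically around the query give the plain mean:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
est = ordinary_kriging(pts, vals, np.array([0.5, 0.5]), cov=lambda h: np.exp(-h))
```

Replacing `cov` by a tapered version zeroes all entries of `A` for point pairs farther apart than `taper_range`, which is exactly what makes the large system sparse.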
DRAGON Grid: A Three-Dimensional Hybrid Grid Generation Code Developed
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2000-01-01
Because grid generation can consume 70 percent of the total analysis time for a typical three-dimensional viscous flow simulation of a practical engineering device, payoffs from research and development could reduce costs and increase throughput considerably. In this study, researchers at the NASA Glenn Research Center at Lewis Field developed a new hybrid grid approach with the advantages of flexibility, high-quality grids suitable for an accurate resolution of viscous regions, and a low memory requirement. These advantages will, in turn, reduce analysis time and increase accuracy. They result from an innovative combination of structured and unstructured grids to represent the geometry and the computational domain. The present approach makes use of the respective strengths of both the structured and unstructured grid methods, while minimizing their weaknesses. First, the Chimera grid generates high-quality, mostly orthogonal meshes around individual components. This process is flexible and can be done easily. Normally, these individual grids are required to overlap each other so that the solution on one grid can communicate with another. However, when this communication is carried out via a nonconservative interpolation procedure, a spurious solution can result. Current research is aimed at entirely eliminating this undesired interpolation by directly replacing the arbitrary grid overlap with an unstructured grid called a DRAGON grid, which uses the same set of conservation laws over the entire region, thus ensuring conservation everywhere. The DRAGON grid is shown for a typical film-cooled turbine vane with 33 holes and 3 plenum compartments. There are structured grids around each geometrical entity and unstructured grids connecting them. In fiscal year 1999, Glenn researchers developed and tested the three-dimensional DRAGON grid-generation tools. A flow solver suitable for the DRAGON grid has been developed, and a series of validation tests are underway.
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Michaelides, Silas; Lange, Manfred A.
2015-04-01
Space-time variability of precipitation plays a key role as a driver of many processes in environmental fields such as hydrology, ecology, biology, agriculture, and natural hazards. The objective of this study was to compare two approaches for statistical downscaling of precipitation from climate models. The study was applied to the island of Cyprus, an orographically complex terrain. The first approach makes use of a spatial-temporal Neyman-Scott Rectangular Pulses (NSRP) model and a previously tested interpolation scheme (Camera et al., 2014). The second approach is based on the single-site NSRP model and a simplified gridding scheme based on scaling coefficients obtained from past observations. The rainfall generators were evaluated over the period 1980-2010. Both approaches were subsequently used to downscale three RCMs from the EU ENSEMBLE project to calculate climate projections (2020-2050). The main advantage of the spatial-temporal approach is that it allows the creation of spatially consistent daily maps of precipitation. On the other hand, owing to the assumptions of a stochastic generator based on homogeneous Poisson processes, it smooths out all rainfall statistics (except the mean and variance) over the study area. This leads to high errors when analyzing indices related to extremes. Examples are the number of days with rainfall over 50 mm (R50 - mean error 65%), the 95th percentile value of rainy days (RT95 - mean error 19%), and the mean annual rainfall recorded on days with rainfall above the 95th percentile (RA95 - mean error 22%). The single-site approach excludes the possibility of using the created gridded data sets for case studies involving spatial connection between grid cells (e.g. hydrologic modelling), but it leads to a better reproduction of rainfall statistics and properties. The errors for the extreme indices are in fact much lower: 17% for R50, 4% for RT95, and 2% for RA95.
Future projections show a decrease of the mean annual rainfall (for both approaches) over the study area between 70 mm (≈15%) and 5 mm (≈1%), in comparison to the reference period 1980-2010. Regarding extremes, calculated only with the single site approach, the projections show a decrease of the R50 index between 25% and 7%, and of the RT95 between 8% and 0%. Thus, these projections indicate that a slight reduction in the number and intensity of extremes can be expected. Further research will be done to adapt and evaluate the use of a spatial-temporal generator with nonhomogeneous spatial activation of raincells (Burton et al., 2010) to the study area. Burton, A., Fowler, H.J., Kilsby, C.G., O'Connell, P. E., 2010a. A stochastic model for the spatial-temporal simulation of non-homogeneous rainfall occurrence and amounts, Water Resour. Res. 46, W11501. DOI: 10.1029/2009WR008884 Camera, C., Bruggeman, A., Hadjinicolaou, P., Pashiardis, S., Lange, M. A., 2014. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010. J. Geophys. Res. Atmos., 119, 693-712. DOI: 10.1002/2013JD020611.
HELP - A Multimaterial Eulerian Program in Two Space Dimensions and Time
1976-04-01
[Fragment of the report's table of contents: strength phase (SPHASE) definitions of strain rate derivatives, interpolated strain rates and stresses, and velocities and deviator stresses for cells at grid boundaries, and TPHASE momentum and energy corrections for cells at reflective grid boundaries.]
Evaluation of gridding procedures for air temperature over Southern Africa
NASA Astrophysics Data System (ADS)
Eiselt, Kai-Uwe; Kaspar, Frank; Mölg, Thomas; Krähenmann, Stefan; Posada, Rafael; Riede, Jens O.
2017-06-01
Africa is considered to be highly vulnerable to climate change, yet the availability of observational data and derived products is limited. As one element of the SASSCAL initiative (Southern African Science Service Centre for Climate Change and Adaptive Land Management), a cooperation of Angola, Botswana, Namibia, Zambia, South Africa and Germany, networks of automatic weather stations have been installed or improved (http://www.sasscalweathernet.org). The increased availability of meteorological observations improves the quality of gridded products for the region. Here we compare interpolation methods for monthly minimum and maximum temperatures calculated from hourly measurements. Owing to a lack of long-term records, we focused on data from September 2014 to August 2016. The best interpolation results were achieved by combining multiple linear regression (with elevation, a continentality index and latitude as predictors) with three-dimensional inverse distance weighted interpolation.
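A minimal sketch of such a combined scheme: a multiple-linear-regression background plus 3-D inverse-distance weighting of the station residuals. Names, signature, and the single-predictor demonstration are ours, not the study's.

```python
import numpy as np

def regression_idw(xyz, preds, temps, q_xyz, q_preds, power=2.0):
    """Estimate temperature at a target point as a regression
    background (predictors such as elevation, continentality index,
    latitude) plus the 3-D inverse-distance-weighted interpolation of
    the station residuals."""
    X = np.column_stack([np.ones(len(temps)), preds])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)  # fit background
    resid = temps - X @ coef
    background = coef[0] + np.asarray(q_preds) @ coef[1:]
    d = np.linalg.norm(xyz - q_xyz, axis=-1)          # 3-D distances
    if np.any(d == 0):          # target coincides with a station
        return background + resid[np.argmin(d)]
    w = 1.0 / d**power
    return background + w @ resid / w.sum()

# Stations on a slope with temperature exactly linear in elevation (km):
z = np.array([0.0, 1.0, 2.0])
xyz = np.column_stack([np.array([0.0, 1.0, 2.0]), np.zeros(3), z])
temps = 30.0 - 6.5 * z
t_est = regression_idw(xyz, z.reshape(-1, 1), temps, np.array([0.5, 0.0, 0.4]), [0.4])
```

Because the synthetic temperatures are exactly linear in elevation, the residuals vanish and the estimate reduces to the regression background, about 27.4 here.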
Validation of the H-SAF precipitation product H03 over Greece using rain gauge data
NASA Astrophysics Data System (ADS)
Feidas, H.; Porcu, F.; Puca, S.; Rinollo, A.; Lagouvardos, C.; Kotroni, V.
2018-01-01
This paper presents an extensive validation of the combined infrared/microwave H-SAF (EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management) precipitation product H03 over a 1-year period, using gauge observations from a relatively dense network of 233 stations over Greece. First, the quality of the interpolated data used to validate the precipitation product is assessed, and a quality index is constructed based on parameters such as the density of the station network and the orography. Then, a validation analysis is conducted based on comparisons of satellite (H03) with interpolated rain gauge data to produce continuous and multi-categorical statistics at monthly and annual timescales, taking into account the different geophysical characteristics of the terrain (land, coast, sea, elevation). Finally, the impact of the quality of the interpolated data on the validation statistics is examined in terms of different configurations of the interpolation model and of the rain gauge network characteristics used in the interpolation. The possibility of using a quality index of the interpolated data as a filter in the validation procedure is also investigated. The continuous validation statistics show yearly root mean squared error (RMSE) and mean absolute error (MAE) corresponding to 225% and 105% of the mean rain rate, respectively. The mean error (ME) indicates a slight overall tendency to underestimate the rain gauge rates, which becomes large for high rain rates. In general, the H03 algorithm does not retrieve light (<1 mm/h) or convective-type (>10 mm/h) precipitation very well. The poor correlation between satellite and gauge data points to algorithm problems in co-locating precipitation patterns. Seasonal comparison shows that retrieval errors are lower in the cold months than in the summer months.
The multi-categorical statistics indicate that the H03 algorithm is able to discriminate rain from no-rain events efficiently, although a large number of rain events are missed. The most prominent features are the very high false alarm ratio (FAR) (more than 70%), the relatively low probability of detection (POD) (less than 40%), and the overestimation of the number of rainy pixels. Although the different geophysical features of the terrain (land, coast, sea, elevation) and the quality of the interpolated data have an effect on the validation statistics, the effect is in general not significant and is more distinct in the categorical than in the continuous statistics.
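The two categorical scores quoted above come straight from a 2x2 rain/no-rain contingency table. The counts below are illustrative only, chosen to land in the reported ranges (POD below 40%, FAR above 70%); they are not the paper's numbers.

```python
def categorical_scores(hits, misses, false_alarms):
    """POD and FAR from a contingency table of satellite rain
    detections against gauge observations."""
    pod = hits / (hits + misses)                # gauge rain events detected
    far = false_alarms / (hits + false_alarms)  # detections that are wrong
    return pod, far

pod, far = categorical_scores(hits=30, misses=50, false_alarms=80)
print(pod, far)  # -> 0.375 0.7272727272727273
```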
Probabilistic verification of cloud fraction from three different products with CALIPSO
NASA Astrophysics Data System (ADS)
Jung, B. J.; Descombes, G.; Snyder, C.
2017-12-01
In this study, we present how the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) can be used for probabilistic verification of cloud fraction, and apply this probabilistic approach to three cloud fraction products: (a) the Air Force Weather (AFW) World Wide Merged Cloud Analysis (WWMCA), (b) the Satellite Cloud Observations and Radiative Property retrieval Systems (SatCORPS) from NASA Langley Research Center, and (c) the Multi-sensor Advection Diffusion nowCast (MADCast) from NCAR. Although they differ in their details, both WWMCA and SatCORPS retrieve cloud fraction from satellite observations, mainly infrared radiances. MADCast additionally utilizes a short-range forecast of cloud fraction (provided by the Model for Prediction Across Scales, assuming cloud fraction is advected as a tracer) and a column-by-column particle filter implemented within the Gridpoint Statistical Interpolation (GSI) data-assimilation system. The probabilistic verification treats the retrieved or analyzed cloud fractions as predicting the probability of cloud at any location within a grid cell, and the 5-km vertical feature mask (VFM) from CALIPSO level-2 products as a point observation of cloud.
A 3-D chimera grid embedding technique
NASA Technical Reports Server (NTRS)
Benek, J. A.; Buning, P. G.; Steger, J. L.
1985-01-01
A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full-weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a merging vortex pair, a double shear layer, decaying turbulence, and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine-resolution vorticity field.
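The two grid-transfer operators named above are standard multigrid components. A sketch for a node-centered grid coarsened by a factor of two might look as follows (generic operators, not the authors' code; both are exact on linear fields, which the test data exploit):

```python
import numpy as np

def full_weighting(fine):
    """Restrict a node-centered 2-D field to a grid coarsened by two:
    injection on the boundary, 9-point full-weighting stencil inside."""
    coarse = fine[::2, ::2].copy()
    coarse[1:-1, 1:-1] = (
        4 * fine[2:-2:2, 2:-2:2]
        + 2 * (fine[1:-3:2, 2:-2:2] + fine[3:-1:2, 2:-2:2]
               + fine[2:-2:2, 1:-3:2] + fine[2:-2:2, 3:-1:2])
        + fine[1:-3:2, 1:-3:2] + fine[1:-3:2, 3:-1:2]
        + fine[3:-1:2, 1:-3:2] + fine[3:-1:2, 3:-1:2]) / 16.0
    return coarse

def bilinear_prolong(coarse):
    """Bilinearly interpolate a coarse node-centered field back to the
    fine grid (2x refinement)."""
    n, m = coarse.shape
    fine = np.empty((2 * n - 1, 2 * m - 1))
    fine[::2, ::2] = coarse                    # coincident nodes: copy
    fine[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])
    fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])
    fine[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[:-1, 1:]
                               + coarse[1:, :-1] + coarse[1:, 1:])
    return fine

fine = np.add.outer(np.arange(5.0), np.arange(5.0))  # linear field f = x + y
coarse = full_weighting(fine)                        # exact on linear data
```

In a CGP step, the Poisson right-hand side would be restricted with `full_weighting`, solved on the coarse grid, and the solution prolonged with `bilinear_prolong` before the next advection-diffusion update.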
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences, but their spatial and temporal resolution is limited, so image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. The calculated velocity field is then used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method with linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation by a statistically significant margin. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
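A 1-D sketch of the interpolation step, assuming the velocity (flow) field has already been estimated by the registration stage; names are ours, and linear resampling stands in for whatever scheme the authors use.

```python
import numpy as np

def interpolate_slice(img0, img1, flow, alpha=0.5):
    """Generate an intermediate slice at fractional position `alpha`
    in (0, 1) by following the per-pixel displacement `flow` (pixels,
    pointing from img0 toward img1) and averaging the intensities of
    corresponding points. 1-D signals; the optical-flow estimation
    that yields `flow` is not shown."""
    x = np.arange(len(img0), dtype=float)
    v0 = np.interp(x - alpha * flow, x, img0)          # trace back into img0
    v1 = np.interp(x + (1.0 - alpha) * flow, x, img1)  # trace forward into img1
    return (1.0 - alpha) * v0 + alpha * v1

img0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
img1 = np.roll(img0, 2)                 # the feature moved two pixels
mid = interpolate_slice(img0, img1, np.full(7, 2.0))
```

For a feature that moves two pixels between slices, the half-way slice places it one pixel along its path, which is exactly what plain intensity averaging (linear interpolation) cannot do.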
Hysteretic behavior using the explicit material point method
NASA Astrophysics Data System (ADS)
Sofianos, Christos D.; Koumousis, Vlasis K.
2018-05-01
The material point method (MPM) is an advancement of the particle-in-cell method, in which Lagrangian bodies are discretized by a number of material points that hold all the properties and the state of the material. All internal variables (stress, strain, velocity, etc.) that specify the current state and are required to advance the solution are stored in the material points. A background grid is employed to solve the governing equations by interpolating the material point data to the grid. The derived momentum conservation equations are solved at the grid nodes, information is transferred back to the material points, and the background grid is reset, ready to handle the next iteration. In this work, the standard explicit MPM is extended to account for smooth elastoplastic material behavior with mixed isotropic and kinematic hardening and with stiffness and strength degradation. The strains are decomposed into an elastic and an inelastic part according to the strain decomposition rule. To account for the different phases during elastic loading or unloading and to smooth the transition from the elastic to the inelastic regime, two Heaviside-type functions are introduced. These act as switches and incorporate the yield function and the hardening laws to control the whole cyclic behavior. A single expression is thus established for the plastic multiplier over the whole range of stresses. This obviates the need for a piecewise approach and a demanding bookkeeping mechanism, especially for multilinear models that account for stiffness and strength degradation. The final form of the constitutive stress rate-strain rate relation incorporates the tangent modulus of elasticity, which now includes the Heaviside functions and gathers all the governing behavior, considerably facilitating the simulation of nonlinear response in the MPM framework.
Numerical results are presented that validate the proposed formulation in the context of the MPM, in comparison with the finite element method and with experimental results.
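The particle-to-grid interpolation step of the MPM cycle described above can be sketched in 1-D with linear (tent) shape functions; this is a generic illustration, not the authors' code.

```python
import numpy as np

def particles_to_grid(xp, mp, vp, h, n_nodes):
    """Scatter particle mass and momentum to a 1-D background grid of
    spacing `h` using linear shape functions; grid velocities are then
    recovered as momentum over mass."""
    mass = np.zeros(n_nodes)
    momentum = np.zeros(n_nodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x // h)        # left node of the cell containing the particle
        w = x / h - i          # fractional position within the cell
        mass[i] += (1.0 - w) * m
        mass[i + 1] += w * m
        momentum[i] += (1.0 - w) * m * v
        momentum[i + 1] += w * m * v
    vel = np.divide(momentum, mass, out=np.zeros(n_nodes), where=mass > 0)
    return mass, vel

# One particle of mass 2 at x = 0.25 splits 3:1 between nodes 0 and 1:
mass, vel = particles_to_grid([0.25], [2.0], [3.0], h=1.0, n_nodes=3)
```

After the grid momentum equations are solved, the same shape-function weights carry updated velocities and velocity gradients back to the material points, and the grid is discarded.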
Spectral Topography Generation for Arbitrary Grids
NASA Astrophysics Data System (ADS)
Oh, T. J.
2015-12-01
A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filled by Jonathan de Ferranti) is used, and for the land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data are coarsened to an intermediate global 2 arc-minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation are performed for isotropic control of topography smoothness. Mapping to the target grid is done by bivariate cubic spline interpolation from the truncated 2 arc-minute lat-lon topography. Gibbs oscillations in water regions are removed by overwriting ocean-masked target grid points with values interpolated from the intermediate 2 arc-minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new approach provides spectrally derived, smooth topography with isotropic resolution and minimal damping, enabling realistic topography forcing in the numerical model. Topography has been generated for the cubed-sphere grid and tested in the KIAPS Integrated Model (KIM).
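The wavenumber-truncation idea can be illustrated on a doubly periodic flat grid with a plain 2-D FFT; this is a simplified analogue of the tool's global spectral transform (names ours), not its actual implementation.

```python
import numpy as np

def spectral_smooth(field, keep):
    """Smooth a doubly periodic 2-D field by transforming to wave
    space and zeroing every mode whose total wavenumber exceeds
    `keep` (isotropic truncation)."""
    F = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    K = np.hypot(kx[:, None], ky[None, :])   # isotropic total wavenumber
    F[K > keep] = 0.0
    return np.real(np.fft.ifft2(F))

n = 16
x = np.arange(n)
low = np.tile(np.sin(2 * np.pi * x / n), (n, 1))       # wavenumber 1: kept
high = np.tile(np.sin(2 * np.pi * 5 * x / n), (n, 1))  # wavenumber 5: removed
```

Because the cutoff is applied to the isotropic wavenumber magnitude rather than separately per axis, the resulting smoothness is direction-independent, which is the property the abstract emphasizes.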
NASA Astrophysics Data System (ADS)
Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.
2009-04-01
In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are to underline the capability of each tool to capture characteristic phenomena, to compare and assess the quality of their outputs, and to infer an interannual trend from the results. Diva (Data-Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous temporal data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. A heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolating Empirical Orthogonal Functions, Alvera-Azc
Computer programs for thermodynamic and transport properties of hydrogen (tabcode-II)
NASA Technical Reports Server (NTRS)
Roder, H. M.; Mccarty, R. D.; Hall, W. J.
1972-01-01
The thermodynamic and transport properties of para and equilibrium hydrogen have been programmed into a series of computer routines. The input variable pairs are pressure-temperature and pressure-enthalpy. The programs cover the range from 1 to 5000 psia, with temperatures from the triple point to 6000 R or enthalpies from minus 130 BTU/lb to 25,000 BTU/lb. Output variables are enthalpy or temperature, density, entropy, thermal conductivity, viscosity, heat capacity at constant volume, the heat capacity ratio, and a heat transfer parameter. Property values on the liquid and vapor boundaries are conveniently obtained through two small routines. The programs achieve high speed by using linear interpolation in a grid of precomputed points that define the surface of the property returned.
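A sketch of that lookup strategy: with two input variables, linear interpolation in a precomputed table amounts to a bilinear blend of the four surrounding grid points. Names and layout are illustrative, not those of the original FORTRAN routines.

```python
import numpy as np

def table_lookup(p_grid, t_grid, table, p, t):
    """Bilinear interpolation in a precomputed property table.
    `table[i, j]` holds the property at (p_grid[i], t_grid[j]);
    both grids must be ascending."""
    i = np.clip(np.searchsorted(p_grid, p) - 1, 0, len(p_grid) - 2)
    j = np.clip(np.searchsorted(t_grid, t) - 1, 0, len(t_grid) - 2)
    fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])  # fractional offsets
    ft = (t - t_grid[j]) / (t_grid[j + 1] - t_grid[j])
    return ((1 - fp) * (1 - ft) * table[i, j]
            + fp * (1 - ft) * table[i + 1, j]
            + (1 - fp) * ft * table[i, j + 1]
            + fp * ft * table[i + 1, j + 1])

p_grid = np.array([1.0, 2.0, 3.0])
t_grid = np.array([10.0, 20.0])
table = p_grid[:, None] + t_grid[None, :]   # a property linear in both inputs
val = table_lookup(p_grid, t_grid, table, 1.5, 15.0)
```

The speed comes from the fact that each query touches only four precomputed values, regardless of how expensive the underlying property evaluation was.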
Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.
2013-01-01
In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation of point data has been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. The literature reports successful applications of two techniques: canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks, taking both the catchment area and the nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimating flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared with that of GLS regression equations developed for the prediction of the 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation in regression-based and spatial interpolation methods for estimating flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
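The leave-one-out comparison scheme itself is simple to express. A generic harness might look as follows; the CK, TK, and GLS predictors it would be fed are not shown, and the trivial mean predictor in the example is ours.

```python
import numpy as np

def loocv_rmse(predict, sites, values):
    """Leave-one-out cross-validation: hold out each gauged site in
    turn, predict its statistic from the remaining sites with
    `predict(train_sites, train_values, target_site)`, and report the
    RMSE of the held-out predictions."""
    errors = []
    for k in range(len(values)):
        mask = np.arange(len(values)) != k
        pred = predict(sites[mask], values[mask], sites[k])
        errors.append(pred - values[k])
    return float(np.sqrt(np.mean(np.square(errors))))

# With a trivial mean predictor over three sites:
rmse = loocv_rmse(lambda s, v, t: v.mean(), np.zeros((3, 2)), np.array([1.0, 2.0, 3.0]))
```

Running the same harness over several candidate predictors, as the study does for CK, TK, and GLS, yields directly comparable error statistics without ever reusing a site in its own prediction.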
Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.
2007-01-01
A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. 
Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
Pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax), which describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelet decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
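The fast simulator's tabulated pixel response lends itself to multidimensional linear interpolation. Below is a minimal sketch in two of the grid dimensions; the function name, the ascending regular axes, and the nested-list table layout are illustrative assumptions, not the SNAP pipeline's API.

```python
from bisect import bisect_right

def bilinear(table, xs, ys, x, y):
    """Bilinear interpolation of a response tabulated on a regular grid.

    table[i][j] holds the tabulated value at (xs[i], ys[j]); xs and ys
    are ascending axes (stand-ins for, e.g., wavelength and sky
    position). Queries outside the grid are clamped to the edge cells.
    """
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (table[i][j] * (1 - tx) * (1 - ty)
            + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty
            + table[i + 1][j + 1] * tx * ty)
```

A full lookup over wavelength, sky position and slice number would nest the same construction, one linear weight per dimension.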
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
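The LOOCV criterion used for interpolation selection generalizes readily: refit the scheme with each observation held out and average the squared prediction errors. This sketch uses a piecewise-linear candidate scheme purely for illustration; the study's actual candidates are second- and fourth-order polynomials and smoothing splines.

```python
def loocv_error(xs, ys, fit_predict):
    """Mean squared leave-one-out error of an interpolation scheme.

    fit_predict(train_x, train_y, x) returns the scheme's prediction
    at x after fitting to the training points (hypothetical interface)."""
    errs = []
    for k in range(len(xs)):
        tx = xs[:k] + xs[k + 1:]
        ty = ys[:k] + ys[k + 1:]
        errs.append((fit_predict(tx, ty, xs[k]) - ys[k]) ** 2)
    return sum(errs) / len(errs)

def linear_interp(tx, ty, x):
    """Piecewise-linear interpolation with flat extrapolation, used
    here only as a simple candidate scheme."""
    if x <= tx[0]:
        return ty[0]
    for i in range(len(tx) - 1):
        if tx[i] <= x <= tx[i + 1]:
            t = (x - tx[i]) / (tx[i + 1] - tx[i])
            return ty[i] * (1 - t) + ty[i + 1] * t
    return ty[-1]
```

The candidate with the smallest `loocv_error` over the observed option prices would be selected.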
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces. A 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models. The mapping and interpolation are completed under the constraint of Bézier curves, so the feature points on the cartoon face model can be driven as the facial expression varies. In this way, real-time cartoon facial expression simulation is achieved. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with a previous method; actual data show that our method greatly improves implementation efficiency.
PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD
NASA Technical Reports Server (NTRS)
Suhs, Norman E.; Rogers, Stuart E.; Dietz, William E.; Kwak, Dochan (Technical Monitor)
2002-01-01
An all new, automated version of the PEGASUS software has been developed and tested. PEGASUS provides the hole-cutting and connectivity information between overlapping grids, and is used as the final part of the grid generation process for overset-grid computational fluid dynamics approaches. The new PEGASUS code (Version 5) has many new features: automated hole cutting; a projection scheme for fixing gaps in overset surfaces; more efficient interpolation search methods using an alternating digital tree; hole-size optimization based on adding additional layers of fringe points; and an automatic restart capability. The new code has also been parallelized using the Message Passing Interface standard. The parallelization performance provides efficient speed-up of the execution time by an order of magnitude, and up to a factor of 30 for very large problems. The results of three example cases are presented: a three-element high-lift airfoil, a generic business jet configuration, and a complete Boeing 777-200 aircraft in a high-lift landing configuration. Comparisons of the computed flow fields for the airfoil and 777 test cases between the old and new versions of the PEGASUS codes show excellent agreement with each other and with experimental results.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
CFD Script for Rapid TPS Damage Assessment
NASA Technical Reports Server (NTRS)
McCloud, Peter
2013-01-01
This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.
Progress Toward Overset-Grid Moving Body Capability for USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Pandyna, Mohagna J.; Frink, Neal T.; Noack, Ralph W.
2005-01-01
A static and dynamic Chimera overset-grid capability is added to an established NASA tetrahedral unstructured parallel Navier-Stokes flow solver, USM3D. Modifications to the solver primarily consist of a few strategic calls to the Donor interpolation Receptor Transaction library (DiRTlib) to facilitate communication of solution information between various grids. The assembly of multiple overlapping grids into a single-zone composite grid is performed by the Structured, Unstructured and Generalized Grid AssembleR (SUGGAR) code. Several test cases are presented to verify the implementation, assess overset-grid solution accuracy and convergence relative to single-grid solutions, and demonstrate the prescribed relative grid motion capability.
NASA Astrophysics Data System (ADS)
de Laborderie, J.; Duchaine, F.; Gicquel, L.; Vermorel, O.; Wang, G.; Moreau, S.
2018-06-01
Large-Eddy Simulation (LES) is recognized as a promising method for high-fidelity flow predictions in turbomachinery applications. The presented approach consists of the coupling of several instances of the same LES unstructured solver through an overset grid method. A high-order interpolation, implemented within this coupling method, is introduced and evaluated on several test cases. It is shown to be third order accurate, to preserve the accuracy of various second and third order convective schemes, and to ensure the continuity of diffusive fluxes and subgrid scale tensors even in detrimental interface configurations. In this analysis, three types of spurious waves generated at the interface are identified. They are significantly reduced by the high-order interpolation at the interface. Since the high-order interpolation has the same cost as the original lower-order method, the high-order overset grid method appears to be a promising alternative for all applications.
2018-01-01
Population at risk of crime varies with the characteristics of a population as well as with the crime generator and attractor places where crime is located, establishing different crime opportunities for different crimes. However, there have been very few modeling efforts that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
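One common way to build an SFC ordering of Cartesian cells is to sort them by Morton (Z-order) keys, obtained by interleaving the bits of the integer cell coordinates; whether this matches the specific curve used in the paper is an assumption of this sketch.

```python
def morton_key(i, j, bits=16):
    """Interleave the bits of integer cell coordinates (i, j) into a
    Z-order (Morton) key; sorting cells by this key yields a
    space-filling-curve ordering of the mesh."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

# Reordering a small 4x4 block of cells along the Z-curve.
cells = [(x, y) for x in range(4) for y in range(4)]
ordered = sorted(cells, key=lambda c: morton_key(*c))
```

Once cells are in SFC order, partitioning reduces to cutting the ordered list into contiguous chunks, which is the single-pass O(N) step the paper exploits.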
An approach to quantify the heat wave strength and price a heat derivative for risk hedging
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; Kramps, Benedikt; Sun, Shirley X.; Bailey, Barbara
2012-01-01
Mitigating heat stress via a derivative policy is a vital financial option for agricultural producers and other business sectors to strategically adapt to climate change. This study provides an approach to identifying heat stress events and pricing a heat stress weather derivative based on persistent days of high surface air temperature (SAT). Cooling degree days (CDD) are used as the weather index for trade. In this study, a call-option model was used as an example for calculating the price of the index. Two heat stress indices were developed to describe the severity and physical impact of heat waves. The daily Global Historical Climatology Network (GHCN-D) SAT data from 1901 to 2007 from southern California, USA, were used. A major California heat wave that occurred 20-25 October 1965 was studied. The derivative price was calculated based on the call-option model for both long-term station data and the interpolated grid point data at a regular 0.1°×0.1° latitude-longitude grid. The resulting comparison indicates that (a) the interpolated data can be used as a reliable proxy to price the CDD and (b) a normal distribution model cannot always be used to reliably calculate the CDD price. In conclusion, the data, models, and procedures described in this study have potential application in hedging agricultural and other risks.
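The CDD index and a call option written on it can be sketched in a few lines; the 18 °C base temperature and the tick size below are common conventions, not values taken from the study.

```python
def cooling_degree_days(daily_mean_temps_c, base_c=18.0):
    """Cooling degree days: sum of positive exceedances of the daily
    mean temperature over a base temperature (18 degrees C is a common
    convention; the study's exact base is an assumption here)."""
    return sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)

def cdd_call_payoff(cdd, strike, tick):
    """Payoff of a call option written on the CDD index: the buyer is
    paid `tick` currency units per degree day above the strike."""
    return tick * max(cdd - strike, 0.0)
```

Pricing then amounts to taking the discounted expectation of this payoff over the historical or modeled distribution of the CDD index.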
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
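The one-dimensional local least-squares fitting that motivates the paper can be sketched as a degree-1 moving least squares estimate with a compactly supported weight; the specific weight function and cutoff handling here are simplified stand-ins for the L-IMLS formulation.

```python
def mls_fit(x0, xs, ys, radius):
    """Degree-1 moving least squares estimate at x0.

    Performs a weighted linear fit using a weight that decays with
    distance and vanishes at `radius` (an illustrative choice, not the
    paper's weight function). Assumes at least two data points fall
    inside the cutoff radius."""
    pts = [(x, y, (1 - abs(x - x0) / radius) ** 2)
           for x, y in zip(xs, ys) if abs(x - x0) < radius]
    sw = sum(w for _, _, w in pts)
    mx = sum(w * x for x, _, w in pts) / sw      # weighted means
    my = sum(w * y for _, y, w in pts) / sw
    cov = sum(w * (x - mx) * (y - my) for x, y, w in pts)
    var = sum(w * (x - mx) ** 2 for x, _, w in pts)
    slope = cov / var if var > 0 else 0.0
    return my + slope * (x0 - mx)
```

Statistical localization as described in the paper would shrink `radius` where data points are dense, which this sketch leaves fixed.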
ACCELERATED FITTING OF STELLAR SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter
2016-07-20
Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars’ labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
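The linear expansion around a grid point using gradient spectra amounts to a first-order Taylor step applied at every wavelength pixel. A minimal sketch with illustrative argument names:

```python
def expand_spectrum(base_flux, gradients, base_labels, labels):
    """First-order expansion of a synthetic spectrum around a grid point:
    flux(labels) ~ flux(base) + sum_k (dflux/dlabel_k) * (label_k - base_k).

    base_flux: flux at the grid point, one value per wavelength pixel.
    gradients: one "gradient spectrum" per label (same pixel count).
    """
    out = list(base_flux)
    for k, (l, l0) in enumerate(zip(labels, base_labels)):
        d = l - l0
        for i in range(len(out)):
            out[i] += gradients[k][i] * d
    return out
```

This is what lets chat approximate spectra anywhere near a grid point without new synthetic model calculations.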
Daily air temperature interpolated at high spatial resolution over a large mountainous region
Dodson, R.; Marks, D.
1997-01-01
Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
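The LLRA procedure can be sketched compactly: reduce station temperatures to sea level with a constant lapse rate, interpolate with inverse squared distance, and map back to the target elevation. The -6.5 °C/km rate is one of several rates such a study might test, and the station tuple layout is an illustrative assumption.

```python
def llra_interpolate(x0, y0, z0, stations, lapse=-0.0065):
    """Linear lapse rate adjustment interpolation of temperature.

    stations: iterable of (x, y, elevation_m, temp_c) tuples.
    lapse: assumed constant lapse rate in degrees C per meter.
    Returns the interpolated temperature at (x0, y0, elevation z0)."""
    num = den = 0.0
    for x, y, z, t in stations:
        t_sea = t - lapse * z                    # reduce to sea level
        w = 1.0 / ((x - x0) ** 2 + (y - y0) ** 2)  # inverse squared distance
        num += w * t_sea
        den += w
    return num / den + lapse * z0                # map back to target elevation
```

A production version would also handle a query point coinciding with a station (infinite weight) and the extrapolation bias the paper discusses.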
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.
2015-04-01
Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur. But it is obviously a subjective task to determine a model's quality by visual inspection. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected on the same grid cell, and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach has been applied by using the points resulting from Poisson Surface Reconstructions. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes.
Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated. For the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinct holes and their sizes. The two approaches showed that the prevalence of holes in the point cloud depends on the soil moisture and hence the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances. The histogram of the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than the models resulting from direct light for all moisture states.
DEM interpolation weight calculation modulus based on maximum entropy
NASA Astrophysics Data System (ADS)
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional gridded DEM interpolation can produce negative weights. In this article, the principle of Maximum Entropy is used to analyze a model system that depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is addressed by building a Maximum Entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the magnitude and scaling of the Maximum Entropy weights fit the spatial relations, and the accuracy is superior to the latter two methods.
High Order Semi-Lagrangian Advection Scheme
NASA Astrophysics Data System (ADS)
Malaga, Carlos; Mandujano, Francisco; Becerra, Julian
2014-11-01
In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must compute correctly the advection terms appearing in the equations governing fluid flow. Here we present a high order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when implemented on massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
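For contrast with the paper's high-order forward scheme, the textbook backward semi-Lagrangian step with linear interpolation on a periodic 1D grid looks like this (the paper's actual scheme is forward, of arbitrary order, and moment-preserving, none of which this sketch attempts):

```python
import math

def semi_lagrangian_step(f, u, dx, dt):
    """One backward semi-Lagrangian step for 1D advection at constant
    speed u on a periodic grid: trace each node back to its departure
    point and linearly interpolate the field there."""
    n = len(f)
    out = []
    for i in range(n):
        xd = (i * dx - u * dt) / dx          # departure point, grid units
        j = math.floor(xd)
        t = xd - j                           # fractional offset in [0, 1)
        out.append(f[j % n] * (1 - t) + f[(j + 1) % n] * t)
    return out
```

Because the departure-point trace is unconditionally stable, the time step is not limited by a CFL condition, which is the usual appeal of semi-Lagrangian advection.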
Topographic relationships for design rainfalls over Australia
NASA Astrophysics Data System (ADS)
Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.
2016-02-01
Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. 
These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
Optimal Interpolation scheme to generate reference crop evapotranspiration
NASA Astrophysics Data System (ADS)
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids are less sensitive to variations in the density and distribution of the observational network than those generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network reduces substantially the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
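At a single location, the OI analysis reduces to a variance-weighted blend of the model background and the observation; the full scheme uses spatial covariance matrices, so this scalar form is only a sketch of the underlying update.

```python
def oi_update(background, b_var, obs, o_var):
    """Scalar optimal interpolation update.

    Blends a model background value with an observation, weighting by
    their error variances; returns the analysis value and its reduced
    error variance."""
    gain = b_var / (b_var + o_var)              # Kalman-style weight
    analysis = background + gain * (obs - background)
    return analysis, (1 - gain) * b_var
```

The returned analysis variance is what makes OI attractive here: it quantifies how much the observational network reduces the uncertainty of the climate-model background at each grid point.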
Web-based visualization of gridded datasets using OceanBrowser
NASA Astrophysics Data System (ADS)
Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie
2015-04-01
OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can be generated at a fixed distance from the coast or at a fixed ocean depth. The user can customize the plot by changing the color-map, the range of the color-bar, the type of the plot (linearly interpolated color, simple contours, filled contours) and download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.
2012-06-21
We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.
Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected known experimental and model electron density map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of the density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
High-resolution daily gridded datasets of air temperature and wind speed for Europe
NASA Astrophysics Data System (ADS)
Brinckmann, S.; Krähenmann, S.; Bissolli, P.
2015-08-01
New high-resolution datasets for near surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are hourly SYNOP observations, partly supplemented by station data from the ECA&D dataset (http://www.ecad.eu). These data are quality tested to eliminate erroneous data and various kinds of inhomogeneities. Grids at a resolution of 0.044° (about 5 km) are derived by spatial interpolation of these station data over the CORDEX area. For temperature interpolation a modified version of a regression kriging method developed by Krähenmann et al. (2011) is used. At first, predictor fields of altitude, continentality and zonal mean temperature are chosen for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA-Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross-validation is performed. Explained variance ranges from 70 to 90 % for monthly temperature and from 50 to 60 % for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 °C for the daily temperature parameters and 1-1.5 m s-1 for daily mean wind speed, depending on season and parameter. The datasets presented in this article are published at http://dx.doi.org/10.5676/DWD_CDC/DECREG0110v1.
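The two-step regression kriging scheme described above can be sketched as follows. This is a simplified illustration with synthetic stations, a single altitude predictor, an assumed exponential covariance, and simple kriging of the residuals; it is not the full multi-predictor procedure of Krähenmann et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic station data; altitude is the single (hypothetical) predictor here
n = 60
xy  = rng.uniform(0, 100, (n, 2))                   # station coordinates (km)
alt = rng.uniform(0, 2000, n)                       # station altitude (m)
temp = 15.0 - 0.0065 * alt + rng.normal(0, 0.5, n)  # lapse rate plus noise

# Step 1: linear regression of temperature on the predictor field
X = np.column_stack([np.ones(n), alt])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# Step 2: simple kriging of the regression residuals
def cov(d, sill=0.25, range_km=30.0):
    """Assumed exponential covariance model for the residual field."""
    return sill * np.exp(-d / range_km)

D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
C = cov(D) + 1e-6 * np.eye(n)                       # small nugget for stability

def predict(pt, pt_alt):
    d = np.linalg.norm(xy - pt, axis=1)
    w = np.linalg.solve(C, cov(d))                  # simple-kriging weights
    return beta[0] + beta[1] * pt_alt + w @ resid   # regression + kriged residual

p = predict(np.array([50.0, 50.0]), 1000.0)
print(p)   # close to the regression value 15 - 0.0065 * 1000 = 8.5
```

The daily step of the scheme (kriging daily deviations from the monthly mean) repeats step 2 on a different residual field.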
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surfaces, temporally varying geometries, and fluid-structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus, excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement, or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks.
These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation, or forcing functions to attract/repel points in an elliptic system, or to trigger local refinement, based upon application of an equidistribution principle. The popularity of solution-adaptive techniques is growing in tandem with unstructured methods. The difficulty of precisely controlling mesh densities and orientations with current unstructured grid generation systems has driven the use of solution-adaptive meshing. Derivatives of density or pressure are widely used to construct such weight functions and have proven very successful for inviscid flows with shocks. However, less success has been realized for flowfields with viscous layers, vortices, or shocks of disparate strength. It is difficult to maintain the appropriate mesh point spacing in the various regions which require a fine spacing for adequate resolution. Mesh points often migrate from important regions due to refinement of dominant features. An example of this is the well-known tendency of adaptive methods to increase the resolution of shocks in the flowfield around airfoils, but in the incorrect location due to inadequate resolution of the stagnation region. This problem has been the motivation for this research.
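The equidistribution principle invoked above admits a compact sketch: redistribute 1D nodes so that each cell carries the same integral of the weight function. Here a Gaussian bump stands in for a truncation-error estimate near a shock; all values are illustrative.

```python
import numpy as np

# Uniform initial grid and a weight function mimicking a solution gradient
x = np.linspace(0.0, 1.0, 101)
w = 1.0 + 50.0 * np.exp(-((x - 0.5) / 0.05) ** 2)   # "shock" near x = 0.5

# Equidistribution: pick new nodes so the integral of w is equal per cell.
# Build the cumulative integral W(x) by the trapezoidal rule, then invert it.
W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
levels = np.linspace(0.0, W[-1], len(x))
x_new = np.interp(levels, W, x)            # inverse of the cumulative integral

# Points cluster where w is large: the smallest spacing sits near x = 0.5
i_min = np.argmin(np.diff(x_new))
print(x_new[i_min])                        # close to 0.5
```

The same one-line inversion underlies algebraic redistribution in each coordinate direction of a structured grid, with the weight function playing the role described in the text.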
Poppe, L.J.; Ackerman, S.D.; McMullen, K.Y.; Schattgen, P.T.; Schaer, J.D.; Doran, E.F.
2008-01-01
This report releases echosounder data from the northern part of the National Oceanic and Atmospheric Administration (NOAA) hydrographic survey H11044 in Long Island Sound, off Milford, Connecticut. The data have been interpolated and regridded into a complete-coverage data set and image of the sea floor. The grid produced as a result of the interpolation is at 10-m resolution. These data extend an already published set of reprocessed bathymetric data from the southern part of survey H11044. In Long Island Sound, the U.S. Geological Survey, in cooperation with NOAA and the Connecticut Department of Environmental Protection, is producing detailed maps of the sea floor. Part of the current phase of research involves studies of sea-floor topography and its effect on the distributions of sedimentary environments and benthic habitats. This data set provides a more continuous perspective of the sea floor than was previously available. It helps to define topographic variability and benthic-habitat diversity for the area and improves our understanding of oceanographic processes controlling the distribution of sediments and benthic habitats. Inasmuch as precise information on environmental setting is important for selecting sampling sites and accurately interpreting point measurements, this data set can also serve as a base map for subsequent sedimentological, geochemical, and biological research.
Efficient grid-based techniques for density functional theory
NASA Astrophysics Data System (ADS)
Rodriguez-Hernandez, Juan Ignacio
Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry---and the topic of this dissertation---is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1,2,3,6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
Interpolate with DIVA and view the products in OceanBrowser : what's up ?
NASA Astrophysics Data System (ADS)
Watelet, Sylvain; Barth, Alexander; Beckers, Jean-Marie; Troupin, Charles
2017-04-01
The Data-Interpolating Variational Analysis (DIVA) software is a statistical tool designed to reconstruct a continuous field from discrete measurements. The method is based on the numerical implementation of the Variational Inverse Model (VIM), which minimizes a cost function so that the analyzed field fits the data as closely as possible without exhibiting unrealistically strong variations. The problem is solved efficiently using a finite-element method. This method, equivalent to Optimal Interpolation, is particularly suited to irregularly-spaced observations and produces outputs on a regular grid (2D, 3D or 4D). The results are stored in NetCDF files, the most widespread format in the earth sciences community. OceanBrowser is a web service that allows one to visualize gridded fields on-line. Within the SeaDataNet and EMODNET (Chemical lot) projects, several national ocean data centers have created gridded climatologies of different ocean properties using the data analysis software DIVA. In order to provide a common viewing service for those interpolated products, the GHER has developed OceanBrowser, which is based on open standards from the Open Geospatial Consortium (OGC), in particular Web Map Service (WMS) and Web Feature Service (WFS). These standards define a protocol for describing, requesting and querying two-dimensional maps at a given depth and time. DIVA and OceanBrowser are both software tools that are continuously upgraded and distributed for free through frequent version releases. The development is funded by the EMODnet and SeaDataNet projects and includes many discussions and much feedback from the user community. Here, we present two recent major upgrades. First, we have implemented a "customization" of DIVA analyses following the sea bottom, using the bottom depth gradient as a new source of information. The gentler the slope of the ocean bottom, the larger the correlation length.
Because this correlation length governs how far information propagates, it is harder to interpolate across bottom topographic "barriers" such as the continental slope, and easier to interpolate in the direction perpendicular to the slope. Although realistic for most applications, this behaviour can always be disabled by the user. Second, we have added combined products in OceanBrowser covering all European seas at once. Based on the analyses performed by the other EMODnet partners using DIVA on five zones (Atlantic, North Sea, Baltic Sea, Black Sea, Mediterranean Sea), we have computed a single European product for five variables: ammonium, chlorophyll-a, dissolved oxygen concentration, phosphate and silicate. At the boundaries, a smoothing filter was used to remove possible discrepancies between the regional analyses. Our European combined product is available for all seasons and several depths. This is a first step towards the use of a common reference field for all European seas when running DIVA.
NASA Astrophysics Data System (ADS)
Koner, Debasish; Barrios, Lizandra; González-Lezana, Tomás; Panda, Aditya N.
2016-01-01
Initial state selected dynamics of the Ne + NeH+(v0 = 0, j0 = 0) → NeH+ + Ne reaction is investigated by quantum and statistical quantum mechanical (SQM) methods on the ground electronic state. The three-body ab initio energies on a set of suitably chosen grid points have been computed at CCSD(T)/aug-cc-PVQZ level and analytically fitted. The fitting of the diatomic potentials, computed at the same level of theory, is performed by spline interpolation. A collinear [NeHNe]+ structure lying 0.72 eV below the Ne + NeH+ asymptote is found to be the most stable geometry for this system. Energies of low lying vibrational states have been computed for this stable complex. Reaction probabilities obtained from quantum calculations exhibit dense oscillatory structures, particularly in the low energy region and these get partially washed out in the integral cross section results. SQM predictions are devoid of oscillatory structures and remain close to 0.5 after the rise at the threshold thus giving a crude average description of the quantum probabilities. Statistical cross sections and rate constants are nevertheless in sufficiently good agreement with the quantum results to suggest an important role of a complex-forming dynamics for the title reaction.
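The spline fitting of the diatomic potentials mentioned above can be sketched with a hand-rolled natural cubic spline. The Morse parameters below are invented stand-ins for the ab initio points, not values from the paper.

```python
import numpy as np

# Hypothetical diatomic potential samples (a Morse curve stands in for the
# CCSD(T) points; De, a, re are illustrative, in arbitrary atomic-style units)
r = np.linspace(1.0, 6.0, 15)                      # bond-length grid
De, a, re = 0.2, 1.2, 2.1
V = De * (1.0 - np.exp(-a * (r - re))) ** 2 - De

def natural_cubic_spline(x, y):
    """Second derivatives M at the knots (natural BCs: M[0] = M[-1] = 0)."""
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    return np.linalg.solve(A, rhs)

def spline_eval(x, y, M, xq):
    """Evaluate the cubic spline with knot second derivatives M at xq."""
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]; t = xq - x[i]
    return (M[i] * (x[i + 1] - xq) ** 3 / (6 * h) + M[i + 1] * t ** 3 / (6 * h)
            + (y[i] / h - M[i] * h / 6) * (x[i + 1] - xq)
            + (y[i + 1] / h - M[i + 1] * h / 6) * t)

M = natural_cubic_spline(r, V)
print(spline_eval(r, V, M, 2.1))   # near the Morse minimum -De = -0.2
```

In practice a library spline (e.g. SciPy's CubicSpline) would be used; the point here is only that smooth interpolation of tabulated diatomic energies is a small, cheap step compared with fitting the three-body surface.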
Off disk-center potential field calculations using vector magnetograms
NASA Technical Reports Server (NTRS)
Venkatakrishnan, P.; Gary, G. Allen
1989-01-01
A potential field calculation for off disk-center vector magnetograms that uses all three components of the measured field is investigated. There is no need either to interpolate grid points between the image plane and the heliographic plane, or to extend or truncate the data to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise makes the extrapolation procedure less tolerant than a line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.
Validation and Improvement of SRTM Performance over Rugged Terrain
NASA Technical Reports Server (NTRS)
Zebker, Howard A.
2004-01-01
We have previously reported work related to basic technique development in phase unwrapping and generation of digital elevation models (DEMs). In the final year of this work we applied our techniques to the improvement of DEMs produced by SRTM. In particular, we have developed a rigorous mathematical algorithm and means to fill in missing data over rough terrain from other data sets. We illustrate this method by using a higher resolution, but globally less accurate, DEM produced by the TOPSAR airborne instrument over the Galapagos Islands to augment the SRTM data set in this area. We combine this data set with SRTM so that each set fills in holes left by the other imaging system. The infilling is done by first interpolating each data set using a prediction error filter that reproduces the same statistical characterization as exhibited by the entire data set within the interpolated region. After this procedure is implemented on each data set, the two are combined on a point-by-point basis with weights that reflect the accuracy of each data point in its original image. In areas that are better covered by SRTM, TOPSAR data are weighted down but still retain TOPSAR statistics. The reverse is true for regions better covered by TOPSAR. The resulting DEM passes statistical tests and appears quite plausible to the eye, but as this DEM is the best available for the region we cannot fully verify its accuracy. Spot checks with GPS points show that locally the technique results in a more comprehensive and accurate map than either data set alone.
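The point-by-point weighted combination step can be illustrated in miniature. This sketch omits the prediction-error-filter interpolation (a direct fallback to the other data set stands in for it) and uses entirely synthetic numbers; the array names are only reminders of the two instruments.

```python
import numpy as np

# Two synthetic DEMs of the same area: "srtm" is accurate but has a hole,
# "topsar" is noisier but complete (values and noise levels are invented)
rng = np.random.default_rng(1)
truth = np.fromfunction(
    lambda i, j: 100 + 5 * np.sin(i / 8.0) + 3 * np.cos(j / 11.0), (64, 64))
srtm   = truth + rng.normal(0, 0.5, truth.shape)
topsar = truth + rng.normal(0, 2.0, truth.shape)
srtm[20:30, 20:30] = np.nan                 # a data void (e.g. radar shadow)

# Inverse-variance weights where both sets are valid; inside either set's
# holes, fall back to the surviving data set
w_s, w_t = 1 / 0.5**2, 1 / 2.0**2
merged = np.where(np.isnan(srtm), topsar,
                  (w_s * srtm + w_t * topsar) / (w_s + w_t))

print(np.abs(merged - truth).mean())        # below either input's error level
```

The weighted average has lower variance than either input wherever both are valid, which is the motivation for combining rather than mosaicking the two DEMs.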
A model for the rapid assessment of the impact of aviation noise near airports.
Torija, Antonio J; Self, Rod H; Flindell, Ian H
2017-02-01
This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks with the Noise Radius determined from publicly available Noise-Power-Distance curves rather than the computationally intensive multiple point-to-point grid calculation with subsequent ISO-contour interpolation methods adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine-installation-effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow for more complex assessment to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
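The noise-cylinder idea can be sketched as follows: interpolate a Noise-Power-Distance table to find the radius at which a given level is reached, then take the contour area as the cylinder's ground footprint. The NPD values below are invented for illustration and are not from any real aircraft or from RANE itself.

```python
import numpy as np

# Hypothetical Noise-Power-Distance data for one thrust setting:
# sound exposure level (dB) at a set of reference slant distances (ft)
dist = np.array([200., 400., 630., 1000., 2000., 4000., 6300., 10000.])
sel  = np.array([94.,  89.,  85.,  81.,   74.,   67.,   62.,   57.])

def noise_radius(level_db):
    """Distance at which the NPD curve predicts the given level, using
    interpolation that is linear in log-distance (a common NPD convention).
    np.interp needs increasing abscissae, hence the reversed arrays."""
    return 10 ** np.interp(level_db, sel[::-1], np.log10(dist)[::-1])

def contour_area(level_db, track_len):
    """Footprint of a noise cylinder around a straight track: a rectangle
    of width 2r plus two半 end half-disks."""
    r = noise_radius(level_db)
    return 2.0 * r * track_len + np.pi * r ** 2

print(noise_radius(81.0))   # a tabulated point: 1000 ft
```

A change in contour area between two scenarios then reduces to a change in the noise radius, which is the kind of rapid estimate the model targets.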
NASA Astrophysics Data System (ADS)
Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc
2018-05-01
The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
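The second-order polynomial step of the Blakely and Simpson (1986) test can be written down directly: fit a parabola through the trio of points and take its vertex. This is a generic sketch of that interpolation, not the authors' code.

```python
import numpy as np

def parabolic_peak(a, b, c, dx=1.0):
    """Location and magnitude of the maximum of the parabola through
    (-dx, a), (0, b), (dx, c); valid when b exceeds both neighbors,
    so the denominator a - 2b + c is negative."""
    denom = a - 2.0 * b + c
    x_max = 0.5 * dx * (a - c) / denom          # offset from the middle point
    g_max = b - (a - c) ** 2 / (8.0 * denom)    # interpolated peak magnitude
    return x_max, g_max

# Exact-recovery check on a known parabola g(x) = 5 - (x - 0.3)**2
g = lambda x: 5.0 - (x - 0.3) ** 2
x, v = parabolic_peak(g(-1.0), g(0.0), g(1.0))
print(x, v)   # ≈ 0.3, 5.0
```

In the full method this fit is applied in each of the four directions of the 3 × 3 window, subject to the middle point being greater than its two neighbors in that direction; the paper's contribution is an additional condition admitting more such maxima.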
High Maneuverability Airframe: Investigation of Fin and Canard Sizing for Optimum Maneuverability
2014-09-01
overset grids (unified grid); 5) total variation diminishing discretization based on a new multidimensional interpolation framework; 6) Riemann solvers to ... Section 3.1.1 (Solver) describes the methodology used for the simulations: the double-precision solver of a commercially available code, CFD++ v12.1.1.
Computing Aerodynamic Performance of a 2D Iced Airfoil: Blocking Topology and Grid Generation
NASA Technical Reports Server (NTRS)
Chi, X.; Zhu, B.; Shih, T. I.-P.; Slater, J. W.; Addy, H. E.; Choo, Yung K.; Lee, Chi-Ming (Technical Monitor)
2002-01-01
The ice accreted on airfoils can have enormously complicated shapes with multiple protruded horns and feathers. In this paper, several blocking topologies are proposed and evaluated on their ability to produce high-quality structured multi-block grid systems. A transition-layer grid is introduced to ensure that jaggedness in the ice-surface geometry does not propagate into the domain. This is important for grid-generation methods based on hyperbolic PDEs (partial differential equations) and algebraic transfinite interpolation. A 'thick' wrap-around grid is introduced to ensure that grid lines clustered next to solid walls do not propagate as streaks of tightly packed grid lines into the interior of the domain along block boundaries. For ice shapes that are not too complicated, a method is presented for generating high-quality single-block grids. To demonstrate the usefulness of the methods developed, grids and CFD solutions were generated for two iced airfoils: the NLF0414 airfoil with and without the 623-ice shape and the B575/767 airfoil with and without the 145m-ice shape. To validate the computations, the computed lift coefficients as a function of angle of attack were compared with available experimental data. The ice shapes and the blocking topologies were prepared by NASA Glenn's SmaggIce software. The grid systems were generated by using a four-boundary method based on Hermite interpolation with controls on clustering, orthogonality next to walls, and C continuity across block boundaries. The flow was modeled by the ensemble-averaged compressible Navier-Stokes equations, closed by the shear-stress transport turbulence model in which the integration is to the wall. All solutions were generated by using the NPARC WIND code.
Performance Trials of an Integrated Loran/GPS/IMU Navigation System, Part 1
2005-01-27
differences are used to correct the grid values in the absence of a local ASF monitor station. Performance of the receiver using different ASF grids and interpolation techniques and corrected using the ... The United States is served by the North American Loran-C system, made up of 29 stations organized into 10 chains (see Figure 1). Loran coverage is ...
NASA Technical Reports Server (NTRS)
Bao, Han P.
1989-01-01
The CAD/CAM of custom shoes is discussed. The solid object for machining is represented by a wireframe model with its nodes or vertices specified systematically in a grid pattern covering its entire length (point-to-point configuration). Two sets of data from CENCIT and CYBERWARE were used for machining purposes. It was found that the indexing technique (turning the stock by a small angle then moving the tool on a longitudinal path along the foot) yields the best result in terms of ease of programming, savings in wear and tear of the machine and cutting tools, and resolution of fine surface details. The work done using the LASTMOD last design system results in a shoe last specified by a number of congruent surface patches of different sizes. This data format was converted into a form amenable to the machine tool. It involves a series of sorting algorithms and interpolation algorithms to provide the grid pattern that the machine tool needs as was the case in the point to point configuration discussed above. This report also contains an in-depth treatment of the design and production technique of an integrated sole to complement the task of design and manufacture of the shoe last. Clinical data and essential production parameters are discussed. Examples of soles made through this process are given.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
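The core CGP step, solving the Poisson equation on a coarsened grid and interpolating the result back to the fine grid, can be sketched on a 1D model problem. This is a deliberately minimal illustration; in the paper the step sits inside a full incompressible Navier-Stokes time stepper.

```python
import numpy as np

def solve_poisson_1d(f, h):
    """Direct solve of -u'' = f on interior nodes, homogeneous Dirichlet BCs."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

nf = 127                                       # fine-grid interior nodes, h = 1/128
xf = np.linspace(0.0, 1.0, nf + 2)[1:-1]
rhs = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)

# CGP idea: do the expensive Poisson solve on a grid coarsened by 2 ...
xc = xf[1::2]                                  # coincides with a spacing-1/64 grid
uc = solve_poisson_1d(rhs(xc), 1.0 / 64)

# ... then interpolate back to the fine grid (boundary values pinned to zero)
u_cgp = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                  np.concatenate(([0.0], uc, [0.0])))

err = np.max(np.abs(u_cgp - np.sin(np.pi * xf)))
print(err)    # small: coarse-grid discretization plus interpolation error
```

The coarse solve involves roughly an eighth of the work of a fine 1D direct solve (and far less in 2D/3D), while the interpolated field remains close to the fine-grid solution, which is the trade-off the CGP method exploits.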
Impact of GPM Rainrate Data Assimilation on Simulation of Hurricane Harvey (2017)
NASA Technical Reports Server (NTRS)
Li, Xuanli; Srikishen, Jayanthi; Zavodsky, Bradley; Mecikalski, John
2018-01-01
Built upon the Tropical Rainfall Measuring Mission (TRMM) legacy, GPM provides next-generation global observation of rain and snow. GPM was launched in February 2014 with the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI) onboard. It has broad global coverage, approximately 70deg S-70deg N, with a swath of 245/125 km for the Ka (35.5 GHz)/Ku (13.6 GHz) band radar and 850 km for the 13-channel GMI, and features improved retrievals for heavy, moderate, and light rain and snowfall. The objectives of this work are to develop a methodology to assimilate GPM surface precipitation data with the Grid-point Statistical Interpolation (GSI) data assimilation system and the WRF ARW model, and to investigate the potential value of utilizing GPM observations in NWP for an operational environment. The GPM rain rate data have been successfully assimilated using the GSI rain data assimilation package. Impacts of the rain rate data are found in the temperature and moisture fields of the initial conditions. Assimilation of either the GPM IMERG or GPROF rain product produces significant improvement in precipitation amount and structure for the Hurricane Harvey (2017) forecast. Since IMERG data are available half-hourly, further forecast improvement is expected with continuous assimilation of IMERG data.
An Environmental Data Set for Vector-Borne Disease Modeling and Epidemiology
Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip
2014-01-01
Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden, but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas, and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis, thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954
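The harmonic gap-filling idea for the land temperature time series can be sketched as a least-squares fit of annual and semi-annual harmonics to the valid samples. The data here are synthetic; the paper's MODIS-based procedure is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(365, dtype=float)                       # day of year
truth = 300 + 10 * np.sin(2 * np.pi * t / 365) + 3 * np.cos(4 * np.pi * t / 365)
obs = truth + rng.normal(0, 0.5, t.size)
obs[100:130] = np.nan                                 # cloud-contaminated gap

def harmonics(t):
    """Design matrix: mean, annual, and semi-annual harmonics."""
    w = 2 * np.pi * t / 365.0
    return np.column_stack([np.ones_like(t), np.sin(w), np.cos(w),
                            np.sin(2 * w), np.cos(2 * w)])

# Fit the harmonic model to the valid samples only, then fill the gap
valid = ~np.isnan(obs)
coef, *_ = np.linalg.lstsq(harmonics(t[valid]), obs[valid], rcond=None)
filled = np.where(valid, obs, harmonics(t) @ coef)

print(np.abs(filled[100:130] - truth[100:130]).max())  # gap error stays small
```

Because the seasonal cycle dominates land temperature, a few harmonics recover the gap well even when a month of data is missing, which is what makes this approach competitive with the coarser aggregated products.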
NASA Astrophysics Data System (ADS)
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2017-12-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between the gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation results, while the NN method gave the worst. In terms of the impact on hydrological prediction, the IDW method led to streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations.
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and at four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While the IDW method exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.
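Of the four interpolation schemes compared, IDW is simple enough to state in a few lines. This generic sketch (hypothetical gauges and values) shows both the inverse-distance weighting and the exact-interpolation property at a gauge location.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting; an exact interpolator at the gauges."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    z = np.empty(len(xy_query))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():                      # query coincides with a gauge
            z[i] = z_obs[hit][0]
        else:
            w = 1.0 / di ** power
            z[i] = w @ z_obs / w.sum()
    return z

# Three hypothetical gauges (km coordinates) and two query points
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z  = np.array([5.0, 7.0, 9.0])
q  = np.array([[0.0, 0.0], [5.0, 5.0]])
out = idw(xy, z, q)
print(out)   # → [5. 7.]  (exact at the gauge; equidistant point gives the mean)
```

OK differs from IDW in that its weights come from a fitted variogram rather than from distance alone, which is why it can also supply the interpolation variance discussed in the abstract.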
NASA Astrophysics Data System (ADS)
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2018-01-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and interpolation procedures are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while NN led to the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. 
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.
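For reference, the inverse distance weighting scheme compared above can be sketched in a few lines. The power parameter (here p = 2) and the use of all gauges rather than a fixed search neighborhood are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: weight each gauge by 1 / d^p."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.hypot(*(xy_known - q).T)     # distances to every gauge
        if np.any(d == 0):                  # query coincides with a gauge
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d**power
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```

At a point equidistant from two gauges the estimate is simply their average, which illustrates the method's smoothing behavior between stations.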
Viscous Design of TCA Configuration
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.
1999-01-01
The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.
Some analysis on the diurnal variation of rainfall over the Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gill, T.; Perng, S.; Hughes, A.
1981-01-01
Data collected from the GARP Atlantic Tropical Experiment (GATE) were examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid cell covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using C-band radars. Two types of analyses were performed on the data: analysis of diurnal variation was done on each of the grid points based on the rainfall averages at noon and at midnight, and time series analysis on selected grid points based on the hourly averages of rainfall. Since there is no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. The Kolmogorov-Smirnov test was used to test whether the rainfalls at noon and at midnight have the same statistical distribution. The Wilcoxon signed-rank test was used to test whether the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done on each of the 10,000 grid points at which data are available.
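The noon-versus-midnight comparison rests on the two-sample Kolmogorov-Smirnov statistic, the maximum gap between the two empirical CDFs. A minimal NumPy sketch (the critical-value lookup and the Wilcoxon test are omitted):

```python
import numpy as np

def ks_two_sample_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max |F_x(t) - F_y(t)|."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    grid = np.concatenate([x, y])           # evaluate both ECDFs at all jumps
    fx = np.searchsorted(x, grid, side='right') / len(x)
    fy = np.searchsorted(y, grid, side='right') / len(y)
    return np.max(np.abs(fx - fy))
```

In practice the `scipy.stats` implementations (`ks_2samp`, `wilcoxon`) provide the same statistics together with p-values.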
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling-curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
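The reordering above can use any space-filling curve; a Morton (Z-order) key, one common choice for Cartesian meshes, is computed by interleaving the bits of the cell indices. This is a sketch of the idea, not the authors' implementation:

```python
def part1by1(n: int) -> int:
    """Spread the bits of a 16-bit integer: b3 b2 b1 b0 -> b3 0 b2 0 b1 0 b0."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(i: int, j: int) -> int:
    """Interleave cell indices (i, j) into a Z-order (Morton) key."""
    return part1by1(i) | (part1by1(j) << 1)

# Sorting cells by their Morton key yields the SFC ordering; a contiguous
# slice of the sorted list is then a spatially compact partition.
cells = [(i, j) for i in range(4) for j in range(4)]
ordered = sorted(cells, key=lambda c: morton2d(*c))
```

Because nearby keys correspond to nearby cells, splitting the sorted list into equal-length chunks gives the single-pass Θ(N) partitioner described in the abstract.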
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced-order models (ROMs) for aeroservoelasticity (ASE) analysis and control synthesis in a broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve a locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that an X-56A ROM with less than one-seventh the number of states of the original model is able to accurately predict the system response across all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the parameter space where the flight dynamics vary dramatically, enhancing interpolation accuracy without over-burdening downstream controller synthesis and onboard memory. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
The Use of Geostatistics in the Study of Floral Phenology of Vulpia geniculata (L.) Link
León Ruiz, Eduardo J.; García Mozo, Herminia; Domínguez Vilches, Eugenio; Galán, Carmen
2012-01-01
Traditionally, phenology studies have focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and Geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics are a family of statistics that describe correlations through space/time and they can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon Geostatistics and GIS mapping have enabled the construction of spatial models that reflect phenological evolution of Vulpia geniculata (L.) Link throughout the study area during the sampling season. Ten sampling points, scattered throughout the city and low mountains in the “Sierra de Córdoba”, were chosen to carry out the weekly phenological monitoring during flowering season. The phenological data were interpolated by applying the traditional geostatistical method of Kriging, which was used to elaborate weekly estimations of V. geniculata phenology in unsampled areas. Finally, the application of Geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps. PMID:22629169
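Kriging as applied here begins by estimating an experimental semivariogram from the sampled points, to which a model (spherical, exponential, etc.) is then fitted. A simplified isotropic sketch, in which the lag set and tolerance are illustrative choices:

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs with |d_ij - h| < tol."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # Pairwise distances and half squared differences
    d = np.hypot(coords[:, None, 0] - coords[None, :, 0],
                 coords[:, None, 1] - coords[None, :, 1])
    sq = 0.5 * (values[:, None] - values[None, :])**2
    gam = []
    for h in lags:
        mask = (np.abs(d - h) < tol) & (d > 0)   # pairs in this lag bin
        gam.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gam)
```

With only ten sampling points, as in this study, each lag bin holds few pairs, which is why the fitted variogram model (rather than the raw estimate) drives the Kriging weights.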
Tracking of Ball and Players in Beach Volleyball Videos
Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern
2014-01-01
This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points. PMID:25426936
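Estimating a ball trajectory from parabolic flight equations reduces to a least-squares fit of y(t) = a·t² + b·t + c to the detected ball candidates; intersecting consecutive fitted parabolas then yields contact time points. A sketch on synthetic data (the coefficients are illustrative, not values from the paper):

```python
import numpy as np

def fit_parabola(t, y):
    """Least-squares fit of y(t) = a*t^2 + b*t + c (parabolic flight)."""
    A = np.vstack([t**2, t, np.ones_like(t)]).T   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

t = np.linspace(0.0, 1.0, 20)
y = -4.9 * t**2 + 5.0 * t + 1.0                    # synthetic vertical ball track
a, b, c = fit_parabola(t, y)
```

On noisy detections the same fit acts as both an interpolator across missed frames and an outlier filter for false ball candidates.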
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia
2018-02-01
Stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models for separating calculations in accordance with physical processes. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space. Here, coordination of continuity conditions and energy conservation is carried out. Then, at the subsequent corpuscular stage of the computational experiment, kinematic parameters of mass centers and surface stresses at the boundaries of the grid cells are used in modeling of free unsteady motions of volume cells that are considered as independent particles. These particles can be subject to vortex and discontinuous interactions when restructuring of free boundaries and internal rheological states takes place. Transition from one stage to another is provided by interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws for modeling the mechanics of continuous media, and provides control of the rheological state and conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media
2010-08-01
...applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) ... represent the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic ... of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that global polynomial interpolation cannot resolve lo...
New gridded database of clear-sky solar radiation derived from ground-based observations over Europe
NASA Astrophysics Data System (ADS)
Bartok, Blanka; Wild, Martin; Sanchez-Lorenzo, Arturo; Hakuba, Maria Z.
2017-04-01
Since aerosols modify the entire energy balance of the climate system through different processes, assessments of multiannual aerosol variability are highly required by the climate modelling community. Because of the scarcity of long-term direct aerosol measurements, the retrieval of aerosol information from other types of observations or satellite measurements is very relevant. One approach frequently used in the literature is analysis of clear-sky solar radiation, which offers a better overview of changes in aerosol content. In this study, two empirical methods are first elaborated in order to separate clear-sky situations from observed values of surface solar radiation available at the World Radiation Data Center (WRDC), St. Petersburg. The daily data have been checked for temporal homogeneity by applying the MASH method (Szentimrey, 2003). In the first approach, clear-sky situations are detected based on the clearness index, namely the ratio of the surface solar radiation to the extraterrestrial solar irradiation. In the second approach, the observed values of surface solar radiation are compared to the climatology of clear-sky surface solar radiation calculated by the MAGIC radiation code (Mueller et al. 2009). In both approaches the clear-sky radiation values depend strongly on the applied thresholds. In order to eliminate this methodological error, a verification of the clear-sky detection is envisaged through a comparison with the values obtained by a high-time-resolution clear-sky detection and interpolation algorithm (Long and Ackerman, 2000), making use of the high-quality data from the Baseline Surface Radiation Network (BSRN). As a result, clear-sky data series are obtained for 118 European meteorological stations. 
Next, a first attempt was made to interpolate the point-wise clear-sky radiation data by applying the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis) method for the spatial interpolation of surface meteorological elements, developed at the Hungarian Meteorological Service (Szentimrey 2007). In this way a new gridded database of clear-sky solar radiation is created, suitable for further investigations of the role of aerosols in the energy budget, and also for validation of climate model outputs. References 1. Long CN, Ackerman TP. 2000. Identification of clear skies from broadband pyranometer measurements and calculation of downwelling shortwave cloud effects, J. Geophys. Res., 105(D12), 15609-15626, doi:10.1029/2000JD900077. 2. Mueller R, Matsoukas C, Gratzki A, Behr H, Hollmann R. 2009. The CM-SAF operational scheme for the satellite based retrieval of solar surface irradiance - a LUT based eigenvector hybrid approach, Remote Sensing of Environment, 113 (5), 1012-1024, doi:10.1016/j.rse.2009.01.012. 3. Szentimrey T. 2014. Multiple Analysis of Series for Homogenization (MASHv3.03), Hungarian Meteorological Service, https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/ 4. Szentimrey T, Bihari Z. 2014. Meteorological Interpolation based on Surface Homogenized Data Basis (MISHv1.03), https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/
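The first approach's clearness index can be sketched as follows. The solar-constant value, the simple Sun-Earth distance correction, and the 0.7 threshold are illustrative assumptions, not the thresholds tuned in the study:

```python
import numpy as np

SOLAR_CONSTANT = 1361.0  # W m^-2 (illustrative value)

def clearness_index(ghi, cos_zenith, day_of_year):
    """k_t = surface solar radiation / extraterrestrial horizontal irradiance."""
    # Simple cosine form of the Sun-Earth distance correction
    e0 = 1.0 + 0.033 * np.cos(2.0 * np.pi * day_of_year / 365.0)
    etr = SOLAR_CONSTANT * e0 * np.maximum(cos_zenith, 0.0)
    return np.where(etr > 0, ghi / etr, 0.0)

def is_clear(kt, threshold=0.7):
    """Flag clear-sky samples; the threshold is a tunable assumption."""
    return kt >= threshold
```

As the abstract notes, results depend strongly on this threshold, which is exactly why the BSRN-based verification against the Long and Ackerman (2000) algorithm is needed.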
High-resolution daily gridded data sets of air temperature and wind speed for Europe
NASA Astrophysics Data System (ADS)
Brinckmann, Sven; Krähenmann, Stefan; Bissolli, Peter
2016-10-01
New high-resolution data sets for near-surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are SYNOP observations, partly supplemented by station data from the ECA&D data set (http://www.ecad.eu). These data are quality tested to eliminate erroneous data. By spatial interpolation of these station observations, grid data at a resolution of 0.044° (≈ 5 km) are derived.
Chang, Howard H.; Hu, Xuefei; Liu, Yang
2014-01-01
There has been a growing interest in the use of satellite-retrieved aerosol optical depth (AOD) to estimate ambient concentrations of PM2.5 (particulate matter <2.5 μm in aerodynamic diameter). With their broad spatial coverage, satellite data can increase the spatial–temporal availability of air quality data beyond ground monitoring measurements and potentially improve exposure assessment for population-based health studies. This paper describes a statistical downscaling approach that brings together (1) recent advances in PM2.5 land use regression models utilizing AOD and (2) statistical data fusion techniques for combining air quality data sets that have different spatial resolutions. Statistical downscaling assumes the associations between AOD and PM2.5 concentrations to be spatially and temporally dependent and offers two key advantages. First, it enables us to use gridded AOD data to predict PM2.5 concentrations at spatial point locations. Second, the unified hierarchical framework provides straightforward uncertainty quantification in the predicted PM2.5 concentrations. The proposed methodology is applied to a data set of daily AOD values in southeastern United States during the period 2003–2005. Via cross-validation experiments, our model had an out-of-sample prediction R2 of 0.78 and a root mean-squared error (RMSE) of 3.61 μg/m3 between observed and predicted daily PM2.5 concentrations. This corresponds to a 10% decrease in RMSE compared with the same land use regression model without AOD as a predictor. Prediction performances of spatial–temporal interpolations to locations and on days without monitoring PM2.5 measurements were also examined. PMID:24368510
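The cross-validation metrics quoted above (out-of-sample prediction R² and RMSE between observed and predicted daily PM2.5) follow the standard definitions:

```python
import numpy as np

def r2_and_rmse(observed, predicted):
    """Out-of-sample prediction R^2 and root mean-squared error."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid**2))
    ss_res = np.sum(resid**2)                    # residual sum of squares
    ss_tot = np.sum((obs - obs.mean())**2)       # total sum of squares
    return 1.0 - ss_res / ss_tot, rmse
```

The reported 10% RMSE decrease is then just the ratio of the RMSEs from the models with and without AOD as a predictor.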
Pearce, Mark A
2015-08-01
EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
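A much-simplified sketch of this fill strategy on a 2-D grid. EBSDinterp itself operates on full EBSD orientation data with MATLAB parallelization; the 4-neighborhood, mean-value fill, and single band-contrast threshold here are illustrative simplifications:

```python
import numpy as np

def neighbor_count_fill(grid, quality, bc_threshold):
    """Fill NaN points, most-indexed-neighbors first, where quality >= threshold.

    Points below the band-contrast threshold are never filled, so low-quality
    regions cannot be overgrown by adjacent grains.
    """
    g = grid.astype(float).copy()
    for required in (4, 3, 2, 1):      # relax the neighbor requirement each round
        changed = True
        while changed:
            changed = False
            for i, j in np.argwhere(np.isnan(g) & (quality >= bc_threshold)):
                vals = [g[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= x < g.shape[0] and 0 <= y < g.shape[1]
                        and not np.isnan(g[x, y])]
                if len(vals) >= required:
                    g[i, j] = np.mean(vals)   # fill from indexed neighbors
                    changed = True
    return g
```

Filling the best-connected points first, as in the real algorithm, keeps interpolation inside grain interiors before neighboring grains are allowed to grow together.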
NASA Astrophysics Data System (ADS)
Li, Na; Tang, Guoqiang; Zhao, Ping; Hong, Yang; Gou, Yabin; Yang, Kai
2017-01-01
This study aims to statistically and hydrologically assess the hydrological utility of the latest Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) multi-satellite constellation over the mid-latitude Ganjiang River basin in China. The investigations are conducted at hourly and 0.1° resolutions throughout the rainy season from March 12 to September 30, 2014. Two high-quality quantitative precipitation estimation (QPE) datasets, i.e., a gauge-corrected radar mosaic QPE product (RQPE) and a highly dense network of 1200 rain gauges, are used as the reference. For the implementation of the study, first, we compare the IMERG product and RQPE with rain gauge-interpolated data, respectively. The results indicate that both remote sensing products can estimate precipitation fairly well over the basin, while RQPE significantly outperforms the IMERG product in almost all the studied cases. The correlation coefficients of RQPE (CC = 0.98 and CC = 0.67) are much higher than those of the IMERG product (CC = 0.80 and CC = 0.33) at basin and grid scales, respectively. Then, the hydrological assessment is conducted with the Coupled Routing and Excess Storage (CREST) model under multiple parameterization scenarios, in which the model is calibrated using the rain gauge-interpolated data, RQPE, and IMERG products respectively. During the calibration period (from March 12 to May 31), the simulated streamflow based on rain gauge-interpolated data shows the highest Nash-Sutcliffe coefficient of efficiency (NSCE) value (0.92), closely followed by RQPE (NSCE = 0.84), while the IMERG product performs only marginally acceptably (NSCE = 0.56). During the validation period (from June 1 to September 30), the three rainfall datasets are used to force the CREST model based on all three calibrated parameter sets (i.e., nine combinations in total). 
RQPE outperforms the rain gauge-interpolated data and the IMERG product in all validation scenarios, possibly due to its advantageous capability to capture the high space-time variability of precipitation systems in the humid climate during the validation period. Overall, RQPE and rain gauge-interpolated data exhibit better performance than the newly available IMERG product, and RQPE is better than rain gauge-interpolated data to some extent due to the combination of both radar and rain gauge observations. The IMERG-forced hourly CREST hydrologic model, based on the gauge- and RQPE-calibrated parameters, performs well over the Ganjiang River basin. Future studies should promote the hydrological application of RQPE datasets at global and local scales, and continuously improve the IMERG algorithms.
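The NSCE scores reported above follow the standard Nash-Sutcliffe definition, which compares simulated streamflow against the observed mean as a baseline:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSCE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)
```

A score of 1 is a perfect simulation, 0 means the simulation is no better than predicting the observed mean, and negative values are worse than that baseline.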
Assessment of WENO-extended two-fluid modelling in compressible multiphase flows
NASA Astrophysics Data System (ADS)
Kitamura, Keiichi; Nonomura, Taku
2017-03-01
The two-fluid modelling based on an advection-upstream-splitting-method (AUSM)-family numerical flux function, AUSM+-up, following the work by Chang and Liou [Journal of Computational Physics 2007;225: 840-873], has been successfully extended to fifth order by weighted-essentially-non-oscillatory (WENO) schemes. Its performance is then surveyed in several numerical tests. The results show the desired performance in one-dimensional benchmark test problems: without relying upon an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart did. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a huge void fraction realised a stable computation.
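For reference, the classic fifth-order Jiang-Shu WENO reconstruction at a cell face, which the paper applies to the interpolated variables of the two-fluid AUSM+-up scheme (the ε value is a common default, not necessarily the paper's):

```python
import numpy as np

def weno5_reconstruct(v):
    """Fifth-order WENO reconstruction of v at i+1/2 from the left-biased
    stencil (v[i-2], v[i-1], v[i], v[i+1], v[i+2]), Jiang-Shu weights."""
    vm2, vm1, v0, vp1, vp2 = v
    # Three third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators: large near a discontinuity in that sub-stencil
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    eps = 1e-6
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()                       # nonlinear weights
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

In smooth regions the weights approach the optimal (0.1, 0.6, 0.3) and the scheme is fifth-order; near a shock the weight of the offending sub-stencil collapses, which is what lets the interface stay sharp without anti-diffusion.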
A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms, so as to distinguish terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources due to the high density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantity.
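The Map/Reduce decomposition for DEM gridding can be sketched in pure Python. The actual implementation runs on Hadoop over HDFS, and mean-elevation aggregation here stands in for whichever interpolator the pipeline uses:

```python
from collections import defaultdict

def map_phase(points, cell_size):
    """Map: emit (grid cell key, elevation) for every LiDAR point."""
    for x, y, z in points:
        yield (int(x // cell_size), int(y // cell_size)), z

def reduce_phase(mapped):
    """Reduce: aggregate elevations per cell into one DEM value."""
    cells = defaultdict(list)
    for key, z in mapped:
        cells[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in cells.items()}

points = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.5, 20.0)]
dem = reduce_phase(map_phase(points, cell_size=1.0))
```

Because points map to cell keys independently, the map phase parallelizes trivially across HDFS blocks, which is the property the paper exploits for vast point sets.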
NASA Astrophysics Data System (ADS)
Strong, Courtenay; Khatri, Krishna B.; Kochanski, Adam K.; Lewis, Clayton S.; Allen, L. Niel
2017-05-01
The main objective of this study was to investigate whether dynamically downscaled high-resolution (4-km) climate data from the Weather Research and Forecasting (WRF) model provide physically meaningful additional information for reference evapotranspiration (E) calculation compared to the recently published GridET framework, which uses interpolation from coarser-scale simulations run at 32-km resolution. The analysis focuses on the complex terrain of Utah in the western United States for the years 1985-2010, and comparisons were made statewide with supplemental analyses specifically for regions with irrigated agriculture. E was calculated from hourly data using the standardized equation and procedures proposed by the American Society of Civil Engineers, and climate inputs from WRF and GridET were debiased relative to the same set of observations. For annual mean values, E from WRF (EW) and E from GridET (EG) both agreed well with E derived from observations (r² = 0.95, bias < 2 mm). Domain-wide, EW and EG were well correlated spatially (r² = 0.89); however, local differences ΔE = EW − EG were as large as +439 mm year⁻¹ (+26%) in some locations, and ΔE averaged +36 mm year⁻¹. After linearly removing the effects of contrasts in solar radiation and wind speed, which are characteristically less reliable under downscaling in complex terrain, approximately half the residual variance was accounted for by contrasts in temperature and humidity between GridET and WRF. These contrasts stemmed from GridET interpolating with an assumed lapse rate of Γ = 6.5 K km⁻¹, whereas WRF produced a thermodynamically driven lapse rate closer to 5 K km⁻¹, as observed in mountainous terrain. The primary conclusions are that observed lapse rates in complex terrain differ markedly from the commonly assumed Γ = 6.5 K km⁻¹, that these lapse rates can be realistically resolved via dynamical downscaling, and that use of a constant Γ produces differences in E of order 10² mm year⁻¹.
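The lapse-rate issue can be made concrete with a short sketch: interpolating temperature across a 1 km elevation difference with the commonly assumed Γ = 6.5 K km⁻¹ rather than a ~5 K km⁻¹ observed rate shifts the result by 1.5 K. The helper function and the numbers are illustrative, not taken from GridET or WRF.

```python
def lapse_adjust(t_ref_c, z_ref_m, z_target_m, gamma_k_per_km=6.5):
    """Shift a reference temperature (deg C) to a target elevation using
    a constant lapse rate gamma in K per km (illustrative helper)."""
    return t_ref_c - gamma_k_per_km * (z_target_m - z_ref_m) / 1000.0

# a station at 1000 m reads 20 C; estimate 2000 m with two lapse rates
t_assumed = lapse_adjust(20.0, 1000.0, 2000.0, 6.5)        # 13.5 C
t_observed_rate = lapse_adjust(20.0, 1000.0, 2000.0, 5.0)  # 15.0 C
```

Because E depends nonlinearly on temperature and humidity, a systematic 1.5 K offset over large high-elevation areas is enough to accumulate the order-10² mm year⁻¹ differences the abstract reports.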
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. In practice, however, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling.
The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
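A toy version of the two ingredients, with an assumed one-dimensional forward model and a Gaussian likelihood, might look like the sketch below. A polynomial interpolant on Chebyshev nodes stands in for sparse grid interpolation (the two only differ in higher dimensions), and a base-2 Halton sequence stands in for the quasi-Monte Carlo design; the model, observation, and noise level are all made up.

```python
import numpy as np

def halton(n, base=2):
    """1-D Halton quasi-random sequence on (0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def forward(theta):
    """Stand-in for an expensive forward model."""
    return np.exp(-theta) * np.sin(theta)

# 1) surrogate: interpolating polynomial on 9 Chebyshev nodes in [0, 2]
nodes = 1.0 + np.cos(np.pi * (2 * np.arange(9) + 1) / 18)
coeff = np.polyfit(nodes, forward(nodes), 8)
surrogate = lambda t: np.polyval(coeff, t)

# 2) quasi-Monte Carlo samples of the parameter space instead of MCMC
theta = 2.0 * halton(512)                      # samples in [0, 2)
obs, sigma = 0.2, 0.05
post = np.exp(-0.5 * ((surrogate(theta) - obs) / sigma) ** 2)  # unnormalized
mean_pred = np.sum(surrogate(theta) * post) / np.sum(post)
```

Every posterior evaluation above costs one polynomial evaluation rather than one forward-model run, which is the source of the speed-up the abstract describes.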
A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
2000-01-01
Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with a droplet mass loading of 0.2. The gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient, and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.
Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh, together with calculations of separated three-dimensional flows, is presented. It is shown that the velocity-pressure decoupling, which occurs when various pressure correction algorithms are used on pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity-pressure decoupling mechanism by itself and yields accurate numerical results. The example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a square duct with a 90-degree bend. For the lid-driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third-order-accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid-independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.
Teresa E. Jordan
2015-11-15
This collection of files is part of a larger dataset uploaded in support of Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (GPFA-AB, DOE Project DE-EE0006726). Phase 1 of the GPFA-AB project identified potential Geothermal Play Fairways within the Appalachian basin of Pennsylvania, West Virginia and New York. This was accomplished through analysis of 4 key criteria or 'risks': thermal quality, natural reservoir productivity, risk of seismicity, and heat utilization. Each of these analyses represents a distinct project task, with the fifth task encompassing combination of the 4 risk factors. Supporting data for all five tasks have been uploaded into the Geothermal Data Repository node of the National Geothermal Data System (NGDS). This submission comprises the data for Thermal Quality Analysis (project task 1) and includes all of the necessary shapefiles, rasters, datasets, code, and references to code repositories that were used to create the thermal resource and risk factor maps as part of the GPFA-AB project. The identified Geothermal Play Fairways are also provided with the larger dataset. Figures (.png) are provided as examples of the shapefiles and rasters. The regional standardized 1 square km grid used in the project is also provided as points (cell centers), polygons, and as a raster. Two ArcGIS toolboxes are available: 1) RegionalGridModels.tbx for creating resource and risk factor maps on the standardized grid, and 2) ThermalRiskFactorModels.tbx for use in making the thermal resource maps and cross sections. These toolboxes contain "item description" documentation for each model within the toolbox, and for the toolbox itself.
This submission also contains three R scripts: 1) AddNewSeisFields.R to add seismic risk data to attribute tables of seismic risk, 2) StratifiedKrigingInterpolation.R for the interpolations used in the thermal resource analysis, and 3) LeaveOneOutCrossValidation.R for the cross validations used in the thermal interpolations. Some file descriptions make reference to various 'memos'. These are contained within the final report submitted October 16, 2015. Each zipped file in the submission contains an 'about' document describing the full Thermal Quality Analysis content available, along with key sources, authors, citation, use guidelines, and assumptions, with the specific file(s) contained within the .zip file highlighted.
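The leave-one-out idea behind a script like LeaveOneOutCrossValidation.R can be sketched in a few lines: hold out each station in turn, predict it from the rest, and summarize the errors. The sketch below validates a simple inverse-distance interpolator rather than the project's stratified kriging, and the coordinates and values are made up.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at one query point."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):
        return z_known[np.argmin(d)]   # query coincides with a station
    w = 1.0 / d ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_cv_rmse(xy, z):
    """Leave-one-out cross-validation: predict each station from all
    the others and report the RMSE of the predictions."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rmse = loo_cv_rmse(xy, np.array([1.0, 2.0, 2.0, 3.0]))
```

The same loop works unchanged for any interpolator, which is why leave-one-out is a standard way to compare interpolation schemes on sparse well data.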
NASA Astrophysics Data System (ADS)
Oriani, F.; Stisen, S.; Demirel, C.
2017-12-01
The spatial representation of rainfall is of primary importance for correctly studying the uncertainty of basin recharge and its propagation to surface and underground circulation. We consider here the daily gridded rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network (from approximately 500 stations in the period 1996-2006 to 250 in the period 2007-2014), the gridded rainfall product, based on the interpolation of these data, is much less reliable. The research is focused on the Skjern catchment (1,050 km², western Jutland), where we have access to the complete rain-gauge database from the Danish Hydrological Observatory and can compute the distributed hydrological response at the 1-km scale. To obtain a better estimate of the gridded rainfall input, we start from ground measurements by simulating the missing data with a stochastic data-mining approach, then recompute the grid interpolation. To maximize the predictive power of the technique, combinations of station time series that are the most informative about each other are selected on the basis of their correlation and available historical data. The missing data inside these time series are then simulated together using the direct sampling technique (DS) [1, 2]. DS simulates a datum by sampling the historical record of the same stations where a similar data pattern occurs, preserving their complex statistical relation. The simulated data are reinjected into the whole dataset and used as conditioning data to progressively fill the gaps at other stations. The results show that the proposed methodology, tested on the period 1995-2012, can increase the realism of the gridded rainfall product by regenerating the missing ground measurements. The hydrological response is analyzed considering the observations at 5 hydrological stations.
The presented methodology can be used in many regions to regenerate the missing data using the information contained in the historical record and propagate the uncertainty of the prediction to the hydrological response. [1] G.Mariethoz et al. (2010), Water Resour. Res., 10.1029/2008WR007621.[2] F. Oriani et al. (2014), Hydrol. Earth Syst. Sc., 10.5194/hessd-11-3213-2014.
NASA Technical Reports Server (NTRS)
Chan, J. S.; Freeman, J. A.
1984-01-01
The viscous, axisymmetric flow in the thrust chamber of the space shuttle main engine (SSME) was computed on the CRAY 205 computer using the general interpolants method (GIM) code. Results show that the Navier-Stokes codes can be used for these flows to study trends and viscous effects as well as determine flow patterns; but further research and development is needed before they can be used as production tools for nozzle performance calculations. The GIM formulation, numerical scheme, and computer code are described. The actual SSME nozzle computation showing grid points, flow contours, and flow parameter plots is discussed. The computer system and run times/costs are detailed.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain with separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
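One common baseline for such partitioning is a greedy largest-first assignment of grids to the currently least-loaded processor. The sketch below is a generic illustration of that idea, not one of the three strategies evaluated in the paper, and the grid sizes are invented.

```python
import heapq

def balance(grid_sizes, n_procs):
    """Greedy largest-first load balancing: assign each overset grid
    (zone), largest first, to the currently least-loaded processor."""
    heap = [(0, p) for p in range(n_procs)]      # (load, processor id)
    assignment = {}
    for gid, size in sorted(enumerate(grid_sizes), key=lambda g: -g[1]):
        load, p = heapq.heappop(heap)
        assignment[gid] = p
        heapq.heappush(heap, (load + size, p))
    loads = [0] * n_procs
    for gid, p in assignment.items():
        loads[p] += grid_sizes[gid]
    return assignment, loads

# six grids (sizes in thousands of points) onto three processors
assignment, loads = balance([90, 60, 50, 40, 30, 30], 3)
```

Real overset partitioners must additionally weigh the interpolation (donor-receiver) communication between overlapping grids, which pure size-based balancing ignores.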
Regionalisation of statistical model outputs creating gridded data sets for Germany
NASA Astrophysics Data System (ADS)
Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas
2016-04-01
The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://.reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (a regionalisation method based on weather patterns). The WETTREG model output covers the period 1951 to 2100 with a daily temporal resolution. From this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km × 12.5 km, which is common for regional climate models. This regionalisation thus allows statistical climate model outputs to be compared with dynamical ones. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km × 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity, and covers the period 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models.
The presentation will consist of two parts to present the actual state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: First, an overview of the HYRAS data set and the regionalisation methods for precipitation (REGNIE method based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation) will be given. Finally, results of the regionalisation of WETTREG model output will be shown.
Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko
2014-07-01
Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, with spatial statistics based on centre-to-centre distances. However, if organisms expand without overlapping and show size variation, as is the case for encrusting corals, interobject spacing is crucial for the spatial associations in which interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregation. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances include both intercolony spacing and colony sizes (radii). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but that scale was strongly affected by colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus; the grid-based statistics (covariance function) also showed repulsion, although the spatial scale they indicated was not directly interpretable in ecological terms.
The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
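For disk-approximated colonies, the minimum (edge-to-edge) distance on which these pairwise statistics are built reduces to the centre-to-centre distance minus the two radii, floored at zero for touching or overlapping disks. A minimal sketch with invented colony positions:

```python
import math

def min_disk_distance(c1, r1, c2, r2):
    """Minimum edge-to-edge distance between two disk-approximated
    colonies; 0 if the disks touch or overlap."""
    centre_dist = math.dist(c1, c2)
    return max(0.0, centre_dist - r1 - r2)

# centre-to-centre distance is 10, but the gap between colony edges is 4
gap = min_disk_distance((0.0, 0.0), 2.0, (10.0, 0.0), 4.0)
```

This is exactly why centre-to-centre statistics conflate spacing with colony size: two large colonies can be "close" by centre distance while their edges, where interactions happen, are not.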
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline curve fitting is then applied to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient achieved by the presented algorithm was 0.972.
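The core step, fitting a cubic spline through the fiducial points and subtracting it as the baseline estimate, can be sketched with a plain natural cubic spline. The fiducial-point detection and the 1.5 Hz high-pass amplitude correction described above are omitted, and the synthetic drift signal is illustrative.

```python
import numpy as np

def natural_cubic_spline(xk, yk, x):
    """Evaluate a natural cubic spline through knots (xk, yk) at x,
    solving the standard tridiagonal system for second derivatives."""
    n = len(xk)
    h = np.diff(xk)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # natural boundary: M_0 = M_{n-1} = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((yk[i + 1] - yk[i]) / h[i]
                        - (yk[i] - yk[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)        # second derivatives at the knots
    i = np.clip(np.searchsorted(xk, x) - 1, 0, n - 2)
    t = x - xk[i]
    return (yk[i]
            + t * ((yk[i + 1] - yk[i]) / h[i]
                   - h[i] * (2 * M[i] + M[i + 1]) / 6.0)
            + t**2 * M[i] / 2.0
            + t**3 * (M[i + 1] - M[i]) / (6.0 * h[i]))

def remove_baseline(t, sig, fid):
    """Estimate baseline drift as the spline through the fiducial
    points and subtract it from the signal."""
    baseline = natural_cubic_spline(t[fid], sig[fid], t)
    return sig - baseline, baseline

t = np.linspace(0.0, 10.0, 1001)
drift = 0.5 * np.sin(0.3 * t)              # synthetic baseline wander
fid = np.arange(0, 1001, 100)              # one fiducial point per "beat"
clean, baseline = remove_baseline(t, drift, fid)
```

With the input consisting of drift only, the residual after subtraction is near zero, which is the ideal behavior the correlation coefficients above are measuring.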
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. the autoconversion rate, essentially interpreting the cloud microphysics quantities as random variables in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternative approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
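The contrast between random sampling and deterministic quadrature can be sketched for a Kessler-type autoconversion rate under an assumed Gaussian sub-grid distribution of cloud water. The threshold form of the rate, the parameter values, and the Gaussian assumption are all illustrative, not the paper's configuration.

```python
import numpy as np

def kessler(q, k=1e-3, q_crit=0.5e-3):
    """Kessler-type autoconversion: linear above a cloud-water threshold."""
    return k * np.maximum(q - q_crit, 0.0)

def box_mean_quadrature(mu, sigma, n=32):
    """Grid-box average of the rate via n-point Gauss-Hermite quadrature,
    assuming cloud water is Gaussian within the box."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * kessler(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

def box_mean_sampling(mu, sigma, n=200000, seed=1):
    """Same average by plain Monte Carlo sampling, for comparison."""
    rng = np.random.default_rng(seed)
    return float(np.mean(kessler(rng.normal(mu, sigma, n))))

mq = box_mean_quadrature(1.0e-3, 0.3e-3)
ms = box_mean_sampling(1.0e-3, 0.3e-3)
```

The quadrature version needs only a few dozen deterministic evaluations to match what the sampler achieves with hundreds of thousands of draws, which mirrors the sample-size reduction reported above.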
Interpolating of climate data using R
NASA Astrophysics Data System (ADS)
Reinhardt, Katja
2017-04-01
Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology and meteorology. In them, unknown values are estimated by applying statistical approaches to known values. So far, the majority of climatologists have been using computer languages such as FORTRAN or C++, but there is also an increasing number of climate scientists using R for data processing and visualization. Most of them, however, are still working with arrays and vector-based data, which is often associated with complex R code structures. For the presented study, I decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat and similar packages, providing a much more convenient way of handling the data. A central goal of my approach is to create an easy-to-use, powerful and fast R script, implementing the entire geodata processing and visualization in a single, fully automated R-based procedure, which avoids the need for other software packages such as ArcGIS or QGIS. Large amounts of data with recurrent processing sequences can thus be processed. The aim of the presented study, whose study area is located in western Central Asia, is to interpolate wind data based on the European reanalysis data Era-Interim, which are available as raster data with a resolution of 0.75° × 0.75°, to a finer grid. For this, various interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, generalized additive models, and the machine learning algorithms support vector machines and neural networks. Except for the first two methods, the methods are used with influencing factors, e.g. geopotential and topography.
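Among the listed methods, ordinary kriging is compact enough to sketch directly: solve the kriging system built from a variogram model, then apply the weights. The sketch is in Python rather than the R/gstat setup used in the study, and the exponential variogram parameters, coordinates, and values are made-up assumptions.

```python
import numpy as np

def variogram(h, sill=1.0, rang=2.0):
    """Exponential variogram model (assumed sill and range, no nugget)."""
    return sill * (1.0 - np.exp(-h / rang))

def ordinary_kriging(xy, z, xq):
    """Ordinary kriging estimate at one query point xq."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xq, axis=1))
    w = np.linalg.solve(A, b)          # n weights plus the multiplier
    return float(w[:n] @ z)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.5]])
z = np.array([1.0, 2.0, 2.0, 4.0])
est = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

Because the variogram has no nugget, the estimator is exact at the data points, one of the properties that distinguishes kriging from smoothing methods.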
Voxel inversion of airborne electromagnetic data for improved model integration
NASA Astrophysics Data System (ADS)
Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders
2014-05-01
Inversion of electromagnetic data has migrated from single-site interpretations to inversions comprising entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points: for airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. In contrast, geological and groundwater models most often refer to a regular voxel grid that is not correlated with the geophysical model space, so the geophysical information has to be relocated for integration into (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows (hydro)geological models to be informed directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centres of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km × 16 km. The voxel inversion was carried out on a structured grid of 260 × 325 × 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters.
A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models were gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
Conical-Domain Model for Estimating GPS Ionospheric Delays
NASA Technical Reports Server (NTRS)
Sparks, Lawrence; Komjathy, Attila; Mannucci, Anthony
2009-01-01
The conical-domain model is a computational model, now undergoing development, for estimating ionospheric delays of Global Positioning System (GPS) signals. Relative to the standard ionospheric delay model described below, the conical-domain model offers improved accuracy. In the absence of selective availability, the ionosphere is the largest source of error for single-frequency users of GPS. Because ionospheric signal delays contribute to errors in GPS position and time measurements, satellite-based augmentation systems (SBASs) have been designed to estimate these delays and broadcast corrections. Several national and international SBASs are currently in various stages of development to enhance the integrity and accuracy of GPS measurements for airline navigation. In the Wide Area Augmentation System (WAAS) of the United States, slant ionospheric delay errors and confidence bounds are derived from estimates of vertical ionospheric delay modeled on a grid at regularly spaced intervals of latitude and longitude. The estimate of vertical delay at each ionospheric grid point (IGP) is calculated from a planar fit of neighboring slant delay measurements, projected to vertical using a standard, thin-shell model of the ionosphere. Interpolation on the WAAS grid enables estimation of the vertical delay at the ionospheric pierce point (IPP) corresponding to any arbitrary measurement of a user. (The IPP of a given user's measurement is the point where the GPS signal ray path intersects a reference ionospheric height.) The product of the interpolated value and the user's thin-shell obliquity factor provides an estimate of the user's ionospheric slant delay. Two types of error that restrict the accuracy of the thin-shell model are absent in the conical-domain model: (1) error due to the implicit assumption that the electron density is independent of the azimuthal angle at the IPP and (2) error arising from the slant-to-vertical conversion.
At low latitudes, or at mid-latitudes under disturbed conditions, the accuracy of SBAS systems based upon the thin-shell model suffers due to the presence of complex ionospheric structure, high delay values, and large electron density gradients. Interpolation on the vertical delay grid serves as an additional source of delay error. The conical-domain model permits direct computation of the user's slant delay estimate without the intervening use of a vertical delay grid. The key is to restrict each fit of GPS measurements to a spatial domain encompassing signals from only one satellite. The conical-domain model is so named because each fit involves a group of GPS receivers that all receive signals from the same GPS satellite (see figure); the receiver and satellite positions define a cone, the satellite position being the vertex. A user within a given cone evaluates the delay to the satellite directly, using (1) the IPP coordinates of the line of sight to the satellite and (2) broadcast fit parameters associated with the cone. The conical-domain model partly resembles the thin-shell model in that both models reduce an inherently four-dimensional problem to two dimensions. However, unlike the thin-shell model, the conical-domain model does not involve any potentially erroneous simplifying assumptions about the structure of the ionosphere. In the conical-domain model, the initially four-dimensional problem becomes truly two-dimensional in the sense that once a satellite location has been specified, any signal path emanating from the satellite can be identified by only two coordinates, for example, the IPP coordinates. As a consequence, a user's slant-delay estimate converges to the correct value in the limit that the receivers converge to the user's location (or, equivalently, in the limit that the measurement IPPs converge to the user's IPP).
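The thin-shell slant-to-vertical conversion that both models reference uses a standard obliquity (mapping) factor derived from the geometry of the ray through the shell. A small sketch, with an assumed Earth radius and a commonly used but here assumed shell height of 350 km:

```python
import math

def obliquity(elev_deg, re_km=6371.0, shell_km=350.0):
    """Thin-shell obliquity factor mapping vertical ionospheric delay
    to slant delay for a given elevation angle (degrees)."""
    s = re_km * math.cos(math.radians(elev_deg)) / (re_km + shell_km)
    return 1.0 / math.sqrt(1.0 - s * s)

# 2.5 m of vertical delay seen at 30 degrees elevation becomes ~4.4 m slant
slant = obliquity(30.0) * 2.5
```

The factor is 1 at zenith and grows toward the horizon, which is exactly where the thin-shell simplification, and hence the projected vertical delay, becomes least trustworthy.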
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
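The divided-difference table and the Horner-style evaluation of the Newton form can be sketched as follows. This is the generic textbook version; the reordering of the interpolation points and the interval scaling discussed above, which are the paper's contribution to numerical stability, are not shown.

```python
import numpy as np

def divided_differences(x, y):
    """Divided-difference coefficients of the Newton form, computed
    in place, one table column per pass."""
    c = np.array(y, dtype=float)
    for j in range(1, len(x)):
        c[j:] = (c[j:] - c[j-1:-1]) / (x[j:] - x[:-j])
    return c

def newton_eval(x, c, t):
    """Horner-style (nested) evaluation of the Newton polynomial at t."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x[k]) + c[k]
    return p

x = np.array([0.0, 1.0, 2.0, 3.0])
c = divided_differences(x, x**2 - 1.0)   # interpolate f(t) = t^2 - 1
val = newton_eval(x, c, 1.5)             # exact for a quadratic: 1.25
```

For low degrees and well-ordered points this is perfectly stable; the instability the abstract addresses appears at high degree, where the ordering of the points and the size of the interval dominate the conditioning of the divided differences.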
NASA Astrophysics Data System (ADS)
Kaleita, A. L.
2013-12-01
Identifying field-scale soil moisture patterns, and quantifying their impact on hydrology and nutrient flux, is currently limited by the time and resources required for sufficient monitoring. A small number of monitoring locations or occasions may not be sufficient to capture the true spatial and temporal dynamics of these patterns. While process models can help to fill in data gaps, it is often difficult if not impossible to parameterize them effectively at the field and sub-field scale. Thus, empirical methods that can optimize the sampling and mapping of soil moisture using a minimal amount of readily available data may be of significant value. LiDAR is one source of such readily available data. Various topographic indices, including relative elevation, land slope, curvature, and slope aspect, are known to influence soil moisture patterns, though the exact nature of that relationship appears to vary from study to study. The objective of this study was to use these data to identify critical sampling locations for mapping soil moisture, and to upscale point measurements at those locations both to a single field-average value and to a high-resolution pattern map for the field. This study analyzed in-situ soil moisture measurements from a working agricultural field in Story County, Iowa. Theta probe soil moisture measurements were taken every 50 meters on a 300 × 250 m grid (~18 acres) during the summer growing seasons of 2004, 2005, 2007, and 2008. The elevation in the field varies by approximately 5 meters, and the grid covers six different soil types and a variety of landscape positions throughout the field. We used self-organizing maps (SOMs) and K-means clustering algorithms to partition the field study area into distinct categories of similarly characterized locations. We then used the SOM and clustering metrics to identify locations within each group that were representative of the behavior of that group.
We developed a weighted upscaling process to estimate a whole-field average soil moisture content from these few critical samples, and we compared the results to those obtained through the more traditional 'temporal stability' approach. The cluster-based approach was as good as and often better than the temporal stability approach, with the significant advantage that the former does not require any initial period of exhaustive soil moisture monitoring, whereas the latter does. A second objective was to use the classification results of the landscape data to interpolate these sparse critical sampling point data over the whole field. Using what we term 'feature-space interpolation' we were able to re-create a high-resolution soil moisture map for the field using only three measurements, by giving locations with similar landscape characteristics similar soil moisture values. The results showed a small but significant statistical improvement over traditional distance-based interpolation methods, and the resulting patterns also had stronger correlation with end-of-season yield, suggesting this approach may have valuable applications in production agriculture decision-making and assessment.
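The cluster-then-pick-representatives step can be sketched in a few lines. This is a generic k-means stand-in (the study also used SOMs, omitted here); the feature vectors, initialization, and all numbers are illustrative, not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means with deterministic farthest-point initialization;
    returns (centroids, labels)."""
    C = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in C], axis=0)
        C.append(X[d.argmax()])          # seed next centroid far from the rest
    C = np.array(C)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - C[None, :], axis=2).argmin(axis=1)
        newC = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                         else C[j] for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels

def representative_points(X, C, labels):
    """For each cluster, the sample nearest its centroid: a candidate
    'critical sampling location' for that terrain class."""
    reps = []
    for j in range(len(C)):
        idx = np.where(labels == j)[0]
        reps.append(idx[np.linalg.norm(X[idx] - C[j], axis=1).argmin()])
    return reps
```

A weighted field average then follows by weighting each representative's moisture reading by its cluster's share of grid cells.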
NASA Astrophysics Data System (ADS)
Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.
2014-05-01
The purpose of this study is to examine the use of Artificial Neural Networks (ANN) combined with the kriging interpolation method, in order to simulate the hydraulic head both spatially and temporally. Initially, ANNs are used for the temporal simulation of the hydraulic head change. The results of the most appropriate ANNs, determined through a fuzzy logic system, are used as an input for the kriging algorithm, where the spatial simulation is conducted. The proposed algorithm is tested in an area along the Isar River in Bavaria, Germany, covering approximately 7800 km2. The available data extend over a time period from 1/11/2008 to 31/10/2012 (1460 days) and include the hydraulic head at 64 wells, temperature and rainfall at 7 weather stations, and surface water elevation at 5 monitoring stations. One feedforward ANN was trained for each of the 64 wells, where hydraulic head data are available, using a backpropagation algorithm. The most appropriate input parameters for each well's ANN are determined considering their proximity to the measuring station, as well as their statistical characteristics. For rainfall, the data at two consecutive time lags from the best-correlated weather station, as well as a third and fourth input from the second-best-correlated weather station, are used as inputs. The surface water monitoring stations with the three best correlations with each well are also used in every case. Finally, the temperature from the best-correlated weather station is used. Two different architectures are considered, and the one with the better results is used henceforth. The output of the ANNs corresponds to the hydraulic head change per time step. These predictions are used in the kriging interpolation algorithm. However, not all 64 simulated values should be used. 
The appropriate neighborhood for each prediction point is constructed based not only on the distance between known and prediction points, but also on the training and testing error of the ANN. Therefore, the neighborhood of each prediction point is the best available. Then, the appropriate variogram is determined by fitting theoretical variogram models to the experimental variogram. Three models are examined: the linear, the exponential, and the power-law. Finally, the hydraulic head change is predicted for every grid cell and for every time step used. All the algorithms used were developed in Visual Basic .NET, while the visualization of the results was performed in MATLAB using .NET COM Interoperability. The results are evaluated using leave-one-out cross-validation and various performance indicators. The best results were achieved by using ANNs with two hidden layers, consisting of 20 and 15 nodes respectively, and by using the power-law variogram with the fuzzy logic system.
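The variogram-selection step can be sketched as follows. The binning and the three candidate models (linear, exponential, power-law) follow the abstract, while the fitting details, parameterizations, and initial guesses are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# candidate theoretical variogram models, as in the study
models = {
    "linear":      lambda h, a, b: a + b * h,
    "exponential": lambda h, a, b: a * (1.0 - np.exp(-h / b)),
    "power":       lambda h, a, b: a * h ** b,
}

def experimental_variogram(xy, z, nbins=10):
    """Isotropic experimental semivariogram from scattered samples."""
    i, j = np.triu_indices(len(z), k=1)
    h = np.linalg.norm(xy[i] - xy[j], axis=1)     # pairwise distances
    g = 0.5 * (z[i] - z[j]) ** 2                  # semivariances
    edges = np.linspace(0.0, h.max(), nbins + 1)
    which = np.digitize(h, edges) - 1
    hs, gs = [], []
    for b in range(nbins):
        m = which == b
        if m.any():
            hs.append(h[m].mean()); gs.append(g[m].mean())
    return np.array(hs), np.array(gs)

def best_model(hs, gs):
    """Fit each candidate model and keep the one with smallest residual."""
    best = None
    for name, f in models.items():
        try:
            p, _ = curve_fit(f, hs, gs, p0=[1.0, 1.0], maxfev=5000)
        except RuntimeError:
            continue
        r = float(np.sum((f(hs, *p) - gs) ** 2))
        if best is None or r < best[2]:
            best = (name, p, r)
    return best
```

The fitted model then supplies the covariances entering the kriging system at each prediction point.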
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Jensen; Toto, T.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
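Between-sounding interpolation onto a fixed time-height grid reduces to a per-level linear blend in time. A toy version with synthetic profiles follows; the launch times, lapse rate, and grid are illustrative, not the VAP code:

```python
import numpy as np

# Two soundings (6 h apart) of temperature on a common height grid;
# the VAP-style step linearly interpolates each level in time.
heights = np.linspace(0.0, 30000.0, 332)       # m, fixed 332-level grid
t0, t1 = 0.0, 360.0                            # launch times in minutes
temp0 = 288.0 - 0.0065 * heights               # synthetic profiles (K)
temp1 = 290.0 - 0.0065 * heights

minutes = np.arange(t0, t1 + 1.0)              # 1-min output grid
w = (minutes - t0) / (t1 - t0)                 # interpolation weight in time
# result: (time, height) array, linear in time at every height level
temp = (1.0 - w)[:, None] * temp0[None, :] + w[:, None] * temp1[None, :]
```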
Latysh, Natalie E.; Wetherbee, Gregory Alan
2012-01-01
High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition in complex terrain, yielding more accurate and detailed isopleth deposition maps. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.
A Data Assimilation System For Operational Weather Forecast In Galicia Region (nw Spain)
NASA Astrophysics Data System (ADS)
Balseiro, C. F.; Souto, M. J.; Pérez-Muñuzuri, V.; Brewster, K.; Xue, M.
Regional weather forecast models, such as the Advanced Regional Prediction System (ARPS), over complex environments with varying local influences require an accurate meteorological analysis that should include all local meteorological measurements available. In this work, the ARPS Data Analysis System (ADAS) (Xue et al. 2001) is applied as a three-dimensional weather analysis tool to include surface station and rawinsonde data with the NCEP AVN forecasts as the analysis background. Currently in ADAS, a set of five meteorological variables is considered during the analysis: horizontal grid-relative wind components, pressure, potential temperature, and specific humidity. The analysis is used for high-resolution numerical weather prediction for the Galicia region. The analysis method used in ADAS is based on the successive correction scheme of Bratseth (1986), which asymptotically approaches the result of a statistical (optimal) interpolation, but at lower computational cost. As in the optimal interpolation scheme, the Bratseth interpolation method can take into account the relative error between background and observational data; it is therefore relatively insensitive to large variations in data density and can integrate data of mixed accuracy. This method can be applied economically in an operational setting, providing significant improvement over the background model forecast as well as over any analysis without high-resolution local observations. One-way nesting is applied for the weather forecast in the Galicia region, and the use of this assimilation system in both domains shows better results not only in initial conditions but also in all forecast periods. Bratseth, A. M. (1986): "Statistical interpolation by means of successive corrections." Tellus, 38A, 439-447. Souto, M. J., Balseiro, C. F., Pérez-Muñuzuri, V., Xue, M., Brewster, K. (2001): "Impact of cloud analysis on numerical weather prediction in the Galician region of Spain." 
Submitted to Journal of Applied Meteorology. Xue, M., Wang, D., Gao, J., Brewster, K., Droegemeier, K. K. (2001): "The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation." Meteor. Atmos. Physics. Accepted.
Introducing Object-Oriented Concepts into GSI
NASA Technical Reports Server (NTRS)
Guo, Jing; Todling, Ricardo
2017-01-01
Enhancements are now being made to the Gridpoint Statistical Interpolation (GSI) data assimilation system to expand its capabilities. This effort opens the way for broadening the scope of GSI's applications by using some standard object-oriented features in Fortran, and represents a starting point for the so-called GSI refactoring, as a part of the Joint Effort for Data assimilation Integration (JEDI) project of the JCSDA.
High-Reynolds Number Viscous Flow Simulations on Embedded-Boundary Cartesian Grids
2016-05-05
[Equation (6), a piecewise definition of the Spalart-Allmaras destruction term, was garbled in extraction; only the surrounding prose is recoverable.] With νt = ν̃fv1 and the usual definitions of fw... The wall function is coupled to the underlying Cartesian grid through its endpoints. This is illustrated schematically in Fig. 2. At the wall it is...by interpolation from the Cartesian grid. This eliminates the problem of uτ → 0, since this works in physical coordinates and not plus coordinates. We
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koner, Debasish; Panda, Aditya N., E-mail: adi07@iitg.ernet.in; Barrios, Lizandra
2016-01-21
Initial state selected dynamics of the Ne + NeH+(v0 = 0, j0 = 0) → NeH+ + Ne reaction is investigated by quantum and statistical quantum mechanical (SQM) methods on the ground electronic state. The three-body ab initio energies on a set of suitably chosen grid points have been computed at the CCSD(T)/aug-cc-pVQZ level and analytically fitted. The fitting of the diatomic potentials, computed at the same level of theory, is performed by spline interpolation. A collinear [NeHNe]+ structure lying 0.72 eV below the Ne + NeH+ asymptote is found to be the most stable geometry for this system. Energies of low-lying vibrational states have been computed for this stable complex. Reaction probabilities obtained from quantum calculations exhibit dense oscillatory structures, particularly in the low energy region, and these get partially washed out in the integral cross section results. SQM predictions are devoid of oscillatory structures and remain close to 0.5 after the rise at the threshold, thus giving a crude average description of the quantum probabilities. Statistical cross sections and rate constants are nevertheless in sufficiently good agreement with the quantum results to suggest an important role of complex-forming dynamics for the title reaction.
Implementation and testing of the gridded Vienna Mapping Function 1 (VMF1)
NASA Astrophysics Data System (ADS)
Kouba, J.
2008-04-01
The new gridded Vienna Mapping Function (VMF1) was implemented and compared to the well-established site-dependent VMF1, directly and by using precise point positioning (PPP) with International GNSS Service (IGS) Final orbits/clocks for a 1.5-year GPS data set of 11 globally distributed IGS stations. The gridded VMF1 data can be interpolated for any location and for any time after 1994, whereas the site-dependent VMF1 data are only available at selected IGS stations and only after 2004. Both gridded and site-dependent VMF1 PPP solutions agree within 1 and 2 mm for the horizontal and vertical position components, respectively, provided that respective VMF1 hydrostatic zenith path delays (ZPD) are used for hydrostatic ZPD mapping to slant delays. The total ZPD of the gridded and site-dependent VMF1 data agree with PPP ZPD solutions with RMS of 1.5 and 1.8 cm, respectively. Such precise total ZPDs could provide useful initial a priori ZPD estimates for kinematic PPP and regional static GPS solutions. The hydrostatic ZPDs of the gridded VMF1 compare with the site-dependent VMF1 ZPDs with RMS of 0.3 cm, subject to some biases and discontinuities of up to 4 cm, which are likely due to different strategies used in the generation of the site-dependent VMF1 data. The precision of gridded hydrostatic ZPD should be sufficient for accurate a priori hydrostatic ZPD mapping in all precise GPS and very long baseline interferometry (VLBI) solutions. Conversely, precise and globally distributed geodetic solutions of total ZPDs, which need to be linked to VLBI to control biases and stability, should also provide a consistent and stable reference frame for long-term and state-of-the-art numerical weather modeling.
NASA Astrophysics Data System (ADS)
Asal, F. F.
2012-07-01
Digital elevation data obtained from different Engineering Surveying techniques are utilized in generating Digital Elevation Models (DEMs), which are employed in many engineering and environmental applications. These data are usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of a DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers, in order to examine its potential for assessing the effects of the variation of the IDW power on DEM quality. Real elevation data were collected in the field using a total station instrument over corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques for such analysis. Visual analysis showed that smoothing of the DEM decreases as the power value increases up to a power of four; however, increasing the power beyond four does not leave noticeable changes on 2D and 3D views of the DEM. The statistical analysis supported these results, where the Standard Deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This indicates a decrease in DEM smoothing as the IDW power increases. 
The study also showed that visual methods supported by statistical analysis have good potential for DEM quality assessment.
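The power's effect on smoothing is easy to reproduce with a generic IDW interpolator. This is a sketch, not the authors' implementation; the epsilon guard against division by zero is an added assumption:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation. Higher `power`
    concentrates weight on the nearest samples, i.e. less smoothing."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)
```

Evaluating the same scattered data on a grid with power 1 and power 4 and comparing the standard deviations of the two surfaces reproduces the trend reported above: the higher power yields a larger SD, i.e. less smoothing.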
A high-order spatial filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-04-01
A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid, which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is a high-order Helmholtz equation, corresponding to the implicit time-differencing of a diffusion equation that employs a high-order Laplacian. The Laplacian operator is discretized within a cell, the building block of the cubed-sphere grid, which consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells involved is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge, highly sparse matrix equation of size N*N, with N the total number of grid points on the globe. The number of nonzero entries is also in almost quadratic proportion to the filter order. Filtering is accomplished by solving this matrix equation. While requiring significant computing time, the solution of the global matrix provides a filtered field free of discontinuity along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation was obtained by accounting for only a finite number of adjacent cells. This is called a local-domain filter. It was shown that including 5*5 cells in the local-domain filter was sufficient to remove numerical noise near the grid scale, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using standard test cases, including the baroclinic instability of the zonal flow. 
Results indicated that the filter performs better at removing grid-scale numerical noise than explicit high-order viscosity. It was also shown that the filter can be easily implemented on distributed-memory parallel computers with desirable scalability.
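A one-dimensional periodic analogue makes the implicit high-order filter concrete: solve (I + ν(−∇²)^p) u_f = u, the Helmholtz-type equation above, with a dense matrix for clarity. Grid size, filter order, and ν are illustrative choices, not the paper's values:

```python
import numpy as np

# 1-D periodic sketch of the implicit high-order (Helmholtz-type) filter.
n, p, nu = 128, 2, 1.0e-4
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(2 * x) + 0.3 * np.sin(30 * x)   # smooth wave + grid-scale noise

# periodic second-difference Laplacian, built densely for clarity
dx = x[1] - x[0]
L = (np.roll(np.eye(n), 1, axis=1) - 2.0 * np.eye(n)
     + np.roll(np.eye(n), -1, axis=1)) / dx**2

# filter matrix (I + nu * (-L)^p) and the filtered field
A = np.eye(n) + nu * np.linalg.matrix_power(-L, p)
uf = np.linalg.solve(A, u)
```

The spectral response is 1/(1 + ν λ_k^p), so the k = 30 noise is damped by roughly two orders of magnitude while the k = 2 wave is left essentially untouched.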
2013-09-01
including the interaction effects between the fins and canards. 2. Solution Technique 2.1 Computational Aerodynamics The double-precision solver of a... and overset grids (unified-grid). • Total variation diminishing discretization based on a new multidimensional interpolation framework. • Riemann ... solvers to provide proper signal propagation physics, including versions for preconditioned forms of the governing equations. • Consistent and
A Unified Air-Sea Visualization System: Survey on Gridding Structures
NASA Technical Reports Server (NTRS)
Anand, Harsh; Moorhead, Robert
1995-01-01
The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.
Point-by-point compositional analysis for atom probe tomography.
Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P
2014-01-01
This new alternate approach to data processing for analyses that traditionally employed grid-based counting methods is necessary because it removes a user-imposed coordinate system that not only limits an analysis but also may introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest neighbour identification, improving the measurements and the statistics obtained, allowing quantitative analysis of smaller datasets, and datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: • using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; • 3D data visualisation of block composition and nearest-neighbour anisotropy; • using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
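A minimal sketch of the coordinate-independent block analysis, run on synthetic, randomly mixed atom positions rather than reconstructed APT data; the block size k and solute fraction are arbitrary choices for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 20.0, size=(4000, 3))   # synthetic atom positions (nm)
solute = rng.random(4000) < 0.25               # ~25% solute, random solution

k = 50                                         # k-atom block size
tree = cKDTree(pos)
_, idx = tree.query(pos, k=k + 1)              # each atom plus its k nearest
conc = solute[idx[:, 1:]].mean(axis=1)         # block composition per atom

# z-statistic of each block against the binomial expectation
p0 = solute.mean()
z = (conc - p0) / np.sqrt(p0 * (1.0 - p0) / k)
```

For a truly random solid solution the z values should be roughly standard normal; clustering or ordering shows up as heavy tails, which is what the composition-curve comparison exploits.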
A bivariate rational interpolation with a bi-quadratic denominator
NASA Astrophysics Data System (ADS)
Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu
2006-10-01
In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.
Long range lidar data processing for validating LES of wind turbine wakes
NASA Astrophysics Data System (ADS)
Trabucchi, D.; van Dooren, M.; Vollmer, L.; Schneemann, J.; Trujillo, J. J.; Witha, B.; Kühn, M.
2014-12-01
Scanning wind lidars offer the possibility to compare full-scale measurements in the wake of a wind turbine with LES wind fields calculated for the same test case. Due to the novelty and the peculiarity of lidar measurements, a comparison between experimental data and simulation results is non-trivial and several methods can be applied. This study presents validation methods for single- and dual-Doppler lidar measurements, respectively. Consecutive azimuthal scans, commonly referred to as Plan Position Indicator (PPI) scans, at a low fixed elevation and centered on the wind turbine wake provide the radial wind speed, i.e., the wind component along the laser beam, on an almost flat polar grid. These data can be directly compared with the radial wind speed evaluated at the measurement point from the simulated wind field. This approach provides a detailed spatial description of the wind field and can be applied to averaged data for steady analysis. For the comparison with LES results, time averaging and spatial interpolation of the computed wind field are needed. Moreover, a proper wind direction must be chosen to evaluate the radial wind speed. With two lidars performing consecutive PPI scans over the same region from different locations, it is possible to estimate the horizontal wind field where the scanned regions overlap. Due to the limits in the synchronization of the PPI scans by the lidars, only steady analysis based on time-averaged data can be done. A horizontal grid based on the one used for the LES is overlaid on the region covered by the two non-coplanar scans. The horizontal wind field at a given point can be evaluated by solving the system given by at least two non-aligned radial directions about this point. For each node, the data sampled by the lidars in a well-defined volume during the considered time interval are used to build this system. Moreover, a discrete approximation of the continuity equation is applied to link the solutions for all the grid nodes. 
Instead of an interpolation on the LES wind field, this approach requires a temporal and vertical average over the considered time and height intervals. The application of these two approaches to lidar measurements performed in the offshore wind farm »alpha ventus« is presented in this work. The results will be used to evaluate different wind turbine wake models applied to LES.
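At a single grid node, the dual-Doppler retrieval reduces to a small least-squares system over the radial speeds. The geometry convention below (azimuth measured clockwise from north, low-elevation beams so the vertical component is neglected) is an assumption for illustration:

```python
import numpy as np

def dual_doppler(azimuths_deg, v_radial):
    """Least-squares horizontal wind (u, v) at one grid node from two or
    more radial speeds measured along non-aligned beam directions."""
    az = np.radians(azimuths_deg)
    # radial speed = u*sin(az) + v*cos(az) for a low-elevation beam
    A = np.column_stack([np.sin(az), np.cos(az)])
    uv, *_ = np.linalg.lstsq(A, np.asarray(v_radial, dtype=float), rcond=None)
    return uv
```

With more than two beams (or time-averaged samples in the node's volume) the same call returns the least-squares wind; the continuity constraint linking neighboring nodes is not reproduced here.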
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1965-01-01
Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.
Numerical solution of the full potential equation using a chimera grid approach
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1995-01-01
A numerical scheme utilizing a chimera zonal grid approach for solving the full potential equation in two spatial dimensions is described. Within each grid zone a fully-implicit approximate factorization scheme is used to advance the solution one iteration. This is followed by the explicit advance of all common zonal grid boundaries using a bilinear interpolation of the velocity potential. The presentation is highlighted with numerical results simulating the flow about a two-dimensional, nonlifting, circular cylinder. For this problem, the flow domain is divided into two parts: an inner portion covered by a polar grid and an outer portion covered by a Cartesian grid. Both incompressible and compressible (transonic) flow solutions are included. Comparisons made with an analytic solution as well as single grid results indicate that the chimera zonal grid approach is a viable technique for solving the full potential equation.
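The zonal-boundary update amounts to standard bilinear interpolation of the potential from the donor grid. A generic sketch (not the paper's code), assuming the query point lies strictly inside the donor grid:

```python
import numpy as np

def bilinear(xg, yg, phi, x, y):
    """Bilinear interpolation of a field phi defined on the rectilinear
    grid (xg, yg) at an arbitrary point (x, y): the donor-grid update
    used at chimera zonal boundaries."""
    i = np.searchsorted(xg, x) - 1          # cell containing x
    j = np.searchsorted(yg, y) - 1          # cell containing y
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])  # local coordinates in [0, 1]
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * phi[i, j] + tx * (1 - ty) * phi[i + 1, j]
            + (1 - tx) * ty * phi[i, j + 1] + tx * ty * phi[i + 1, j + 1])
```

Bilinear interpolation reproduces any field that is linear in x and y exactly, which is why it is adequate for passing the velocity potential across smooth zonal overlaps.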
a Climatology of Global Precipitation.
NASA Astrophysics Data System (ADS)
Legates, David Russell
A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage, and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. 
Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.
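The abstract does not detail the spherically-based interpolation algorithm; a minimal stand-in simply replaces planar distance with great-circle (haversine) distance inside an inverse-distance scheme, which captures why planar metrics fail at global scale. The weighting and epsilon guard are illustrative assumptions:

```python
import numpy as np

def gc_dist(lat1, lon1, lat2, lon2):
    """Great-circle distance in radians on the unit sphere (haversine)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = (np.sin((p2 - p1) / 2.0) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(dl / 2.0) ** 2)
    return 2.0 * np.arcsin(np.sqrt(a))

def spherical_idw(lat_s, lon_s, z_s, lat_q, lon_q, power=2.0, eps=1e-9):
    """Inverse-distance weighting with the great-circle metric."""
    d = gc_dist(lat_s, lon_s, lat_q, lon_q)
    w = 1.0 / (d + eps) ** power
    return (w * z_s).sum() / w.sum()
```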
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM(Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, particularly, sea floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly to be due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electrical field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. 
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which are used to generate a new forward model during each iteration of the inversion.
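The weight modification described above can be sketched as follows. The trilinear corner weights are standard; the conductivity scaling is a hypothetical illustration of the idea (scale each corner's weight by its conductivity ratio and renormalize), not MOD3DEM's actual implementation:

```python
import numpy as np

def trilinear_weights(fx, fy, fz):
    """Standard trilinear weights for a point at fractional offsets
    (fx, fy, fz) inside a unit cell; returns the 8 corner weights
    ordered by the (i, j, k) corner bits."""
    w = []
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w.append((1 - fx, fx)[i] * (1 - fy, fy)[j] * (1 - fz, fz)[k])
    return np.array(w)

def conductivity_scaled_weights(fx, fy, fz, sigma_corners, sigma_cell):
    """Hypothetical modification: scale each corner weight by the ratio of
    that corner cell's conductivity to the target cell's conductivity,
    then renormalize, so interpolated E fields better respect current
    continuity (J = sigma * E) across conductivity jumps."""
    w = trilinear_weights(fx, fy, fz) * (np.asarray(sigma_corners) / sigma_cell)
    return w / w.sum()
```

With uniform conductivity the scaled weights reduce exactly to the standard trilinear ones, so the modification only acts where conductivity contrasts exist.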
NASA Astrophysics Data System (ADS)
Ojo, J. S.; Owolawi, P. A.
2014-10-01
Millimeter and microwave system design at higher frequencies requires as input a 1-min rain-rate cumulative distribution function for estimating the level of degradation that can be encountered at such frequency bands. Owing to the lack of 1-min rain-rate data in South Africa and the availability of 5-min and hourly rainfall data, we have used rain-rate conversion models and the refined Moupfouma model to convert the available data into 1-min rain-rate statistics. The attenuation caused by these rain rates was predicted using the International Telecommunication Union (ITU) recommendations model. The Kriging interpolation method was used to draw contour maps over different percentages of time for spatial interpolation of rain-rate values onto a regular grid, in order to obtain a highly consistent and predictable inter-gauge rain-rate variation over South Africa. The present results will be useful for system designers of modern broadband wireless access (BWA) and high-density cell-based Ku/Ka, Q/V band satellite systems over the desired area of coverage, in order to determine the appropriate effective isotropically radiated power (EIRP) and receiver characteristics for this region.
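The kriging step that maps gauge rain rates onto a regular grid can be illustrated with a minimal ordinary-kriging sketch. The exponential semivariogram, its range, and all names are assumptions for illustration; the variogram actually fitted in the study is not given in the abstract:

```python
import numpy as np

def ordinary_kriging(xy, z, xi, variogram=lambda h: 1.0 - np.exp(-h / 50.0)):
    """Minimal ordinary-kriging sketch (illustrative, not the paper's code).

    xy : (n, 2) gauge coordinates;  z : (n,) rain-rate values
    xi : (m, 2) grid points;  variogram : isotropic semivariogram gamma(h)
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary-kriging system: variogram matrix bordered by the
    # unbiasedness constraint (weights sum to 1 via a Lagrange multiplier).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    out = np.empty(len(xi))
    for k, p in enumerate(xi):
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - p, axis=1))
        lam = np.linalg.solve(A, b)[:n]  # kriging weights
        out[k] = lam @ z
    return out
```

Ordinary kriging is an exact interpolator: predicting at a gauge location returns the gauge value itself.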
netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data
NASA Astrophysics Data System (ADS)
Zender, C. S.
2015-12-01
Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. The remote sensing, weather, and climate modeling and analysis communities face similar problems in handling SLD, including how to easily: 1. Specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids. 2. Bin, interpolate, average, or re-map SLD to regular grids. 3. Derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO, users can 1. Quickly project SLD onto the most useful regular grids for intercomparison. 2. Access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics. These capabilities improve interoperability and software reuse and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
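Task 2 above (mapping SLD to a regular grid) can be sketched in its simplest bin-average form. This is an illustrative stand-in written in Python, not NCO's regridding code:

```python
import numpy as np

def bin_average(lon, lat, val, lon_edges, lat_edges):
    """Average scattered swath observations onto a regular lon-lat grid;
    cells with no samples become NaN."""
    sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges],
                                weights=val)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    # Guard the division so empty cells yield NaN instead of a warning.
    return np.where(counts > 0, sums / np.where(counts > 0, counts, 1), np.nan)
```

Binning is robust to the irregular sampling of swaths because it never assumes the input grid is rectangular; it only needs per-observation coordinates.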
Dastane, A; Vaidyanathan, T K; Vaidyanathan, J; Mehra, R; Hesby, R
1996-01-01
It is necessary to visualize and reconstruct tissue anatomic surfaces accurately for a variety of oral rehabilitation applications, such as surface wear characterization, automated fabrication of dental restorations, and assessing the accuracy of reproduction of impression and die materials. In this investigation, a 3-D digitization and computer-graphics system was developed for surface characterization. The hardware consists of a profiler assembly for digitization in an MTS biomechanical test system with an artificial mouth, an IBM PS/2 model 70 computer for data processing, and a Hewlett-Packard laser printer for hardcopy output. The software includes the commercially available Surfer 3-D graphics package, a public-domain data-fitting alignment program, and an in-house Pascal program for intercommunication plus some other limited tasks. Surfaces were digitized before and after rotation by angular displacement, the digital data were interpolated by Surfer to provide a data grid, and the surfaces were reconstructed by computer graphics. Misaligned surfaces were aligned by the data-fitting alignment software under different choices of parameters. The effect of different interpolation parameters (e.g. grid size, method of interpolation) and extent of rotation on the alignment accuracy was determined. The results indicate that improved alignment accuracy results from optimization of interpolation parameters and minimization of the initial misorientation between the digitized surfaces. The method provides important advantages for surface reconstruction and visualization, such as overlay of sequentially generated surfaces and accurate alignment of pairs of surfaces with small misalignment.
Dense image registration through MRFs and efficient linear programming.
Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos
2008-12-01
In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, to reduce the dimensionality of the problem, we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations of the deformation field according to a neighborhood system on the grid. Toward a discrete approach, the search space is quantized, resulting in a fully discrete model. In order to account for large deformations and produce results at a high resolution level, a multi-scale incremental approach is considered in which the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using primal-dual principles is used to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potential of our approach.
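The control-point parameterization of the dense field can be sketched as follows: a coarse grid of control-point displacements is expanded to a dense deformation field by bilinear interpolation. Bilinear weights are one common choice of interpolation strategy and are an assumption here; the paper's actual basis functions may differ:

```python
import numpy as np

def dense_field_from_control_points(disp, shape):
    """Expand a coarse (gy, gx, 2) grid of control-point displacements into
    a dense (H, W, 2) deformation field by bilinear interpolation."""
    gy, gx, _ = disp.shape
    H, W = shape
    ys = np.linspace(0, gy - 1, H)          # dense rows in control-grid units
    xs = np.linspace(0, gx - 1, W)          # dense cols in control-grid units
    y0 = np.minimum(ys.astype(int), gy - 2)
    x0 = np.minimum(xs.astype(int), gx - 2)
    fy = (ys - y0)[:, None, None]           # fractional offsets, broadcastable
    fx = (xs - x0)[None, :, None]
    d00 = disp[y0][:, x0]                   # four corner samples, (H, W, 2)
    d01 = disp[y0][:, x0 + 1]
    d10 = disp[y0 + 1][:, x0]
    d11 = disp[y0 + 1][:, x0 + 1]
    return ((1 - fy) * (1 - fx) * d00 + (1 - fy) * fx * d01
            + fy * (1 - fx) * d10 + fy * fx * d11)
```

Because only the control points are optimized, the MRF label space stays small while the recovered deformation remains dense and smooth.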
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize execution speed and memory usage. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
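The spatial/temporal blending step can be illustrated with a toy deinterlacer: the missing lines of a field are filled by a weighted combination of a spatial estimate (average of the lines above and below) and a temporal estimate (same line of the previous frame). A scalar `alpha` stands in for the saliency-derived soft weight; the actual method computes spectral residue and weights per region, so this sketch only illustrates the blend:

```python
import numpy as np

def deinterlace_blend(field, prev_frame, parity, alpha):
    """Fill missing lines of an interlaced field.

    field      : (H, W) frame whose lines of the given parity are valid
    prev_frame : (H, W) previous full frame (temporal reference)
    parity     : 0 if even lines are present, 1 if odd lines are present
    alpha      : blend weight in [0, 1]; 1 = pure spatial estimate
    """
    H, W = prev_frame.shape
    out = prev_frame.astype(float).copy()
    out[parity::2] = field[parity::2]          # keep the lines we have
    for y in range(1 - parity, H, 2):          # missing lines
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y < H - 1 else out[y - 1]
        spatial = 0.5 * (above + below)
        out[y] = alpha * spatial + (1 - alpha) * prev_frame[y]
    return out
```

With `alpha` near 1 the output favors the spatial (intra-field) estimate, appropriate in moving regions; with `alpha` near 0 it favors the temporal estimate, appropriate in static regions.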
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a wide range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis
2013-01-01
We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331
NASA Technical Reports Server (NTRS)
Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodrigues-Iturbe, I.
1986-01-01
Eight years of summer raingage observations are analyzed for a dense, 93 gage, network operated by the U. S. Department of Agriculture, Agricultural Research Service, in their 150 sq km Walnut Gulch catchment near Tucson, Arizona. Storms are defined by the total depths collected at each raingage during the noon to noon period for which there was depth recorded at any of the gages. For each of the resulting 428 storms, the 93 gage depths are interpolated onto a dense grid and the resulting random field is analyzed. Presented are: storm depth isohyets at 2 mm contour intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of total rainstorm depth.
NASA Astrophysics Data System (ADS)
Claessens, M.; Möller, K.; Thiel, H. G.
1997-07-01
Computational fluid dynamics calculations for high- and low-current arcs in an interrupter of the self-blast type have been performed. The mixing process of the hot PTFE cloud with the cold SF6 in the pressure chamber is strongly inhomogeneous. The existence of two different species has been taken into account by interpolation of the material functions according to their mass fraction in each grid cell. Depending on the arcing time, fault current, and interrupter geometry, blow temperatures of up to 2000 K have been found. The simulation results for a decaying arc immediately before current zero yield a significantly reduced arc cooling at the stagnation point for high blow temperatures.
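The per-cell species mixing described above can be sketched as a blend of the two species' material functions by mass fraction. A linear mixing rule is an assumption here for illustration; the abstract does not state the actual interpolation formula:

```python
def mixture_property(prop_ptfe, prop_fill_gas, x_ptfe):
    """Blend a material property (e.g. density, conductivity) between PTFE
    vapor and the cold fill gas by the PTFE mass fraction x_ptfe in a cell.
    Linear mixing is assumed; real mixing rules may be nonlinear."""
    if not 0.0 <= x_ptfe <= 1.0:
        raise ValueError("mass fraction must be in [0, 1]")
    return x_ptfe * prop_ptfe + (1.0 - x_ptfe) * prop_fill_gas
```

A cell containing no PTFE (x_ptfe = 0) recovers the pure fill-gas property, and a cell of pure ablated PTFE (x_ptfe = 1) recovers the PTFE property.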
What Will Science Gain From Mapping the World Ocean Floor?
NASA Astrophysics Data System (ADS)
Jakobsson, M.
2017-12-01
It is difficult to estimate how much of the World Ocean floor topography (bathymetry) has been mapped. Estimates range from a few to more than ten percent of the World Ocean area. The most recent version of the bathymetric grid compiled by the General Bathymetric Chart of the Oceans (GEBCO) has bathymetric control points in 18% of its 30 x 30 arc-second grid cells. The depth values for the rest of the cells are obtained through interpolation guided by satellite altimetry in deep water. With this statistic at hand, it seems tenable to suggest that there are many scientific discoveries to be made from a complete high-resolution mapping of the World Ocean floor. In this presentation, some of our recent scientific discoveries based on modern multibeam bathymetric mapping will be highlighted and discussed: for example, how multibeam mapping provided evidence for a km-thick ice shelf covering the entire Arctic Ocean during peak glacial conditions, a hypothesis proposed nearly half a century ago, and how groundwater escape features are visible in high-resolution bathymetry in the Baltic Sea, with potential implications for the freshwater budget and distribution of nutrients and pollutants. Presented examples will be placed in the context of mapping resolution, systematic surveys versus mapping along transits, and scientific hypothesis-driven mapping versus ocean exploration. The newly announced Nippon Foundation - GEBCO Seabed 2030 project has the vision to map 100% of the World Ocean floor by 2030. Are there specific scientific areas where we can expect new discoveries from the mapping data collected through the Seabed 2030 project? Are there outstanding hypotheses that can be tested with a fully mapped World Ocean floor?
NASA Astrophysics Data System (ADS)
Pompe, L.; Clausen, B. L.; Morton, D. M.
2014-12-01
The Cretaceous northern Peninsular Ranges batholith (PRB) exemplifies emplacement in a combined oceanic arc / continental margin arc setting. Two approaches that can aid in understanding its statistical and spatial geochemical variation are principal component analysis (PCA) and GIS interpolation mapping. The data analysis primarily used 287 samples from the large granitoid geochemical data set systematically collected by Baird and Welday. Of these, 80 points fell in the western Santa Ana block, 108 in the transitional Perris block, and 99 in the eastern San Jacinto block. In the statistical analysis, multivariate outliers were identified using the Mahalanobis distance and excluded. A centered log-ratio transformation was used to facilitate working with geochemical concentration values that range over many orders of magnitude. The data were then analyzed using PCA with IBM SPSS 21, reducing 40 geochemical variables to 4 components approximately related to the compatible, HFS, HRE, and LIL elements. The 4 components were interpreted as follows: (1) compatible [and negatively correlated incompatible] elements indicate extent of differentiation as typified by SiO2, (2) HFS elements indicate crustal contamination as typified by Sri and Nb/Yb ratios, (3) HRE elements indicate source depth as typified by Sr/Y and Gd/Yb ratios, and (4) LIL elements indicate alkalinity as typified by the K2O/SiO2 ratio. Spatial interpolation maps of the 4 components were created with Esri ArcGIS for Desktop 10.2 by interpolating between the sample points using kriging and inverse distance weighting. Across-arc trends on the interpolation maps indicate a general increase from west to east for each of the 4 components, but with local exceptions as follows. The 15 km offset on the San Jacinto Fault may be affecting the contours. South of San Jacinto is a west-east band of low Nb/Yb, Gd/Yb, and Sr/Y ratios.
The highest Sr/Y ratios in the north central area that decrease further east may be due to the far eastern granitoids being transported above a shear zone. Along the western edge of the PRB, high SiO2 and K2O/SiO2 are interpreted to result from sampling shallow levels in the batholith (2-3 kb), as compared to deeper levels in the central (5-6 kb) and eastern (4.5 kb) areas.
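The clr-transform-plus-PCA pipeline used above can be sketched in a few lines. The SVD-based implementation and names are illustrative, not the SPSS 21 workflow used in the study:

```python
import numpy as np

def clr(X):
    """Centered log-ratio transform of compositional data
    (rows = samples, columns = element concentrations, all positive)."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def pca_components(X, k):
    """Leading k principal components via SVD of the column-centered data.
    Returns (scores, loadings): sample scores and component loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]
```

The clr step handles the orders-of-magnitude range of concentration values (and the constant-sum constraint of compositions) before PCA is applied; each clr row sums to zero by construction.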
Empirical study on human acupuncture point network
NASA Astrophysics Data System (ADS)
Li, Jian; Shen, Dan; Chang, Hui; He, Da-Ren
2007-03-01
Chinese medical theory is ancient and profound, but remains confined to qualitative and imprecise understanding. The effect of Chinese acupuncture in clinical practice is unique and effective, and the human acupuncture points play a mysterious and special role; however, there is to date no modern scientific understanding of human acupuncture points. For this reason, we attempt to use complex network theory, one of the frontiers of statistical physics, to describe the human acupuncture points and their connections. In the network, nodes are defined as the acupuncture points, and two nodes are connected by an edge when they are used for the medical treatment of a common disease. A disease is defined as an act. Some statistical properties have been obtained. The results certify that the degree distribution, the act-degree distribution, and the dependence of the clustering coefficient on both of them obey an SPL distribution function, which interpolates between a power law and an exponential decay. The results may be helpful for understanding Chinese medical theory.
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces in Shanxi Province in northwest China, a new 3D visual method, namely the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well by use of an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolation method, the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.
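The two empirical guidelines reported above can be captured in a small check. The helper name and the bundling of both conditions into one predicate are assumptions; the thresholds mirror the text (all lengths in meters):

```python
def teivm_params_ok(cell_size, eiv, avg_width, avg_step_offset, avg_height):
    """Check the empirical TEIVM guidelines: the cell size should be
    smaller than half of both the terraces' average width and the average
    vertical offset of the terrace steps, and the EIV should exceed 4.6
    times the terraces' average height."""
    return (cell_size < 0.5 * min(avg_width, avg_step_offset)
            and eiv > 4.6 * avg_height)
```

For the reported optimum (cell size 1 m, EIV 6 m), the check passes for any terrace geometry whose average width and step offset exceed 2 m and whose average height is below about 1.3 m.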
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
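Two of the three regularizers compared above can be sketched directly; both act on the (possibly ill-conditioned) matrix that is pseudo-inverted when designing the interpolator. These are generic textbook forms, not the paper's LS_NUFFT code:

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular
    values, discarding the small ones that make the problem
    ill-conditioned."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

def tikhonov_pinv(A, lam):
    """Tikhonov-regularized pseudoinverse A^T (A A^T + lam I)^(-1):
    damps small singular values instead of truncating them."""
    m = A.shape[0]
    return A.T @ np.linalg.inv(A @ A.T + lam * np.eye(m))
```

For a well-conditioned matrix both reduce to the ordinary inverse (full k, vanishing lam); they differ only in how they treat the small singular values that appear as the interpolator size grows.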
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Grid adaption using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Grid adaptation using chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to bridge seemingly unrelated areas of research.
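The grid-particle interpolation referred to above can be illustrated with the simplest 1-D cloud-in-cell transfer, in which each particle's quantity is split linearly between its two neighboring grid nodes. This is an illustrative sketch of the generic technique, not the authors' code:

```python
import numpy as np

def particles_to_grid(x, q, n, h=1.0):
    """1-D cloud-in-cell particle-to-grid interpolation.

    x : particle positions;  q : particle quantities (e.g. vorticity)
    n : number of grid nodes;  h : grid spacing
    Each particle deposits (1 - f) of q on its left node and f on its
    right node, where f is the fractional position within the cell.
    """
    grid = np.zeros(n)
    i = np.floor(x / h).astype(int)
    f = x / h - i
    # np.add.at performs unbuffered scatter-adds, so particles sharing
    # a node accumulate correctly.
    np.add.at(grid, i, (1 - f) * q)
    np.add.at(grid, i + 1, f * q)
    return grid
```

The linear split conserves the total deposited quantity exactly, a property the same kernel shares when used in the reverse grid-to-particle direction.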
Theoretical oscillation frequencies for solar-type dwarfs from stellar models with 〈3D〉-atmospheres
NASA Astrophysics Data System (ADS)
Jørgensen, Andreas Christ Sølvsten; Weiss, Achim; Mosumgaard, Jakob Rørsted; Silva Aguirre, Victor; Sahlholdt, Christian Lundsgaard
2017-12-01
We present a new method for replacing the outermost layers of stellar models with interpolated atmospheres based on results from 3D simulations, in order to correct for structural inadequacies of these layers. This replacement is known as patching. Tests, based on 3D atmospheres from three different codes and interior models with different input physics, are performed. Using solar models, we investigate how different patching criteria affect the eigenfrequencies. These criteria include the depth, at which the replacement is performed, the quantity, on which the replacement is based, and the mismatch in Teff and log g between the un-patched model and patched 3D atmosphere. We find the eigenfrequencies to be unaltered by the patching depth deep within the adiabatic region, while changing the patching quantity or the employed atmosphere grid leads to frequency shifts that may exceed 1 μHz. Likewise, the eigenfrequencies are sensitive to mismatches in Teff or log g. A thorough investigation of the accuracy of a new scheme, for interpolating mean 3D stratifications within the atmosphere grids, is furthermore performed. Throughout large parts of the atmosphere grids, our interpolation scheme yields sufficiently accurate results for the purpose of asteroseismology. We apply our procedure in asteroseismic analyses of four Kepler stars and draw the same conclusions as in the solar case: Correcting for structural deficiencies lowers the eigenfrequencies, this correction is slightly sensitive to the patching criteria, and the remaining frequency discrepancy between models and observations is less frequency dependent. Our work shows the applicability and relevance of patching in asteroseismology.
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. A missing part of the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the 3 months of the season in which the missing data occur. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data in the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should preferably not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
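The core of the approach is a regression fitted between a correlated, gap-free measuring point and the point with missing data. A minimal sketch of that restoration step (the function name and the synthetic stress series are invented for illustration; this is not the paper's implementation):

```python
import numpy as np

def restore_missing(ref, target, missing_mask):
    """Fill target[missing_mask] from a correlated reference series via a
    least-squares linear fit on time steps where both series are observed."""
    ok = ~missing_mask
    slope, intercept = np.polyfit(ref[ok], target[ok], 1)
    restored = target.copy()
    restored[missing_mask] = slope * ref[missing_mask] + intercept
    return restored

# Synthetic example: target is exactly 2*ref + 1, so restoration is exact.
ref = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
target = 2.0 * ref + 1.0
mask = np.array([False, False, True, False, True])
target_with_gaps = target.copy()
target_with_gaps[mask] = np.nan          # simulate the missing data
filled = restore_missing(ref, target_with_gaps, mask)
```

In practice the fit quality (and hence the ~5% error quoted above) depends on the correlation coefficient between the two points, which is why the paper restricts the method to coefficients of 0.9 or more.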
NASA Astrophysics Data System (ADS)
Mowlavi, N.; Eggenberger, P.; Meynet, G.; Ekström, S.; Georgy, C.; Maeder, A.; Charbonnel, C.; Eyer, L.
2012-05-01
Aims: We present dense grids of stellar models suitable for comparison with observable quantities measured with great precision, such as those derived from binary systems or planet-hosting stars. Methods: We computed new Geneva models without rotation at metallicities Z = 0.006, 0.01, 0.014, 0.02, 0.03, and 0.04 (i.e. [Fe/H] from -0.33 to +0.54) and with mass in small steps from 0.5 to 3.5 M⊙. Great care was taken in the procedure for interpolating between tracks in order to compute isochrones. Results: Several properties of our grids are presented as a function of stellar mass and metallicity. Those include surface properties in the Hertzsprung-Russell diagram, internal properties including mean stellar density, sizes of the convective cores, and global asteroseismic properties. Conclusions: We checked our interpolation procedure and compared interpolated tracks with computed tracks. The deviations are less than 1% in radius and effective temperatures for most of the cases considered. We also checked that the present isochrones provide nice fits to four couples of observed detached binaries and to the observed sequences of the open clusters NGC 3532 and M 67. Including atomic diffusion in our models with M < 1.1 M⊙ leads to variations in the surface abundances that should be taken into account when comparing with observational data of stars with measured metallicities. For that purpose, iso-Zsurf lines are computed. These can be requested for download from a dedicated web page, together with tracks at masses and metallicities within the limits covered by the grids. The validity of the relations linking Z and [Fe/H] is also re-assessed in light of the surface abundance variations in low-mass stars. Table D.1 for the basic tracks is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/541/A41, and on our web site http://obswww.unige.ch/Recherche/evol/-Database-. 
Tables for interpolated tracks, iso-Zsurf lines and isochrones can be computed, on demand, from our web site.Appendices are available in electronic form at http://www.aanda.org
NASA Astrophysics Data System (ADS)
Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2017-02-01
A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could proceed with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variability and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
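The paper uses a time-varying gain matrix estimated from a 31-year window; the essence of the optimal-interpolation update can be sketched in its simplest scalar-gain form (all values invented; this illustrates the variance-weighted combination, not the paper's full covariance machinery):

```python
import numpy as np

def merge(background, obs, var_bg, var_obs):
    """Optimal-interpolation update: variance-weighted blend of a model
    background field with an observation-based reconstruction."""
    gain = var_bg / (var_bg + var_obs)      # scalar Kalman-type gain
    return background + gain * (obs - background)

model = np.array([0.2, -0.1, 0.5])   # simulated temperature anomalies (°C)
proxy = np.array([0.4,  0.1, 0.3])   # tree-ring reconstruction (°C)

# With equal error variances the gain is 0.5 and the merge is the midpoint.
merged = merge(model, proxy, var_bg=0.25, var_obs=0.25)
```

When the background error variance is larger than the observation error variance, the gain moves toward 1 and the merged field trusts the proxy more, and vice versa.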
On the interpolation of volumetric water content in research catchments
NASA Astrophysics Data System (ADS)
Dlamini, Phesheya; Chaplot, Vincent
Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation of data points affect prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), DSM of volumetric water content (θv), the product of θg and ρb, may involve either direct interpolation of θv (approach 1) or independent interpolation of the ρb and θg data points followed by multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa, was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for θg and ρb estimation. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST), and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). Approach 1 was found to underestimate θv. Approach 2 tended to overestimate θv, but it reduced the prediction bias by an average of 37% while improving the prediction accuracy by only 1.3% compared to approach 1. Such a benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected, considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches.
In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 therefore seems promising and can be further tested for DSM of other secondary variables.
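The contrast between the two approaches can be sketched with a 1-D toy example using IDW (the data values and the `idw` helper are invented for illustration; the study works on 2-D maps with several interpolators):

```python
import numpy as np

def idw(x_obs, y_obs, x_new, power=2.0, k=3):
    """Inverse distance weighting with the k nearest neighbours."""
    y_new = np.empty(len(x_new), dtype=float)
    for i, x in enumerate(x_new):
        d = np.abs(x_obs - x)
        if np.any(d == 0):                 # exact hit: return the observation
            y_new[i] = y_obs[np.argmin(d)]
            continue
        idx = np.argsort(d)[:k]            # k nearest neighbours
        w = 1.0 / d[idx] ** power
        y_new[i] = np.sum(w * y_obs[idx]) / np.sum(w)
    return y_new

x = np.array([0.0, 1.0, 2.0, 3.0])
rho_b   = np.array([1.2, 1.3, 1.1, 1.4])     # bulk density, g/cm^3
theta_g = np.array([10.0, 12.0, 9.0, 11.0])  # gravimetric water content, %
x_new = np.array([0.5, 1.5, 2.5])

# Approach 1: interpolate theta_v = rho_b * theta_g directly.
theta_v_direct = idw(x, rho_b * theta_g, x_new)
# Approach 2: interpolate each primary variable, then multiply the maps.
theta_v_product = idw(x, rho_b, x_new) * idw(x, theta_g, x_new)
```

Because the weighted average of a product is not the product of the weighted averages, the two approaches generally disagree away from the observation points, which is exactly the bias difference the study quantifies.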
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff from magnetic resonance images; the study aims to define an alternative display method that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information from the magnetic resonance images using trilinear interpolation. For generating the points used to texture each patch, we propose a new method that guarantees a uniform distribution of points using a random statistical method. Its computational cost, defined as the average computing time needed to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281
Three-Dimensional Unsteady Separation at Low Reynolds Numbers
1990-07-01
A novel, robust adaptive-grid technique for incompressible flow (Shen & Reed 1990a, "Shepard's Interpolation for Solution-Adaptive Methods", submitted). 3-D adaptive-grid schemes developed for a flat plate for the full unsteady incompressible Navier-Stokes equations. 2-D and 3-D unsteady vortex-lattice codes. Wall perforated to tailor suction through the wall. [Figure 3.2 residue: honeycomb and contraction guide flow uniformly; revolving-disc seals.]
Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...
2016-08-04
This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
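The hyperspherical representation itself is a standard coordinate transform: a point in R^d becomes a radius plus d-1 angles, so a discontinuity surface enclosing a point becomes a single-valued function of the angles. A sketch of the forward transform (standard convention; this only illustrates the coordinate change, not the authors' framework):

```python
import numpy as np

def to_hyperspherical(x):
    """Cartesian point in R^d -> radius r and d-1 angles."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    r = np.linalg.norm(x)
    phis = np.zeros(d - 1)
    for i in range(d - 2):
        tail = np.linalg.norm(x[i:])           # norm of remaining coords
        phis[i] = 0.0 if tail == 0.0 else np.arccos(x[i] / tail)
    phis[d - 2] = np.arctan2(x[-1], x[-2])     # last angle carries the sign
    return r, phis

r, phis = to_hyperspherical([0.0, 0.0, 1.0])
```

In the detection setting, one fixes the angles, searches along the ray for the jump radius r(angles), and then approximates the smooth function r over the angular domain with a sparse grid or RBF interpolant.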
Blended sea level anomaly fields with enhanced coastal coverage along the U.S. West Coast
Risien, C.M.; Strub, P.T.
2016-01-01
We form a new ‘blended’ data set of sea level anomaly (SLA) fields by combining gridded daily fields derived from altimeter data with coastal tide gauge data. Within approximately 55–70 km of the coast, the altimeter data are discarded and replaced by a linear interpolation between the tide gauge and remaining offshore altimeter data. To create a common reference height for altimeter and tide gauge data, a 20-year mean is subtracted from each time series (from each tide gauge and altimeter grid point) before combining the data sets to form a blended mean sea level anomaly (SLA) data set. Daily mean fields are produced for the 22-year period 1 January 1993–31 December 2014. The primary validation compares geostrophic velocities calculated from the height fields and velocities measured at four moorings covering the north-south range of the new data set. The blended data set improves the alongshore (meridional) component of the currents, indicating an improvement in the cross-shelf gradient of the mean SLA data set. PMID:26927667
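The coastal replacement step can be sketched for a single cross-shore transect (distances, band width, and SLA values are invented; the real product works on 2-D daily fields):

```python
import numpy as np

def blend_coastal(dist_km, sla_alt, sla_gauge, band_km=60.0):
    """Inside the coastal band, replace altimeter SLA by a linear
    interpolation between the tide-gauge anomaly at the coast and the
    first retained offshore altimeter value.

    dist_km: distance from coast for each grid point, increasing offshore.
    """
    sla = np.array(sla_alt, dtype=float)
    inside = dist_km < band_km
    j = int(np.argmax(~inside))            # first offshore point kept
    frac = dist_km[inside] / dist_km[j]    # 0 at coast, 1 at point j
    sla[inside] = (1 - frac) * sla_gauge + frac * sla_alt[j]
    return sla

dist = np.array([10.0, 30.0, 50.0, 70.0, 90.0])   # km from coast
alt  = np.array([0.5, 0.4, 0.2, 0.1, 0.05])       # altimeter SLA, m
blended = blend_coastal(dist, alt, sla_gauge=0.3)
```

Subtracting a common long-term mean from both data sources before this step, as described above, is what makes the gauge and altimeter anomalies directly comparable.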
Anlauf, Ruediger; Schaefer, Jenny; Kajitvichyanukul, Puangrat
2018-07-01
HYDRUS-1D is a well-established, reliable instrument for simulating water and pesticide transport in soils. It is, however, a point-specific model that is usually used for site-specific simulations. The aim of the investigation was the development of pesticide accumulation and leaching risk maps for regions, combining HYDRUS-1D as a model for pesticide fate with regional data in a geographic information system (GIS). It was realized in the form of a Python tool in ArcGIS. The necessary high-resolution local soil information, however, is very often not available. Therefore, worldwide interpolated 250-m-grid soil data (SoilGrids.org) were successfully incorporated into the system. The functionality of the system is demonstrated with examples from Thailand, where example regions that differ in soil properties and climatic conditions were exposed in the model system to pesticides with different properties. A practical application of the system will be the identification of areas where measures to optimize pesticide use should be implemented with priority. Copyright © 2018 Elsevier Ltd. All rights reserved.
A new method for estimating carbon dioxide emissions from transportation at fine spatial scales
Shu, Yuqin; Reams, Margaret
2016-01-01
Detailed estimates of carbon dioxide (CO2) emissions at fine spatial scales are useful to both modelers and decision makers who are faced with the problem of global warming and climate change. Globally, transport-related emissions of carbon dioxide are growing. This letter presents a new method based on the volume-preserving principle in the areal interpolation literature to disaggregate transportation-related CO2 emission estimates from the county-level scale to a 1 km2 grid scale. The proposed volume-preserving interpolation (VPI) method, together with the distance-decay principle, was used to derive emission weights for each grid cell based on its proximity to highways, roads, railroads, waterways, and airports. The total CO2 emission value summed over the grid cells within a county is made equal to the original county-level estimate, thus enforcing the volume-preserving property. The method was applied to downscale the transportation-related CO2 emission values by county (i.e. parish) for the state of Louisiana into 1 km2 grids. The results reveal a more realistic spatial pattern of CO2 emissions from transportation, which can be used to identify emission ‘hot spots’. Of the four highest transportation-related CO2 emission hotspots in Louisiana, high-emission grid cells covered the entire East Baton Rouge Parish and Orleans Parish, whereas CO2 emissions in Jefferson Parish (a New Orleans suburb) and Caddo Parish (the city of Shreveport) were more unevenly distributed. We argue that the new method is sound in principle, flexible in practice, and that the resultant estimates are more accurate than previous gridding approaches. PMID:26997973
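The volume-preserving constraint is simply that the weighted disaggregation is normalized to the county total. A toy sketch of that principle (the exponential distance-decay weight and all numbers are invented; the paper derives weights from proximity to several transport networks):

```python
import numpy as np

def disaggregate(county_total, dist_to_roads_km, decay=1.0):
    """Distribute a county-level emission total over grid cells with
    distance-decay weights, normalized so the cells sum to the total."""
    w = np.exp(-decay * np.asarray(dist_to_roads_km, dtype=float))
    w /= w.sum()                    # normalization enforces volume preservation
    return county_total * w

# Four hypothetical 1 km^2 cells at increasing distance from the road network.
cells = disaggregate(1000.0, [0.1, 0.5, 2.0, 5.0])
```

Whatever weighting scheme is chosen, the final normalization guarantees the pycnophylactic (volume-preserving) property: re-aggregating the grid reproduces the county estimate exactly.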
Quadratic polynomial interpolation on triangular domain
NASA Astrophysics Data System (ADS)
Li, Ying; Zhang, Congcong; Yu, Qian
2018-04-01
In the simulation of natural terrain, the continuity of the sampled points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, and the boundary points are treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute and has good local properties, making it applicable to shape fitting of mines, exploratory wells, and similar data. The resulting surface is shown in experiments.
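A basic ingredient of any such construction is quadratic interpolation over a single triangle. A generic sketch using the standard six-node Lagrange basis in barycentric coordinates (the paper's patch construction is more elaborate; this only illustrates the building block, with invented data):

```python
import numpy as np

def quad_tri_interp(verts, nodal_vals, p):
    """Quadratic interpolation on a triangle.

    verts: 3x2 array of triangle vertices.
    nodal_vals: values at the 3 vertices and 3 edge midpoints
                (order: v0, v1, v2, m01, m12, m20).
    p: query point inside the triangle.
    """
    # Barycentric coordinates of p.
    T = np.column_stack((verts[1] - verts[0], verts[2] - verts[0]))
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - verts[0])
    l0 = 1.0 - l1 - l2
    # Six-node quadratic Lagrange basis.
    basis = np.array([l0 * (2 * l0 - 1), l1 * (2 * l1 - 1), l2 * (2 * l2 - 1),
                      4 * l0 * l1, 4 * l1 * l2, 4 * l2 * l0])
    return basis @ np.asarray(nodal_vals, dtype=float)

# The scheme exactly reproduces quadratics, e.g. f(x, y) = x*y.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
f = lambda q: q[0] * q[1]
nodes = [verts[0], verts[1], verts[2],
         (verts[0] + verts[1]) / 2, (verts[1] + verts[2]) / 2,
         (verts[2] + verts[0]) / 2]
vals = [f(q) for q in nodes]
approx = quad_tri_interp(verts, vals, (0.3, 0.4))
```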
NASA Astrophysics Data System (ADS)
van Osnabrugge, Bart; Weerts, Albrecht; Uijlenhoet, Remko
2017-04-01
Gridded areal precipitation, one of the most important hydrometeorological input variables for initial state estimation in operational hydrological forecasting, is available in the form of raster data sets (e.g. HYRAS and EOBS) for the River Rhine basin. These datasets are compiled off-line on a daily time step using station data with the highest possible spatial density. However, such a product is not available operationally and at an hourly discretisation. Therefore, we constructed an hourly gridded precipitation dataset at 1.44 km2 resolution for the Rhine basin for the period from 1998 to present, using a REGNIE-like interpolation procedure (Weerts et al., 2008) with a low- and a high-density rain gauge network. The datasets were validated against daily HYRAS (Rauthe et al., 2013) and EOBS (Haylock et al., 2008) data. The main goal of the operational procedure is to emulate the HYRAS dataset as closely as possible, as the daily HYRAS dataset is used in the off-line calibration of the hydrological model. Our main findings are that even with low station density, the spatial patterns found in the HYRAS data set are well reproduced. With low station density (years 1999-2006) our dataset underestimates precipitation compared to HYRAS and EOBS, notably during winter. However, interpolation based on the same set of stations overestimates precipitation compared to EOBS for the years 2006-2014. This discrepancy disappears when switching to the high station density. We also analyze the robustness of the hourly precipitation fields by comparing with stations not used during interpolation. Specific issues with the data encountered when creating the gridded precipitation fields will be highlighted. Finally, the datasets are used to drive hourly and daily gridded WFLOW_HBV models of the Rhine at the same spatial resolution.
Haylock, M.R., Hofstra, N., Klein Tank, A.M.G., Klok, E.J., Jones, P.D., and New, M., 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119, doi:10.1029/2008JD10201. Rauthe, M., Steiner, H., Riediger, U., Mazurkiewicz, A., and Gratzki, A., 2013: A Central European precipitation climatology - Part 1: Generation and validation of a high-resolution gridded daily data set (HYRAS). Meteorologische Zeitschrift, 22(3), 235-256. Weerts, A.H., Meißner, D., and Rademacher, S., 2008: Input data rainfall-runoff model operational system FEWS-NL & FEWS-DE. Technical report, Deltares.
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists of implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto
2013-04-01
Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy-to-apply, and open source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we will here present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for the analysis of spatial objects (Bivand et al. 2008), and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been interpolation along river networks. We will present some examples showing how the package can easily be used for such interpolation. The package will soon be uploaded to CRAN, but is in the meantime also available from R-Forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied Spatial Data Analysis with R. Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40(1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45(6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.
The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework
NASA Astrophysics Data System (ADS)
Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.
2016-12-01
The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community over many years to develop nested gridding standards to tackle these issues. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard, developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed, areal-based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query, and analysis. The geometry and location of the cells are the principal aspects of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling.
During the 6th Session of the UN-GGIM in August 2016 the role of DGGS in the context of the GSGF was formally acknowledged. This paper proposes to highlight the synergies and role of DGGS in the Global Statistical Geospatial Framework and to show examples of the use of DGGS to combine geospatial statistics with traditional geoscientific data.
Objective high Resolution Analysis over Complex Terrain with VERA
NASA Astrophysics Data System (ADS)
Mayer, D.; Steinacker, R.; Steiner, A.
2012-04-01
VERA (Vienna Enhanced Resolution Analysis) is a model-independent, high-resolution objective analysis of meteorological fields over complex terrain. The system consists of a specially developed quality control procedure and a combination of an interpolation and a downscaling technique. Whereas the so-called VERA-QC is presented at this conference in the contribution titled "VERA-QC, an approved Data Quality Control based on Self-Consistency" by Andrea Steiner, this presentation will focus on the method and the characteristics of the VERA interpolation scheme, which computes grid point values of a meteorological field from irregularly distributed observations and topography-related a priori knowledge. Over complex topography, meteorological fields are in general not smooth. The roughness induced by the topography can be explained physically. Knowledge of this behavior is used to define so-called Fingerprints (e.g. a thermal Fingerprint reproducing heating or cooling over mountainous terrain, or a dynamical Fingerprint reproducing a positive pressure perturbation on the windward side of a ridge) under idealized conditions. If the VERA algorithm recognizes patterns of one or more Fingerprints at a few observation points, the corresponding patterns are used to downscale the meteorological information in a greater surrounding. This technique makes it possible to achieve an analysis with a resolution much higher than that of the observational network. The interpolation of irregularly distributed stations to a regular grid (in space and time) is based on a variational principle applied to first- and second-order spatial and temporal derivatives. Mathematically, this can be formulated as a cost function that is equivalent to the penalty function of a thin plate smoothing spline.
After the analysis field has been divided into the Fingerprint components and the unexplained part, the requirement of a smooth distribution is applied to the latter component only (the Fingerprint field is rough by definition). To obtain the final analysis field, the unexplained component is combined with the weighted Fingerprint patterns. Operationally, VERA is run at our Department on an hourly basis, analyzing temperature, pressure, wind, and precipitation observations for several domains across the world. VERA analyses are used for nowcasting purposes, for establishing climate databases, and for model verification. Furthermore, VERA can be of interest to anyone who has a PC but no access to a complex data assimilation system, which is generally available only at numerical weather prediction centers.
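The variational principle behind the interpolation can be illustrated in one dimension: minimize the data misfit plus a penalty on squared second differences, which is the discrete analogue of a smoothing spline. A toy sketch (grid, observations, and the smoothing parameter are invented; VERA works in 2-D/3-D with additional Fingerprint terms):

```python
import numpy as np

# Regular analysis grid and a few irregular observations on it.
n = 21
obs_idx = np.array([0, 5, 10, 15, 20])            # station locations (grid indices)
obs_val = np.array([0.0, 0.8, 1.0, 0.8, 0.0])     # observed values

# Observation operator H and second-difference (curvature) operator D.
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Minimize ||H f - y||^2 + lam * ||D f||^2  (normal equations).
lam = 1e-3
f = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ obs_val)
```

Increasing `lam` trades fidelity to the observations for smoothness of the analysis field; in VERA this penalty is applied only to the part of the field not explained by the Fingerprints.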
An interactive grid generation procedure for axial and radial flow turbomachinery
NASA Technical Reports Server (NTRS)
Beach, Timothy A.
1989-01-01
A combined algebraic/elliptic technique is presented for the generation of three-dimensional grids about turbomachinery blade rows for both axial and radial flow machinery. The technique is built around the use of an advanced engineering workstation to construct several two-dimensional grids interactively on predetermined blade-to-blade surfaces. A three-dimensional grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade, to control orthogonality within the boundary layer region, and elliptic techniques in the mid-passage, to achieve smoothness. The interactive definition of Bézier curves as internal boundaries is the key to simple construction. This procedure lends itself well to zonal grid construction, an important example being the tip clearance region. Calculations done to date include a space shuttle main engine turbopump blade, a radial inflow turbine blade, and the first stator of the United Technologies Research Center large-scale rotating rig. A finite Navier-Stokes solver was used in each case.
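The algebraic part of such schemes is typically transfinite interpolation (a Coons patch), which fills a block from its four boundary curves. A minimal sketch on an invented block (here a unit square, where the result reduces to a uniform grid):

```python
import numpy as np

def tfi(bottom, top, left, right):
    """Transfinite (Coons) interpolation of a 2-D block interior.

    bottom/top: (n, 2) boundary curves; left/right: (m, 2) boundary curves.
    The four corner points must agree. Returns a grid of shape (m, n, 2).
    """
    n, m = len(bottom), len(left)
    u = np.linspace(0.0, 1.0, n)[None, :, None]
    v = np.linspace(0.0, 1.0, m)[:, None, None]
    B, T = bottom[None, :, :], top[None, :, :]
    L, R = left[:, None, :], right[:, None, :]
    c00, c10, c01, c11 = bottom[0], bottom[-1], top[0], top[-1]
    # Sum of the two linear lofts minus the bilinear corner correction.
    return ((1 - v) * B + v * T + (1 - u) * L + u * R
            - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
               + (1 - u) * v * c01 + u * v * c11))

n, m = 5, 4
u = np.linspace(0, 1, n)
v = np.linspace(0, 1, m)
bottom = np.column_stack([u, np.zeros(n)])
top    = np.column_stack([u, np.ones(n)])
left   = np.column_stack([np.zeros(m), v])
right  = np.column_stack([np.ones(m), v])
grid = tfi(bottom, top, left, right)
```

With curved boundaries (e.g. Bézier curves defining blade surfaces and internal zone boundaries) the same formula produces a smoothly graded interior, which an elliptic pass can then further smooth.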
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.
2014-01-01
High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
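The best-performing combined methods above first regress rainfall on geographic covariables and then interpolate the residuals. A sketch of that two-step idea with ordinary least squares plus IDW of the residuals (the elevation covariable, station layout, and rainfall values are invented; the study uses step-wise LMR/GWR with several covariables):

```python
import numpy as np

def regression_idw(xy, z, elev, xy_new, elev_new, power=2.0):
    """Step 1: linear regression of z on elevation.
       Step 2: IDW interpolation of the regression residuals."""
    A = np.column_stack([np.ones_like(elev), elev])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    out = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        d = np.linalg.norm(xy - p, axis=1)
        if np.any(d == 0):                 # exact hit: return the observation
            out[i] = z[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[i] = coef[0] + coef[1] * elev_new[i] + np.sum(w * resid) / w.sum()
    return out

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([100.0, 200.0, 300.0, 400.0])            # m a.s.l.
rain = 0.01 * elev + np.array([0.1, -0.1, 0.05, -0.05])  # mm/day
est = regression_idw(xy, rain, elev, np.array([[0.5, 0.5]]),
                     np.array([250.0]))
```

The regression term carries the large-scale orographic signal, while the residual interpolation restores local deviations, which matches the finding above that the combination helps most for large-scale events.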
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
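Transfinite interpolation, used above to mesh each block, fills a block from its four boundary curves. A minimal 2-D sketch, assuming uniform parameter spacing and matching corner points:

```python
import numpy as np

def tfi(bottom, top, left, right):
    """Transfinite interpolation: fill a structured block from its
    four boundary curves (arrays of (x, y) points).

    bottom/top run in the xi direction (n points each), left/right in
    the eta direction (m points each); corners must match where the
    curves meet.
    """
    bottom, top = np.asarray(bottom, float), np.asarray(top, float)
    left, right = np.asarray(left, float), np.asarray(right, float)
    n, m = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, n)
    eta = np.linspace(0.0, 1.0, m)
    grid = np.zeros((m, n, 2))
    for j, e in enumerate(eta):
        for i, x in enumerate(xi):
            # Linear blend of opposite edges minus the corner correction
            grid[j, i] = ((1 - x) * left[j] + x * right[j]
                          + (1 - e) * bottom[i] + e * top[i]
                          - (1 - x) * (1 - e) * bottom[0]
                          - x * (1 - e) * bottom[-1]
                          - (1 - x) * e * top[0]
                          - x * e * top[-1])
    return grid

# Straight-edged unit square: TFI should reproduce a uniform grid
bottom = [(0, 0), (0.5, 0), (1, 0)]
top    = [(0, 1), (0.5, 1), (1, 1)]
left   = [(0, 0), (0, 0.5), (0, 1)]
right  = [(1, 0), (1, 0.5), (1, 1)]
grid = tfi(bottom, top, left, right)   # grid[1, 1] is the block centre
```

In the methods above this algebraic step provides the initial mesh that elliptic smoothing then improves.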
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
A revised ground-motion and intensity interpolation scheme for shakemap
Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.
2010-01-01
We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
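For independent estimates at a single grid point, the uncertainty-weighted combination described above reduces to an inverse-variance weighted mean. The numbers below are hypothetical, and the actual ShakeMap weighting also handles spatial correlation and conversion biases that this sketch omits:

```python
import numpy as np

def combine(estimates, variances):
    """Uncertainty-weighted average of independent estimates of the
    same ground-motion quantity, plus the variance of the result.

    Weights are inverse variances, so precise observations dominate
    and loosely constrained model estimates fill the gaps.
    """
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    w = 1.0 / var
    mean = np.dot(w, est) / w.sum()
    return mean, 1.0 / w.sum()

# Hypothetical values at one grid point: a direct observation (small
# variance), a converted intensity, and a prediction-equation estimate
value, var = combine([0.30, 0.36, 0.50], [0.01, 0.04, 0.25])
```

The combined variance is smaller than any input variance, which is the "total uncertainty" by-product the abstract mentions.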
Kittel, T.G.F.; Rosenbloom, N.A.; Royle, J. Andrew; Daly, Christopher; Gibson, W.P.; Fisher, H.H.; Thornton, P.; Yates, D.N.; Aulenbach, S.; Kaufman, C.; McKeown, R.; Bachelet, D.; Schimel, D.S.; Neilson, R.; Lenihan, J.; Drapek, R.; Ojima, D.S.; Parton, W.J.; Melillo, J.M.; Kicklighter, D.W.; Tian, H.; McGuire, A.D.; Sykes, M.T.; Smith, B.; Cowling, S.; Hickler, T.; Prentice, I.C.; Running, S.; Hibbard, K.A.; Post, W.M.; King, A.W.; Smith, T.; Rizzo, B.; Woodward, F.I.
2004-01-01
Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), a biogeochemical and dynamic vegetation model intercomparison. The dataset covers the period 1895-1993 on a 0.5° latitude/longitude grid. Climate is represented at both monthly and daily timesteps. Variables are: precipitation, minimum and maximum temperature, total incident solar radiation, daylight-period irradiance, vapor pressure, and daylight-period relative humidity. The dataset was derived from US Historical Climate Network (HCN), cooperative network, and snowpack telemetry (SNOTEL) monthly precipitation and mean minimum and maximum temperature station data. We employed techniques that rely on geostatistical and physical relationships to create the temporally and spatially complete dataset. We developed a local kriging prediction model to infill discontinuous and limited-length station records based on the spatial autocorrelation structure of climate anomalies. A spatial interpolation model (PRISM) that accounts for physiographic controls was used to grid the infilled monthly station data. We implemented a stochastic weather generator (modified WGEN) to disaggregate the gridded monthly series to dailies. Radiation and humidity variables were estimated from the dailies using a physically based empirical surface climate model (MTCLIM3). Derived datasets include a 100 yr model spin-up climate and a historical Palmer Drought Severity Index (PDSI) dataset. The VEMAP dataset exhibits statistically significant trends in temperature, precipitation, solar radiation, vapor pressure, and PDSI for US National Assessment regions.
The historical climate and companion datasets are available online at data archive centers. © Inter-Research 2004.
Merging gauge and satellite rainfall with specification of associated uncertainty across Australia
NASA Astrophysics Data System (ADS)
Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish
2013-08-01
Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
Advanced analysis of forest fire clustering
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Pereira, Mario; Golay, Jean
2017-04-01
Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study with a grid and by computing how many times more likely it is that m points selected at random will come from the same grid cell than under a completely random Poisson process. By changing the number of grid cells (the size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - the box-counting and sandbox-counting approaches. REFERENCES Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M. 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. 
Pattern Recognition, 48, 4070-4081.
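A minimal version of the multi-point Morisita estimate described above, on a single square grid and with hypothetical fire locations; the published estimator additionally scans a range of cell sizes to extract the scaling behaviour:

```python
import numpy as np

def multipoint_morisita(points, n_cells_per_side, m=2):
    """Multi-point Morisita index on a square grid covering the data.

    Ratio of the probability that m randomly chosen points fall in the
    same cell to the same probability under complete spatial randomness;
    values > 1 indicate clustering at that cell size.
    """
    pts = np.asarray(points, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    # Map points to cell indices; nudge the upper edge inside the grid
    idx = np.floor((pts - lo) / (hi - lo + 1e-12) * n_cells_per_side).astype(int)
    idx = np.clip(idx, 0, n_cells_per_side - 1)
    _, counts = np.unique(idx[:, 0] * n_cells_per_side + idx[:, 1],
                          return_counts=True)
    q = n_cells_per_side ** 2          # number of cells
    n_total = len(pts)
    # Falling factorials: ways to draw m points from one cell / from all
    num = sum(np.prod([c - k for k in range(m)]) for c in counts)
    den = np.prod([n_total - k for k in range(m)])
    return q ** (m - 1) * num / den

# Two tight clusters of four hypothetical fires each, opposite corners
pts = [(0, 0), (0.01, 0), (0, 0.01), (0.01, 0.01),
       (1, 1), (1.01, 1), (1, 1.01), (1.01, 1.01)]
i2 = multipoint_morisita(pts, n_cells_per_side=2, m=2)   # > 1: clustered
```

Repeating the computation over increasing m and decreasing cell size is what yields the intrinsic-dimension estimate cited in the references.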
Stalk, Chelsea A.; DeWitt, Nancy T.; Kindinger, Jack L.; Flocks, James G.; Reynolds, Billy J.; Kelso, Kyle W.; Fredericks, Joseph J.; Tuten, Thomas M.
2017-03-10
As part of the Barrier Island Comprehensive Monitoring Program (BICM), scientists from the U.S. Geological Survey (USGS) St. Petersburg Coastal and Marine Science Center conducted a nearshore single-beam bathymetry survey along the south-central coast of Louisiana, from Raccoon Point to Point Au Fer Island, in July 2015. The goal of the BICM program is to provide long-term data on Louisiana’s coastline and use this data to plan, design, evaluate, and maintain current and future barrier island restoration projects. The data described in this report will provide baseline bathymetric information for future research investigating island evolution, sediment transport, and recent and long-term geomorphic change, and will support modeling of future changes in response to restoration and storm impacts. The survey area encompasses more than 300 square kilometers of nearshore environment from Raccoon Point to Point Au Fer Island. This data series serves as an archive of processed single-beam bathymetry data, collected from July 22–29, 2015, under USGS Field Activity Number 2015-320-FA. Geographic information system data products include a 200-meter-cell-size interpolated bathymetry grid, trackline maps, and point data files. Additional files include error analysis maps, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.
Elliptic surface grid generation in three-dimensional space
NASA Technical Reports Server (NTRS)
Kania, Lee
1992-01-01
A methodology for surface grid generation in three dimensional space is described. The method solves a Poisson equation for each coordinate on arbitrary surfaces using successive line over-relaxation. The complete surface curvature terms were discretized and retained within the nonhomogeneous term in order to preserve surface definition; there is no need for conventional surface splines. Control functions were formulated to permit control of grid orthogonality and spacing. A method for interpolation of control functions into the domain was devised which permits their specification not only at the surface boundaries but within the interior as well. An interactive surface generation code which makes use of this methodology is currently under development.
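The elliptic smoothing step can be illustrated, in a much-reduced planar form, by relaxing interior nodes toward Laplace's equation in index space (control functions and surface-curvature terms set to zero). This is a sketch of the idea only, not the paper's successive line over-relaxation solver:

```python
import numpy as np

def laplace_smooth(x, y, iters=200):
    """Jacobi relaxation of interior grid nodes toward Laplace's
    equation; boundary nodes stay fixed.

    The simplest member of the elliptic-grid-generation family:
    with zero control functions, interior nodes converge to the
    discrete harmonic mean of their four index-space neighbours.
    """
    x, y = x.copy(), y.copy()
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# 5x5 grid on the unit square with one interior node pulled off-centre
X, Y = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
X[2, 2] += 0.3
Xs, Ys = laplace_smooth(X, Y)   # interior relaxes back toward uniform
```

Adding non-zero control functions to the right-hand side is what gives the orthogonality and spacing control described above.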
Procedure for locating 10 km UTM grid on Alabama County general highway maps
NASA Technical Reports Server (NTRS)
Paludan, C. T. N.
1975-01-01
Each county highway map has a geographic grid of degrees and tens of minutes in both longitude and latitude in the margins and within the map as intersection crosses. These will be used to locate the universal transverse mercator (UTM) grid at 10 km intervals. Since the maps used may have stretched or shrunk in height and/or width, interpolation should be done between the 10 min intersections when possible. A table of UTM coordinates of 10 min intersections is required and included. In Alabama, all eastings are referred to a false easting of 500,000 m at 87 deg W longitude (central meridian, CM).
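Interpolating between the 10-minute intersections amounts to bilinear interpolation within each graticule cell, which also compensates for uniform stretch or shrink of the paper map. The corner eastings below are hypothetical:

```python
def bilinear(x, y, corners):
    """Bilinear interpolation inside a quadrilateral cell.

    x, y are fractional positions (0-1) measured between the bounding
    10-minute intersections; corners holds the known values at
    (0,0), (1,0), (0,1), (1,1).
    """
    v00, v10, v01, v11 = corners
    return ((1 - x) * (1 - y) * v00 + x * (1 - y) * v10
            + (1 - x) * y * v01 + x * y * v11)

# Hypothetical UTM eastings (metres) at four graticule intersections
easting = bilinear(0.5, 0.5, (480000.0, 495000.0, 481000.0, 496000.0))
```

Doing the same for northings locates any point of the 10 km UTM grid between tabulated intersections.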
Applications and Improvement of a Coupled, Global and Cloud-Resolving Modeling System
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Chern, J.; Atlas, R.
2005-01-01
Recently Grabowski (2001) and Khairoutdinov and Randall (2001) have proposed the use of 2D CRMs as a "super parameterization" [or multi-scale modeling framework (MMF)] to represent cloud processes within atmospheric general circulation models (GCMs). In the MMF, a fine-resolution 2D CRM takes the place of the single-column parameterization used in conventional GCMs. A prototype Goddard MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM) is now being developed. The prototype includes the fvGCM run at 2.5° x 2° horizontal resolution with 32 vertical layers from the surface to 1 mb and the 2D (x-z) GCE using 64 horizontal and 32 vertical grid points with 4 km horizontal resolution and a cyclic lateral boundary. The time step for the 2D GCE would be 15 seconds, and the fvGCM-GCE coupling frequency would be 30 minutes (i.e. the fvGCM physical time step). We have successfully developed an fvGCM-GCE coupler for this prototype. Because the vertical coordinate of the fvGCM (a terrain-following floating Lagrangian coordinate) is different from that of the GCE (a z coordinate), vertical interpolations between the two coordinates are needed in the coupler. In interpolating fields from the GCE to the fvGCM, we use an existing fvGCM finite-volume piecewise parabolic mapping (PPM) algorithm, which conserves mass, momentum, and total energy. A new finite-volume PPM algorithm, which conserves mass, momentum and moist static energy in the z coordinate, is being developed for interpolating fields from the fvGCM to the GCE. At the meeting, we will discuss the major differences between the two MMFs (i.e., the CSU MMF and the Goddard MMF). We will also present performance and critical issues related to the MMFs. 
In addition, we will present multi-dimensional cloud datasets (i.e., a cloud data library) generated by the Goddard MMF that will be provided to the global modeling community to help improve the representation and performance of moist processes in climate models and to improve our understanding of cloud processes globally (the software tools needed to produce cloud statistics and to identify various types of clouds and cloud systems from both high-resolution satellite and model data will be also presented).
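The conservative vertical interpolation between the two coordinates can be illustrated with a first-order (piecewise-constant) 1-D analogue of the PPM remapping; the layer edges and values below are hypothetical, and the real coupler uses piecewise parabolic reconstructions:

```python
import numpy as np

def remap_conservative(src_edges, src_vals, dst_edges):
    """First-order conservative remap of a layer-mean field from one
    1-D grid to another, using overlap lengths as weights.

    A piecewise-constant stand-in for PPM remapping: the column
    integral is preserved exactly, which is the property the coupler
    needs for mass, momentum, and energy.
    """
    src_edges = np.asarray(src_edges, float)
    dst_edges = np.asarray(dst_edges, float)
    out = np.zeros(len(dst_edges) - 1)
    for j in range(len(out)):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        acc = 0.0
        for i, v in enumerate(src_vals):
            overlap = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if overlap > 0:
                acc += v * overlap
        out[j] = acc / (hi - lo)
    return out

# Two source layers remapped onto a differently spaced destination column
dst = remap_conservative([0, 1, 2], [10.0, 20.0], [0, 0.5, 2])
```

Summing layer value times layer thickness on either grid gives the same column total, which is the conservation property stated in the abstract.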
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares validation and multifactor data analysis, is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated by the new method for selected experimental titration curves are also compared with those from equivalence-point methods such as Gran or Fortuin.
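The inverse parabolic interpolation step has a closed-form solution: the vertex of the parabola through three points bracketing the extremum. A sketch with synthetic derivative values (the titration data themselves are hypothetical):

```python
def parabolic_extremum(x0, y0, x1, y1, x2, y2):
    """Analytical vertex abscissa of the parabola through three points.

    Applied to first-derivative values around the inflection of a
    titration curve, the vertex abscissa is taken as the end point.
    """
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Synthetic peak y = -(x - 3)^2 + 5 sampled at three titrant volumes
xs = (2.0, 3.5, 4.0)
ys = tuple(-(x - 3.0) ** 2 + 5.0 for x in xs)
peak = parabolic_extremum(xs[0], ys[0], xs[1], ys[1], xs[2], ys[2])
```

For exactly parabolic data the recovered vertex is exact; for real derivative values it is the analytical analogue of the fitting step described above.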
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used to interpolate groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects increased from 2001 to 2013, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in areas at the top and bottom. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking of farmland and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
Stevensson, Baltzar; Edén, Mattias
2011-03-28
We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation operates by minimizing the time-consuming calculation stages, sampling over a small number of Gaussian spherical quadrature (GSQ) orientations that are exploited to determine the spectral frequencies and amplitudes from a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to using ASG alone (besides greatly extending its scope of application), and by 1-2 orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled (13)C systems.
The collaborative historical African rainfall model: description and evaluation
Funk, Christopher C.; Michaelsen, Joel C.; Verdin, James P.; Artan, Guleid A.; Husak, Gregory; Senay, Gabriel B.; Gadain, Hussein; Magadazire, Tamuka
2003-01-01
In Africa the variability of rainfall in space and time is high, and the general availability of historical gauge data is low. This makes many food security and hydrologic preparedness activities difficult. In order to help overcome this limitation, we have created the Collaborative Historical African Rainfall Model (CHARM). CHARM combines three sources of information: climatologically aided interpolated (CAI) rainfall grids (monthly/0.5° ), National Centers for Environmental Prediction reanalysis precipitation fields (daily/1.875° ) and orographic enhancement estimates (daily/0.1° ). The first set of weights scales the daily reanalysis precipitation fields to match the gridded CAI monthly rainfall time series. This produces data with a daily/0.5° resolution. A diagnostic model of orographic precipitation, VDELB—based on the dot-product of the surface wind V and terrain gradient (DEL) and atmospheric buoyancy B—is then used to estimate the precipitation enhancement produced by complex terrain. Although the data are produced on 0.1° grids to facilitate integration with satellite-based rainfall estimates, the ‘true’ resolution of the data will be less than this value, and varies with station density, topography, and precipitation dynamics. The CHARM is best suited, therefore, to applications that integrate rainfall or rainfall-driven model results over large regions. The CHARM time series is compared with three independent datasets: dekadal satellite-based rainfall estimates across the continent, dekadal interpolated gauge data in Mali, and daily interpolated gauge data in western Kenya. These comparisons suggest reasonable accuracies (standard errors of about half a standard deviation) when data are aggregated to regional scales, even at daily time steps. Thus constrained, numerical weather prediction precipitation fields do a reasonable job of representing large-scale diurnal variations.
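The first CHARM weighting step, scaling daily reanalysis fields so they sum to the CAI monthly totals, can be sketched as follows. The values are hypothetical, and the dry-month fallback is an assumption of this sketch rather than a documented part of CHARM:

```python
import numpy as np

def scale_daily_to_monthly(daily, monthly_target):
    """Rescale a month of daily precipitation to match a monthly total.

    The daily reanalysis values keep their temporal pattern but are
    multiplied by one weight so they sum to the climatologically aided
    interpolated (CAI) monthly value for that grid cell.
    """
    daily = np.asarray(daily, float)
    total = daily.sum()
    if total == 0:                      # assumed fallback for a dry
        return np.full_like(daily, monthly_target / len(daily))  # month
    return daily * (monthly_target / total)

# Four hypothetical reanalysis days scaled to a 20 mm CAI monthly total
scaled = scale_daily_to_monthly([1, 2, 3, 4], 20.0)
```

The orographic VDELB enhancement would then redistribute these scaled values within each 0.5° cell.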
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1993-01-01
A unique formulation of describing fluid motion is presented. The method, referred to as 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding numerical diffusion resulting from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to geometry and variable interpolations used by the previous Lagrangian methods. The present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.
Gridded National Inventory of U.S. Methane Emissions
NASA Technical Reports Server (NTRS)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel;
2016-01-01
We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
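The disaggregation step can be caricatured as proportional allocation of a national total over a spatial proxy. The proxy field and total below are hypothetical, and the actual inventory combines many source-specific proxies and temporal profiles:

```python
import numpy as np

def disaggregate(national_total, proxy):
    """Allocate a national emission total onto a grid in proportion to
    a spatial proxy (e.g. facility or well counts per cell).

    Each source type would use its own proxy; by construction the
    allocated field sums back to the national total.
    """
    proxy = np.asarray(proxy, float)
    return national_total * proxy / proxy.sum()

# Hypothetical 2x2 grid: proxy could be well counts per cell
field = disaggregate(100.0, [[1, 1], [2, 0]])
```

Summing the allocated fields over all source types reproduces the GHGI national totals, which is the consistency property the abstract emphasizes.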
Gridded national inventory of U.S. methane emissions
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...
2016-11-16
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Gridded National Inventory of U.S. Methane Emissions.
Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L
2016-12-06
We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.
2016-01-01
Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to tensor product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although being more costly to implement, it is shown that the LG operators are significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
Spatial interpolation techniques using R
Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...
Kriging - a challenge in geochemical mapping
NASA Astrophysics Data System (ADS)
Stojdl, Jiri; Matys Grygar, Tomas; Elznicova, Jitka; Popelka, Jan; Vachova, Tatina; Hosek, Michal
2017-04-01
Geochemists can easily provide datasets for contamination mapping thanks to recent advances in geographical information systems (GIS) and portable chemical-analytical instrumentation. Kriging is commonly used to visualise the results of such mapping. It is understandable, as kriging is a well-established method of spatial interpolation. It was created in the 1950s for geochemical data processing to estimate the most likely distribution of gold based on samples from a few boreholes. However, kriging is based on the assumption of a continuous spatial distribution of numeric data, which is not realistic in environmental geochemistry. The use of kriging is correct when the data density is sufficient with respect to the heterogeneity of the spatial distribution of the geochemical parameters. However, if anomalous geochemical values are focused in hotspots whose boundaries are insufficiently densely sampled, kriging can provide misleading maps, with the real contours of hotspots blurred by data smoothing and individual (isolated) but relevant anomalous values levelled out. The data smoothing can thus result in underestimation of geochemical extremes, which may in fact be of the greatest importance in mapping projects. In our study we characterised hotspots of contamination by uranium and zinc in the floodplain of the Ploučnice River. The first objective of our study was to compare three methods of sampling as the basis for pollution maps: random (based on stochastic generation of sampling points), systematic (square grid) and judgemental sampling (based on judgement stemming from principles of fluvial deposition). The first problem encountered in producing the maps was reducing the smoothing effect of kriging by choosing an appropriate function for the empirical semivariogram and setting the variation at microscales smaller than the sampling distances (the "nugget" parameter of the semivariogram) to a minimum. 
Exact interpolators such as Inverse Distance Weighting (IDW) or Radial Basis Functions (RBF) provide better solutions in this respect. The second problem detected was the heterogeneous structure of the floodplain: it consists of distinct sedimentary bodies (e.g., natural levees, meander scars, point bars), which have been formed by different processes (erosion or deposition on flooding, channel shifts by meandering, channel abandonment). Interpolating across these sedimentary bodies therefore makes little sense. A solution is to identify the boundaries between sedimentary bodies and to interpolate the data with this additional information, using exact interpolators with barriers (IDW, RBF or stratified kriging) or regression kriging. Those boundaries can be identified using, e.g., a digital elevation model (DEM), dipole electromagnetic profiling (DEMP), gamma spectrometry, or the expertise of a geomorphologist.
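The exactness property that distinguishes IDW from smoothing kriging can be seen in a few lines. A minimal pure-Python sketch, with hypothetical sample points and concentration values (not data from the study):

```python
import math

def idw(points, values, x, y, power=2.0):
    """Inverse Distance Weighting: a weighted average that honours each
    sample exactly, so isolated anomalous values are not smoothed away."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return v  # exact interpolator: the sample value is returned as-is
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

At a sample location the measured value is reproduced exactly; elsewhere the estimate stays within the range of the neighbouring samples.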
Hou, Deyi; O'Connor, David; Nathanail, Paul; Tian, Li; Ma, Yan
2017-12-01
Heavy metal soil contamination is associated with potential toxicity to humans or ecotoxicity. Scholars have increasingly used a combination of geographical information science (GIS) with geostatistical and multivariate statistical analysis techniques to examine the spatial distribution of heavy metals in soils at a regional scale. A review of such studies showed that most soil sampling programs were based on grid patterns and composite sampling methodologies. Many programs intended to characterize various soil types and land use types. The most often used sampling depth intervals were 0-0.10 m, or 0-0.20 m, below surface; and the sampling densities used ranged from 0.0004 to 6.1 samples per km², with a median of 0.4 samples per km². The most widely used spatial interpolators were inverse distance weighted interpolation and ordinary kriging; and the most often used multivariate statistical analysis techniques were principal component analysis and cluster analysis. The review also identified several determining and correlating factors in heavy metal distribution in soils, including soil type, soil pH, soil organic matter, land use type, Fe, Al, and heavy metal concentrations. The major natural and anthropogenic sources of heavy metals were found to derive from lithogenic origin, roadway and transportation, atmospheric deposition, wastewater and runoff from industrial and mining facilities, fertilizer application, livestock manure, and sewage sludge. This review argues that the full potential of integrated GIS and multivariate statistical analysis for assessing heavy metal distribution in soils on a regional scale has not yet been fully realized.
It is proposed that future research be conducted to map multivariate results in GIS to pinpoint specific anthropogenic sources, to analyze temporal trends in addition to spatial patterns, to optimize modeling parameters, and to expand the use of different multivariate analysis tools beyond principal component analysis (PCA) and cluster analysis (CA). Copyright © 2017 Elsevier Ltd. All rights reserved.
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
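The bilinear head interpolation proposed above can be sketched as follows; the four corner heads and the fractional cell coordinates are hypothetical inputs, not values from the paper:

```python
def bilinear(h00, h10, h01, h11, tx, ty):
    """Bilinear interpolation of head between four large-scale cell
    centers; tx, ty are fractional positions (0-1) within the cell."""
    bottom = h00 + tx * (h10 - h00)      # interpolate along one row
    top = h01 + tx * (h11 - h01)         # and along the next row
    return bottom + ty * (top - bottom)  # then between the rows
```

At the corners the scheme reproduces the cell-center heads, and at the cell midpoint it returns their average, which is the behaviour needed for a consistent perimeter boundary condition.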
NASA Astrophysics Data System (ADS)
Kyselý, Jan; Plavcová, Eva
2010-12-01
The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in the tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is also found for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and limitations with respect to its applicability for evaluating RCMs stem primarily from (1) insufficient density of information from station observations used for the interpolation, including the fact that the stations available may not be representative for a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essential for more reliable validation of climate models against recent climate on a continental scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L
The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
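One common way to turn a population-surface value and a proportion-surface value into an estimate with a confidence interval is the normal approximation to the binomial; whether the study used exactly this form is not stated, so the function below (and its inputs) is an illustrative sketch:

```python
import math

def agesex_estimate(total_pop, proportion, z=1.96):
    """Binomial model: expected subgroup count and a normal-approximation
    confidence interval, given a small-area population count and a
    proportion-surface value (illustrative form, not the study's code)."""
    mean = total_pop * proportion
    se = math.sqrt(total_pop * proportion * (1.0 - proportion))
    return mean, (mean - z * se, mean + z * se)
```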
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Techniques for selecting interpolation points are also discussed.
Construction of a 3-arcsecond digital elevation model for the Gulf of Maine
Twomey, Erin R.; Signell, Richard P.
2013-01-01
A system-wide description of the seafloor topography is a basic requirement for most coastal oceanographic studies. The necessary detail of the topography obviously varies with application, but for many uses, a nominal resolution of roughly 100 m is sufficient. Creating a digital bathymetric grid with this level of resolution can be a complex procedure due to a multiplicity of data sources, data coverages, datums and interpolation procedures. This report documents the procedures used to construct a 3-arcsecond (approximately 90-meter grid cell size) digital elevation model for the Gulf of Maine (71°30' to 63° W, 39°30' to 46° N). We obtained elevation and bathymetric data from a variety of American and Canadian sources, converted all data to the North American Datum of 1983 for horizontal coordinates and the North American Vertical Datum of 1988 for vertical coordinates, used a combination of automatic and manual techniques for quality control, and interpolated gaps using a surface-fitting routine.
Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2003-01-01
A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in excessive use of computational resources and diminished robustness. A new approach, Anisotropic Lagrange Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster, more robust, and produces higher quality grids than source term hybridization.
A grid-based tropospheric product for China using a GNSS network
NASA Astrophysics Data System (ADS)
Zhang, Hongxing; Yuan, Yunbin; Li, Wei; Zhang, Baocheng; Ou, Jikun
2017-11-01
Tropospheric delay accounts for one source of error in global navigation satellite systems (GNSS). To better characterize the tropospheric delays in the temporal and spatial domain and facilitate the safety-critical use of GNSS across China, a method is proposed to generate a grid-based tropospheric product (GTP) using the GNSS network with an empirical tropospheric model, known as IGGtrop. The prototype system generates the GTPs in post-processing and real-time modes and is based on the undifferenced and uncombined precise point positioning (UU-PPP) technique. GTPs are constructed on a grid (2.0° × 2.5° latitude-longitude) over China with a time resolution of 5 min. The real-time GTP messages are encoded in a self-defined RTCM3 format and broadcast to users using NTRIP (networked transport of RTCM via internet protocol), which enables efficient and safe transmission to real-time users. Our approach for GTP generation consists of three sequential steps. In the first step, GNSS-derived zenith tropospheric delays (ZTDs) for a network of GNSS stations are estimated using UU-PPP. In the second step, vertical adjustments for the GNSS-derived ZTDs are applied to address the height differences between the GNSS stations and grid points. The ZTD height corrections are provided by the IGGtrop model. Finally, an inverse distance weighting method is used to interpolate the GNSS-derived ZTDs from the surrounding GNSS stations to the location of the grid point. A total of 210 global positioning system (GPS) stations from the crustal movement observation network of China are used to generate the GTPs in both post-processing and real-time modes. The accuracies of the GTPs are assessed against ERA-Interim-derived ZTDs and the GPS-derived ZTDs at 12 test GPS stations, respectively. The results show that the post-processing and real-time GTPs can provide the ZTDs with accuracies of 1.4 and 1.8 cm, respectively.
We also apply the GTPs in real-time kinematic GPS PPP, and the results show that the convergence time of the PPP solutions is shortened. These results confirm that the GTPs can act as an efficient information source to augment GNSS positioning over China.
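Steps two and three of the GTP scheme (vertical adjustment of each station ZTD, then inverse distance weighting to the grid point) might look like the following. The exponential height scaling with an ~8 km scale height is a stand-in assumption for the IGGtrop-based correction, and all station values are hypothetical:

```python
import math

def ztd_to_grid_height(ztd_station, h_station, h_grid, scale_height=8000.0):
    """Adjust a station ZTD (m) to the grid-point height (m). The exponential
    decay is a hypothetical simplification of the IGGtrop correction."""
    return ztd_station * math.exp(-(h_grid - h_station) / scale_height)

def grid_ztd(stations, h_grid, x_grid, y_grid, power=2.0):
    """Height-adjust each station ZTD, then inverse-distance-weight the
    adjusted values to the grid point. stations: (x, y, height, ztd)."""
    num = den = 0.0
    for x, y, h, ztd in stations:
        adj = ztd_to_grid_height(ztd, h, h_grid)
        d2 = (x - x_grid) ** 2 + (y - y_grid) ** 2
        if d2 == 0.0:
            return adj  # grid point coincides with a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * adj
        den += w
    return num / den
```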
Pricing and simulation for real estate index options: Radial basis point interpolation
NASA Astrophysics Data System (ADS)
Gong, Pu; Zou, Dong; Wang, Jiayue
2018-06-01
This study employs the meshfree radial basis point interpolation (RBPI) method for pricing real estate derivatives contingent on a real estate index. This method combines radial and polynomial basis functions, which guarantees an interpolation scheme with the Kronecker delta property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
NASA Astrophysics Data System (ADS)
Machiwal, Deepesh; Gupta, Ankit; Jha, Madan Kumar; Kamble, Trupti
2018-04-01
This study investigated trends in 35 years (1979-2013) of temperature (maximum, Tmax, and minimum, Tmin) and rainfall at annual and seasonal (pre-monsoon, monsoon, post-monsoon, and winter) scales for 31 grid points in a coastal arid region of India. Box-whisker plots of the annual temperature and rainfall time series depict systematic spatial gradients. Trends were examined by applying eight tests: Kendall rank correlation (KRC), Spearman rank order correlation (SROC), Mann-Kendall (MK), four modified MK tests, and innovative trend analysis (ITA). Trend magnitudes were quantified by Sen's slope estimator, and a new method was adopted to assess the significance of linear trends in MK-test statistics. It was found that significant serial correlation is prominent in the annual and post-monsoon Tmax and Tmin, and pre-monsoon Tmin. The KRC and MK tests yielded similar results in close resemblance with the SROC test. The performance of the two modified MK tests considering variance-correction approaches was found superior to the KRC, MK, modified MK with pre-whitening, and ITA tests. The performance of the original MK test is poor due to the presence of serial correlation, whereas the ITA method is over-sensitive in identifying trends. Significantly increasing trends are more prominent in Tmin than Tmax. Further, both the annual and monsoon rainfall time series have a significantly increasing trend of 9 mm year-1. The sequential significance of the linear trend in MK test-statistics is very strong (R² ≥ 0.90) in the annual and pre-monsoon Tmin (90% of grid points), and strong (R² ≥ 0.75) in monsoon Tmax (68% of grid points), monsoon, post-monsoon, and winter Tmin (respectively 65, 55, and 48% of grid points), as well as in the annual and monsoon rainfalls (respectively 68 and 61% of grid points). Finally, this study recommends use of the variance-corrected MK test for the precise identification of trends.
It is emphasized that the rising Tmax may hamper crop growth due to enhanced metabolic activities and shortened crop duration. Likewise, increased Tmin may result in lower crop and biomass yields owing to increased respiration.
Estimating monthly temperature using point based interpolation techniques
NASA Astrophysics Data System (ADS)
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia using data from the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
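One of the two RBF models compared, the multiquadric, can be fitted by solving a small linear system; because RBF is an exact interpolator, its in-sample RMSE is zero, and out-of-sample RMSE (e.g. leave-one-out over stations) is what distinguishes the models. A sketch with hypothetical temperatures, assuming NumPy is available:

```python
import numpy as np

def rbf_multiquadric_fit(pts, vals, c=1.0):
    """Fit a multiquadric RBF interpolant: solve A w = vals, where
    A[i, j] = sqrt(|p_i - p_j|^2 + c^2). Returns the weight vector."""
    pts = np.asarray(pts, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return np.linalg.solve(np.sqrt(d ** 2 + c ** 2), np.asarray(vals, float))

def rbf_multiquadric_eval(pts, w, x, c=1.0):
    """Evaluate the fitted interpolant at a point x."""
    d = np.linalg.norm(np.asarray(pts, float) - np.asarray(x, float), axis=-1)
    return float(np.sqrt(d ** 2 + c ** 2) @ w)

def rmse(obs, pred):
    """Root Mean Square Error used to score the interpolators."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))
```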
Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington
Johnson, Kenneth H.
2016-09-27
This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes which are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (from 2009 through part of 2012) but average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants—hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14—all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were converted to log-normal distributions and these transformed values were averaged for each well location. The log-normally linearized well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area but limited to avoid unrepresentative extrapolation, using the minimum curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were interpreted by interpolating the log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at an appropriate established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage.
These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) at the top and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells' saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented. "Upper limit" plume delineations were calculated for each contaminant using the same procedure as the "average" plume extent except with values at each well that are set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; "lower limit" plumes are calculated at a 5-percent confidence limit around the geometric mean.
These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time varying concentrations and analytical errors.The plume delineations developed in this study are similar to the previous plume descriptions developed by U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understandings of contaminant impact or remediation.
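The log-normal treatment of well averages might be sketched as below. The one-sided z-value of 1.645 for the 95-/5-percent limits is an assumed reading of the report's procedure, and the concentrations are invented:

```python
import math

def lognormal_well_stats(concs, z=1.645):
    """Geometric mean of one well's concentrations plus upper/lower
    confidence limits on the log-space mean (z = 1.645, one-sided 95%,
    is an assumption; the report's exact statistics may differ)."""
    logs = [math.log(c) for c in concs]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((u - mean) ** 2 for u in logs) / (n - 1)) if n > 1 else 0.0
    se = sd / math.sqrt(n)  # standard error of the log-space mean
    return math.exp(mean), math.exp(mean - z * se), math.exp(mean + z * se)
```

Interpolating the upper- and lower-limit values well by well, as the report notes, ignores correlation among wells, which is why those maps are considered unrealistically wide bounds.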
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
Spatial Scale Variability of NH3 and Impacts to interpolated Concentration Grids
Over the past decade, reduced nitrogen (NH3, NH4) has become an important component of atmospheric nitrogen deposition due to increases in agricultural activities and reductions in oxidized sulfur and nitrogen emissions from the power sector and mobile sources. Reduced nitrogen i...
An automated system to simulate the River discharge in Kyushu Island using the H08 model
NASA Astrophysics Data System (ADS)
Maji, A.; Jeon, J.; Seto, S.
2015-12-01
Kyushu Island is located in the southwestern part of Japan, and it is often affected by typhoons and a Baiu front. Severe water-related disasters have been recorded in Kyushu Island. On the other hand, because of the high population density and the needs of crop growth, water resources are an important issue for Kyushu Island. The simulation of river discharge is important for water resource management and early warning of water-related disasters. This study attempts to apply the H08 model to simulate river discharge in Kyushu Island. Geospatial meteorological and topographical data were obtained from the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) and the Automated Meteorological Data Acquisition System (AMeDAS) of the Japan Meteorological Agency (JMA). The number of AMeDAS observation stations is limited and not quite satisfactory for the application of water resources models in Kyushu. It is necessary to spatially interpolate the point data to produce a grid dataset. The meteorological grid dataset is produced by considering elevation dependence. Solar radiation is estimated from hourly sunshine duration by a conventional formula. We improved the accuracy of the interpolated data simply by considering elevation dependence and found that the bias is related to geographical location. The rain/snow classification is done by the H08 model and is validated by comparing estimated and observed snow rates. The estimates tend to be larger than the corresponding observed values. A system to automatically produce a daily meteorological grid dataset is being constructed. The geospatial river network data were produced with ArcGIS and utilized in the H08 model to simulate river discharge. First, this research compares simulated and measured specific discharge, which is the ratio of discharge to watershed area. Significant errors between simulated and measured data were seen in some rivers.
Second, the outputs of the coupled model, including the crop growth module and the reservoir operation module, were analyzed. However, there are differences between the simulated and measured values. We need to improve dam operation, artificial water intake, and parameters such as the depth of the soil and the flow velocity in the river.
Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Gu, L.
2016-12-01
Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be efficiently evaluated a sufficiently large number of times in the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for the CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in the CLM4.5 for the US-MOz site.
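The surrogate idea can be illustrated in one dimension: interpolate an expensive model at a few nodes, then search the cheap polynomial densely. The paper uses Smolyak sparse grids in several dimensions; this 1-D Lagrange version, and the stand-in objective playing the role of a CLM4.5 misfit, are simplifications:

```python
import math

def lagrange_surrogate(nodes, fvals):
    """Polynomial surrogate via Lagrange interpolation; the paper builds
    the analogous surrogate on a Smolyak sparse grid in more dimensions."""
    def p(x):
        total = 0.0
        for i, (xi, fi) in enumerate(zip(nodes, fvals)):
            term = fi
            for j, xj in enumerate(nodes):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def cheap_global_min(p, lo, hi, n=2001):
    """Dense search on the cheap surrogate instead of the expensive model."""
    xs = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(xs, key=p)

# Hypothetical expensive objective standing in for a CLM4.5 misfit:
f = lambda x: (x - 0.3) ** 2 + 0.1 * math.cos(5.0 * x)
nodes = [-1.0 + 2.0 * k / 8 for k in range(9)]   # nine "simulations"
surr = lagrange_surrogate(nodes, [f(x) for x in nodes])
xstar = cheap_global_min(surr, -1.0, 1.0)        # optimum of the surrogate
```

Only nine evaluations of the "expensive" model are needed; the 2001-point search runs entirely on the surrogate polynomial.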
Hybrid Data Assimilation without Ensemble Filtering
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Akkraoui, Amal El
2014-01-01
The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations, to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we have found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at higher resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional, scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
Stellar Atmospheric Modelling for the ACCESS Program
NASA Astrophysics Data System (ADS)
Morris, Matthew; Kaiser, Mary Elizabeth; Bohlin, Ralph; Kurucz, Robert; ACCESS Team
2018-01-01
A goal of the ACCESS program (Absolute Color Calibration Experiment for Standard Stars) is to enable greater discrimination between theoretical astrophysical models and observations, where the comparison is limited by systematic errors associated with the relative flux calibration of the targets. To achieve these goals, ACCESS has been designed as a sub-orbital rocket-borne payload and ground calibration program, to establish absolute flux calibration of stellar targets at <1% precision, with a resolving power of 500 across the 0.35 to 1.7 micron bandpass. In order to obtain higher resolution spectroscopy in the optical and near-infrared range than either the ACCESS payload or CALSPEC observations provide, the ACCESS team has conducted a multi-instrument observing program at Apache Point Observatory. Using these calibrated high resolution spectra in addition to the HST/CALSPEC data, we have generated stellar atmosphere models for ACCESS flight candidates, as well as a selection of A and G stars from the CALSPEC database. Stellar atmosphere models were generated using the Atlas 9 and Atlas 12 Kurucz stellar atmosphere software. The effective temperature, log(g), metallicity, and reddening were varied and the chi-squared statistic was minimized to obtain a best-fit model. A comparison of these models and the results from interpolation between grids of existing models will be presented. The impact of the flexibility of the Atlas 12 input parameters (e.g. solar metallicity fraction, abundances, microturbulent velocity) is being explored.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and variable time-step. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
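A variable-spacing staggered-grid update can be sketched in 1-D acoustics; the paper's scheme is higher-order and poroelastic, so this shows only the structural idea (per-cell spacing, velocity nodes on cell faces), with invented parameters:

```python
def step_staggered_1d(p, v, dt, dx, rho=1.0, kappa=1.0):
    """One leapfrog update of a 1-D acoustic staggered grid with a
    per-cell spacing dx[i]; velocity nodes sit on the faces between
    pressure nodes (structural sketch only, not the paper's scheme)."""
    for i in range(len(v)):                      # faces i+1/2
        dxi = 0.5 * (dx[i] + dx[i + 1])          # local spacing at the face
        v[i] += dt / (rho * dxi) * (p[i + 1] - p[i])
    for i in range(1, len(p) - 1):               # interior pressure nodes
        p[i] += dt * kappa / dx[i] * (v[i] - v[i - 1])
    return p, v
```

A basic sanity check on such a scheme is that a spatially constant pressure field produces no motion, regardless of how the grid spacing varies.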
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis
NASA Astrophysics Data System (ADS)
Kurtulus, Bedri; Flipo, Nicolas
2012-01-01
The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km2 agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each of these is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
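The "error cell" criterion above reduces to a simple elementwise comparison between the interpolated head grid and the ground surface. A minimal sketch, with illustrative arrays that are assumptions rather than the study's data:

```python
import numpy as np

def count_error_cells(head, soil_elevation):
    """Count grid cells where the interpolated hydraulic head lies above the ground surface."""
    head = np.asarray(head, dtype=float)
    soil = np.asarray(soil_elevation, dtype=float)
    return int(np.sum(head > soil))

# Illustrative 2x2 grids: heads and soil elevations in meters (hypothetical values)
head = np.array([[10.2, 9.8], [11.5, 8.9]])
soil = np.array([[10.0, 10.0], [11.0, 10.0]])
n_err = count_error_cells(head, soil)  # two cells are physically implausible
```

Among candidate interpolators, the one minimizing this count would be retained as the best model, mirroring the selection step described above.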
Distributed snow modeling suitable for use with operational data for the American River watershed.
NASA Astrophysics Data System (ADS)
Shamir, E.; Georgakakos, K. P.
2004-12-01
The mountainous terrain of the American River watershed (~4300 km2) on the western slope of the northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls the snow accumulation and ablation processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snowpack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi-distributed manner (modeling units of about 100-1000 km2) and therefore lumps distinct spatial variability of snow processes. In this study we attempt to account for the spatial variability of precipitation, temperature, and radiation by constructing a distributed snow accumulation and melting model suitable for use with commonly available sparse data. An adaptation of the NWS-Snow17 energy and mass balance model that is used operationally at the NWS River Forecast Centers is implemented at 1-km2 grid cells with distributed input and model parameters. The input to the model (i.e., precipitation and surface temperature) is interpolated from observed point data. The surface temperature was interpolated over the basin based on adiabatic lapse rates using topographic information, whereas the precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the snow seasons of 1999-2000. Validation of the Snow Water Equivalent (SWE) prediction is done by comparing to observations from 12 snow sensors. The Snow Cover Area (SCA) prediction was evaluated by comparing to remotely sensed 500-m daily snow cover derived from MODIS.
The results show that the distribution of snow over the area is well captured and that the quantities, compared to the snow gauges, are well estimated at high elevations.
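The lapse-rate temperature interpolation described above can be sketched as follows; the lapse-rate value, station reading, and DEM are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

# Standard environmental lapse rate, degC per meter (an assumed value;
# the study may use locally derived rates)
LAPSE_RATE = -6.5 / 1000.0

def distribute_temperature(t_station, z_station, dem):
    """Extrapolate a point temperature onto a DEM grid with a linear lapse rate."""
    dem = np.asarray(dem, dtype=float)
    return t_station + LAPSE_RATE * (dem - z_station)

# Hypothetical 3x3 DEM (elevations in m) and one station at 1500 m reading 2.0 degC
dem = np.array([[1500.0, 1700.0, 2000.0],
                [1600.0, 1800.0, 2100.0],
                [1500.0, 1900.0, 2200.0]])
t_grid = distribute_temperature(2.0, 1500.0, dem)
```

With several stations, the same idea is typically applied per station and the results distance-weighted, but the elevation correction shown here is the core of the lapse-rate approach.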
Surface electric fields for North America during historical geomagnetic storms
Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.
2013-01-01
To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
Multiscale image processing and antiscatter grids in digital radiography.
Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D
2009-01-01
Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.
Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods
NASA Astrophysics Data System (ADS)
Pervez, M.; Henebry, G. M.
2010-12-01
In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighting and spline, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between the rainfall and covariates at daily scales in this region. Inverse distance weighting produced better results than the spline methods. For the days with extreme or high rainfall (spatially and quantitatively), the correlation between observed and interpolated estimates appeared to be high (r2 ~ 0.6, RMSE ~ 10 mm), although for low-rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties in subsequent hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
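As a reference for the best-performing univariate method above, here is a minimal inverse-distance-weighting sketch; the power parameter and gauge layout are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0, eps=1e-12):
    """Estimate a value at xy_target as an inverse-distance-weighted mean of gauges."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < eps):                  # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                   # closer gauges get larger weights
    return float(np.sum(w * values) / np.sum(w))

# Three hypothetical gauges (x, y in arbitrary units) and their daily rainfall (mm)
gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
rain = np.array([10.0, 20.0, 30.0])
estimate = idw(gauges, rain, np.array([0.5, 0.5]))
```

In practice a search radius limits the set of neighboring gauges, as noted in the abstract; this sketch weights all gauges for brevity.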
A simple algorithm to improve the performance of the WENO scheme on non-uniform grids
NASA Astrophysics Data System (ADS)
Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong
2018-02-01
This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS scheme (the WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1996), designed on uniform grids, in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. At the same time, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)
NASA Astrophysics Data System (ADS)
Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni
2016-11-01
We present a numerical platform named Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded in any region of the large-scale Mediterranean Forecasting System (MFS) via downscaling. We employ CTD data collected during a campaign around Elba Island to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model compared to the coarse-resolution MFS model.
TopoSCALE v.1.0: downscaling gridded climate data in complex terrain
NASA Astrophysics Data System (ADS)
Fiddes, J.; Gruber, S.
2014-02-01
Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps, in comparison with unscaled grid-level data and a set of reference methods. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation.
This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible, due to lack of observations (i.e. remote areas or future periods).
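The core downscaling step, interpolating pressure-level variables to subgrid elevations, can be sketched as follows; the level heights and temperatures are illustrative assumptions, not TopoSCALE's actual forcing data:

```python
import numpy as np

def downscale_column(level_heights, level_values, dem_elevations):
    """Linearly interpolate atmospheric-column values to fine-scale DEM elevations."""
    level_heights = np.asarray(level_heights, dtype=float)
    level_values = np.asarray(level_values, dtype=float)
    order = np.argsort(level_heights)   # np.interp requires ascending x
    return np.interp(dem_elevations, level_heights[order], level_values[order])

# Hypothetical pressure-level heights (m) and temperatures (degC), upper levels first
heights = np.array([3000.0, 1500.0, 500.0])
temps = np.array([-10.0, 0.0, 6.5])
dem = np.array([800.0, 1500.0, 2250.0])   # elevations of three subgrid cells
t_sub = downscale_column(heights, temps, dem)
```

Because the column itself resolves inversions and varying lapse rates, this avoids assuming a single fixed lapse rate, which is a key advantage the abstract attributes to pressure-level variables.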
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
NASA Astrophysics Data System (ADS)
Valverde, Clodoaldo; Baseia, Basílio
2018-01-01
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other Hamiltonians of this type. It works with two interpolating parameters, rather than the traditional single parameter. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
High-temperature behavior of a deformed Fermi gas obeying interpolating statistics.
Algin, Abdullah; Senay, Mustafa
2012-04-01
An outstanding idea originally introduced by Greenberg is to investigate whether there is equivalence between intermediate statistics, which may differ from anyonic statistics, and q-deformed particle algebras. A model built to address such an idea could also provide new insight into the interactions of particles as well as their internal structures. Motivated mainly by this idea, in this work we consider a q-deformed Fermi gas model whose statistical properties enable us to study interpolating statistics effectively. Starting with a generalized Fermi-Dirac distribution function, we derive several thermostatistical functions of a gas of these deformed fermions in the thermodynamical limit. We study the high-temperature behavior of the system by analyzing the effects of q-deformation on the most important thermostatistical characteristics of the system, such as the entropy, specific heat, and equation of state. It is shown that such a deformed fermion model in two and three spatial dimensions exhibits interpolating statistics in a specific interval of the model deformation parameter, 0 < q < 1. In particular, for two and three spatial dimensions, it is found from the behavior of the third virial coefficient of the model that the deformation parameter q interpolates completely between attractive and repulsive systems, including the free boson and fermion cases. From the results obtained in this work, we conclude that such a model could provide physical insight into some interacting theories of fermions, and could be useful for further study of particle systems with intermediate statistics.
Construction of Gridded Daily Weather Data and its Use in Central-European Agroclimatic Study
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Trnka, M.; Skalak, P.
2013-12-01
The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high resolution meteorological inputs in terms of the time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide realistic representation of statistical structure of the surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as an input to the follow-up simulation models. One of the downscaling approaches, which is employed also here, is based on a weather generator (WG), which is calibrated using the observed weather series, interpolated, and then modified according to the GCM- or RCM-based climate change scenarios. The present contribution, in which the parametric daily weather generator M&Rfi is linked to the high-resolution RCM output (ALADIN-Climate/CZ model) and GCM-based climate change scenarios, consists of two parts: The first part focuses on a methodology. Firstly, the gridded WG representing the baseline climate is created by merging information from observations and high resolution RCM outputs. In this procedure, WG is calibrated with RCM-simulated multi-variate weather series, and the grid specific WG parameters are then de-biased by spatially interpolated correction factors based on comparison of WG parameters calibrated with RCM-simulated weather series vs. spatially scarcer observations. To represent the future climate, the WG parameters are modified according to the 'WG-friendly' climate change scenarios. 
These scenarios are defined in terms of changes in WG parameters and include, apart from changes in the means, changes in WG parameters which represent additional characteristics of the weather series (e.g. probability of wet-day occurrence and lag-1 autocorrelation of daily mean temperature). The WG-friendly scenarios for the present experiment are based on a comparison of future vs. baseline surface weather series simulated by GCMs from the CMIP3 database. The second part will present results of a climate change impact study based on the above methodology applied to Central Europe. The changes in selected climatic characteristics (focusing on the extreme precipitation and temperature characteristics) and agroclimatic characteristics (including the number of days during the vegetation season with heat and drought stresses) will be analysed. In discussing the results, the emphasis will be put on the 'added value' of various aspects of the above methodology (e.g. inclusion of changes in 'advanced' WG parameters into the climate change scenarios). Acknowledgements: The present experiment is made within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), and VALUE (COST ES 1102 action).
NASA Astrophysics Data System (ADS)
Jin, G.
2012-12-01
Multiphase flow modeling is an important numerical tool for a better understanding of transport processes in fields including, but not limited to, petroleum reservoir engineering, remediation of groundwater contamination, and risk evaluation of greenhouse gases such as CO2 injected into deep saline reservoirs. However, accurate numerical modeling of multiphase flow still presents many challenges, which arise from the inherent tight coupling and strongly non-linear nature of the governing equations and the highly heterogeneous media. Counter-current flow, caused by adverse relative mobility contrasts and by gravitational and capillary forces, introduces additional numerical instability. Recently, multipoint flux approximation (MPFA) has become a subject of extensive research and has demonstrated great success in reducing grid-orientation effects compared to the conventional single-point upstream (SPU) weighting scheme, especially in higher dimensions. However, the presently available MPFA schemes are mathematically targeted to certain types of grids in two dimensions; a more general form of MPFA scheme is needed for both 2-D and 3-D problems. In this work a new upstream weighting scheme based on multipoint directional incoming fluxes is proposed, which incorporates the full permeability tensor to account for the heterogeneity of the porous media. First, the multiphase governing equations are decoupled into an elliptic pressure equation and a hyperbolic or parabolic saturation equation, depending on whether gravitational and capillary pressures are present. Next, a dual secondary grid (called the finite volume grid) is formulated from a primary grid (called the finite element grid) to create interaction regions for each grid cell over the entire simulation domain.
Such a discretization must ensure the conservation of mass and maintain the continuity of the Darcy velocity across the boundaries between neighboring interaction regions. The pressure field is then implicitly calculated from the pressure equation, which in turn yields the derived velocity field for directional flux calculation at each grid node. Directional flux at the center of each interaction surface is also calculated by interpolation from the element nodal fluxes using shape functions. The MPFA scheme is performed by a specific linear combination of all incoming fluxes into the upstream cell, represented by either nodal fluxes or interpolated surface boundary fluxes, to produce an upwind directional-flux-weighted relative mobility at the center of the interaction region boundary. This upwind-weighted relative mobility is then used for calculating the saturations of each fluid phase explicitly. The proposed upwind weighting scheme has been implemented in a mixed finite element-finite volume (FE-FV) method, which allows for handling complex reservoir geometry with second-order accuracy in approximating the primary variables. The numerical solver has been tested on several benchmark test problems. The application of the proposed scheme to migration path analysis of CO2 injected into deep saline reservoirs in 3-D has demonstrated its ability and robustness in handling multiphase flow with adverse mobility contrast in highly heterogeneous porous media.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
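For context, the velocity-area discharge computation that the IVE operates on can be sketched with the midsection method: each vertical contributes depth x velocity x station width to the total. The station positions, depths, and velocities below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def midsection_discharge(stations, depths, velocities):
    """Midsection method: each vertical represents half the gap to its neighbors."""
    x = np.asarray(stations, dtype=float)
    d = np.asarray(depths, dtype=float)
    v = np.asarray(velocities, dtype=float)
    widths = np.empty_like(x)
    widths[1:-1] = (x[2:] - x[:-2]) / 2.0   # interior verticals
    widths[0] = (x[1] - x[0]) / 2.0         # bank verticals
    widths[-1] = (x[-1] - x[-2]) / 2.0
    return float(np.sum(widths * d * v))

x = [0.0, 2.0, 4.0, 6.0]   # m from bank
d = [0.0, 1.0, 1.2, 0.0]   # depth, m
v = [0.0, 0.5, 0.6, 0.0]   # point velocity, m/s
q = midsection_discharge(x, d, v)   # total discharge, m^3/s
```

The IVE then uses the scatter of these per-vertical observations themselves, rather than tabulated laboratory values, to estimate the uncertainty of q.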
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
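The O(n³) cost mentioned above comes from solving the ordinary Kriging system at each step. A minimal sketch of that system follows; the exponential variogram and its parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def variogram(h, sill=1.0, rng=10.0):
    """Exponential variogram model (assumed form and parameters)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, target):
    """Estimate z at target by solving the (n+1)x(n+1) ordinary Kriging system."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)            # variogram between source points
    A[n, n] = 0.0                       # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)           # the O(n^3) step the abstract refers to
    return float(w[:n] @ z)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
zhat = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
```

The incremental and recursive strategies in the paper avoid rebuilding and re-solving this full system when most source locations repeat between iterations.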
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes with 7- to 13-point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics, and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.
Jia, Peng; Anderson, John D; Leitner, Michael; Rheingans, Richard
2016-01-01
Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder progress toward each of the Millennium Development Goals. The surveyed households in 397 clusters from the 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods, including excess risk, local spatial autocorrelation, and spatial interpolation, were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatially smoothed coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates.
Future intervention activities need to be tailored both to different wealth categories and, nationally, to areas of greater need when resources are limited.
Tomography for two-dimensional gas temperature distribution based on TDLAS
NASA Astrophysics Data System (ADS)
Luo, Can; Wang, Yunchu; Xing, Fei
2018-03-01
Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grid cells, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the number of rays, leveling off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual-ray method combined with the reconstruction algorithms is tested. It is found that the virtual-ray method is effective in improving the accuracy of the reconstruction results, compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual-ray absorption values. According to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
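The virtual-ray step can be sketched as interpolating measured absorbance onto unmeasured ray positions along the ray coordinate. The Gaussian absorption profile and ray layout below are illustrative assumptions, used only to show how interpolation error is assessed against a known profile:

```python
import numpy as np

measured_pos = np.linspace(-1.0, 1.0, 9)       # positions of measured rays
absorbance = np.exp(-4.0 * measured_pos**2)    # assumed smooth absorption profile
virtual_pos = np.linspace(-1.0, 1.0, 33)       # denser set including virtual rays

# Linear interpolation of absorbance onto the virtual-ray positions
linear_est = np.interp(virtual_pos, measured_pos, absorbance)
truth = np.exp(-4.0 * virtual_pos**2)
max_err = float(np.max(np.abs(linear_est - truth)))
```

A cubic spline fit through the same measured points would typically reduce this error for smooth profiles, which is consistent with the paper's finding that cubic spline interpolation performs better.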
Discrete Fourier transforms of nonuniformly spaced data
NASA Technical Reports Server (NTRS)
Swan, P. R.
1982-01-01
Time series or spatial series of measurements taken with nonuniform spacings have failed to yield fully to analysis using the Discrete Fourier Transform (DFT). This is due to the fact that the formal DFT is the convolution of the transform of the signal with the transform of the nonuniform spacings. Two original methods are presented for deconvolving such transforms for signals containing significant noise. The first method solves a set of linear equations relating the observed data to values defined at uniform grid points, and then obtains the desired transform as the DFT of the uniform interpolates. The second method solves a set of linear equations relating the real and imaginary components of the formal DFT directly to those of the desired transform. The results of numerical experiments with noisy data are presented in order to demonstrate the capabilities and limitations of the methods.
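The first method can be sketched in a few lines: relate each nonuniform observation to its neighbouring uniform grid values (here via linear interpolation weights, an assumption for illustration), solve the system by least squares, and take the DFT of the uniform interpolates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform target grid and nonuniform, noisy observations of a 3-cycle sinusoid
N, T = 32, 1.0
t_obs = np.sort(rng.uniform(0, N * T, 64))
y_obs = np.sin(2 * np.pi * 3 * t_obs / (N * T)) + 0.01 * rng.standard_normal(64)

# Linear equations relating each observation to its two neighbouring uniform
# grid values (linear interpolation weights, periodic wrap), solved by lstsq
A = np.zeros((64, N))
k = np.floor(t_obs / T).astype(int) % N
frac = t_obs / T - np.floor(t_obs / T)
A[np.arange(64), k] = 1 - frac
A[np.arange(64), (k + 1) % N] = frac
u, *_ = np.linalg.lstsq(A, y_obs, rcond=None)

# The desired transform is the DFT of the uniform interpolates
U = np.fft.fft(u)
peak_bin = int(np.argmax(np.abs(U[1 : N // 2]))) + 1   # strongest nonzero bin
```

The recovered spectrum peaks at bin 3, the frequency of the underlying sinusoid, despite the nonuniform sampling.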
NASA Technical Reports Server (NTRS)
Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodriguez-Iturbe, I.
1986-01-01
The parameters of the conceptual model are evaluated from the analysis of eight years of summer rainstorm data from the dense raingage network in the Walnut Gulch catchment near Tucson, Arizona. The occurrence of measurable rain at any one of the 93 gages during a noon to noon day defined a storm. The total rainfall at each of the gages during a storm day constituted the data set for a single storm. The data are interpolated onto a fine grid and analyzed to obtain: an isohyetal plot at 2 mm intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of the total storm depth. The description of the data analysis and the computer programs necessary to read the associated data tapes are presented.
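The grid-then-moments analysis described above can be sketched as follows; the gauge coordinates and depth field here are synthetic stand-ins, not the Walnut Gulch data:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)

# Hypothetical gauge locations (km) and storm-day depths (mm) for 93 gauges
xy = rng.uniform(0, 10, size=(93, 2))
depth = 5 + 2 * np.sin(xy[:, 0]) + rng.normal(0, 0.3, 93)

# Interpolate the point depths onto a fine grid
gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
field = griddata(xy, depth, (gx, gy), method="linear")
vals = field[np.isfinite(field)]       # drop cells outside the convex hull

# First three moments of point storm depth over the gridded field
mean_depth = vals.mean()
var_depth = vals.var()
skew_depth = np.mean((vals - mean_depth) ** 3) / var_depth ** 1.5
```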
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Sound-field measurement with moving microphones
Katzberg, Fabrice; Mazur, Radoslaw; Maass, Marco; Koch, Philipp; Mertins, Alfred
2017-01-01
Closed-room scenarios are characterized by reverberation, which decreases the performance of applications such as hands-free teleconferencing and multichannel sound reproduction. However, exact knowledge of the sound field inside a volume of interest enables the compensation of room effects and allows for a performance improvement within a wide range of applications. The sampling of sound fields involves the measurement of spatially dependent room impulse responses, where the Nyquist-Shannon sampling theorem applies in the temporal and spatial domains. The spatial measurement often requires a huge number of sampling points and entails other difficulties, such as the need for exact calibration of a large number of microphones. In this paper, a method for measuring sound fields using moving microphones is presented. The number of microphones is customizable, allowing for a tradeoff between hardware effort and measurement time. The goal is to reconstruct room impulse responses on a regular grid from data acquired with microphones between grid positions, in general. For this, the sound field at equidistant positions is related to the measurements taken along the microphone trajectories via spatial interpolation. The benefits of using perfect sequences for excitation, a multigrid recovery, and the prospects for reconstruction by compressed sensing are presented. PMID:28599533
Geomagnetic cutoffs: A review for space dosimetry applications
NASA Astrophysics Data System (ADS)
Smart, D. F.; Shea, M. A.
1994-10-01
The earth's magnetic field acts as a shield against charged particle radiation from interplanetary space, technically described as the geomagnetic cutoff. The cutoff rigidity problem (except for the dipole special case) has 'no solution in closed form'. The dipole case yields the Stormer equation, which has been repeatedly applied to the earth in hopes of providing useful approximations of cutoff rigidities. Unfortunately the earth's magnetic field has significant deviations from dipole geometry, and the Stormer cutoffs are not adequate for most applications. By application of massive digital computer power it is possible to determine realistic geomagnetic cutoffs derived from high order simulation of the geomagnetic field. Using this technique, 'world grids' of directional cutoffs for the earth's surface and for a limited number of satellite altitudes have been derived. However, this approach is so expensive and time consuming that it is impractical for most spacecraft orbits, and approximations must be used. The world grids of cutoff rigidities are extensively used as lookup tables, normalization points, and interpolation aids to estimate the effective geomagnetic cutoff rigidity of a specific location in space. We review the various options for estimating the cutoff rigidity for earth-orbiting satellites.
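Using a world grid as a lookup table with interpolation might look like the sketch below. The 5° grid and its Stormer-like cos⁴(latitude) values are synthetic assumptions for illustration, not a real cutoff-rigidity table:

```python
import numpy as np

# Hypothetical 5-degree "world grid" of vertical cutoff rigidities (GV);
# values are synthetic, following a Stormer-like cos^4(latitude) dependence.
lats = np.arange(-90.0, 91.0, 5.0)
lons = np.arange(0.0, 360.0, 5.0)
grid = 14.9 * np.cos(np.radians(lats))[:, None] ** 4 * np.ones((1, lons.size))

def cutoff_at(lat, lon):
    """Estimate cutoff rigidity at (lat, lon) by bilinear interpolation."""
    i = int(np.clip(np.searchsorted(lats, lat, side="right") - 1, 0, lats.size - 2))
    j = int((lon % 360.0) // 5.0)
    jp = (j + 1) % lons.size              # wrap in longitude
    fy = (lat - lats[i]) / 5.0
    fx = (lon % 360.0 - lons[j]) / 5.0
    top = (1 - fx) * grid[i, j] + fx * grid[i, jp]
    bot = (1 - fx) * grid[i + 1, j] + fx * grid[i + 1, jp]
    return (1 - fy) * top + fy * bot

equator = cutoff_at(0.0, 120.0)    # maximal cutoff at the equator
mid_lat = cutoff_at(47.5, 120.0)   # interpolated between the 45 and 50 deg rows
```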
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the ending resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene from an end-to-end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator, and the results and errors tabulated and discussed.
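A Hanning-windowed sinc interpolator of the kind described can be sketched as below; the kernel half-width and the test tone are assumptions chosen for illustration:

```python
import numpy as np

def hann_sinc_kernel(x, half_width):
    """Sinc kernel tapered by a Hanning window of the given half-width (samples)."""
    w = np.where(np.abs(x) < half_width,
                 0.5 * (1 + np.cos(np.pi * x / half_width)), 0.0)
    return np.sinc(x) * w

def resample(samples, new_positions, half_width=8):
    """Resample a uniformly sampled signal at arbitrary fractional positions."""
    n = np.arange(samples.size)
    K = hann_sinc_kernel(new_positions[:, None] - n[None, :], half_width)
    return K @ samples

# Demo: a band-limited sinusoid resampled onto a half-sample-shifted grid,
# keeping away from the edges so the kernel support stays inside the data
t = np.arange(64)
x = np.sin(2 * np.pi * t / 16)
shifted = resample(x, t[8:-8] + 0.5)
truth = np.sin(2 * np.pi * (t[8:-8] + 0.5) / 16)
max_err = np.max(np.abs(shifted - truth))
```

The windowing trades a little passband accuracy for strongly reduced spectral leakage relative to a truncated sinc.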
Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling
NASA Astrophysics Data System (ADS)
Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.
2013-09-01
Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for different cell-spaced height grids at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). 
Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
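A class-aware generalisation step of this kind can be sketched as follows. The class codes, the 2 × 2 coarsening factor, and the "barrier wins, keep its maximum height" rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical class codes: BARRIER stands for buildings, kerbs, and other
# flow obstacles whose height must survive coarsening.
TERRAIN, BARRIER = 0, 1

def generalise(labels, heights, factor=2):
    """Coarsen paired label/height grids while preserving barrier cells.

    Within each factor x factor block: if any cell is a barrier, the coarse
    cell becomes a barrier carrying the block's maximum barrier height;
    otherwise it keeps the terrain class with the mean height.
    """
    n, m = labels.shape
    nl = labels.reshape(n // factor, factor, m // factor, factor)
    nh = heights.reshape(n // factor, factor, m // factor, factor)
    out_l = np.zeros((n // factor, m // factor), dtype=int)
    out_h = np.zeros((n // factor, m // factor))
    for i in range(n // factor):
        for j in range(m // factor):
            blk_l, blk_h = nl[i, :, j, :], nh[i, :, j, :]
            if (blk_l == BARRIER).any():
                out_l[i, j] = BARRIER
                out_h[i, j] = blk_h[blk_l == BARRIER].max()
            else:
                out_l[i, j] = TERRAIN
                out_h[i, j] = blk_h.mean()
    return out_l, out_h

# A thin barrier one cell wide survives the coarsening
labels = np.zeros((4, 4), dtype=int)
labels[:, 2] = BARRIER
heights = np.zeros((4, 4))
heights[:, 2] = 1.5
coarse_l, coarse_h = generalise(labels, heights)
```

Plain height averaging would have halved the 1.5 m barrier; the class-aware rule keeps it intact.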
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.
Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.
Yanto; Livneh, Ben; Rajagopalan, Balaji
2017-05-23
We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia at 0.125°×0.125° (~14 km) resolution spanning 30 years from 1985-2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution, derived exclusively from ground-based observations unlike existing satellite or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to only those stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in the cross-validation, which shows that the gridded rainfall presented here gives the most reasonable performance. Visual inspection reveals an increasing performance of gridded precipitation from grid to watershed to island scale. The data set, stored in a network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
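An IDW interpolator with the stated radius and power parameters can be sketched as below; the station layout and rainfall values are made-up examples:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, radius=25.0, power=3.0):
    """Inverse Distance Weighting with a search radius r and power alpha,
    matching the paper's optimal parameters (r = 25 km, alpha = 3)."""
    out = np.full(len(xy_query), np.nan)
    for k, q in enumerate(xy_query):
        d = np.hypot(*(xy_obs - q).T)
        near = d < radius
        if not near.any():
            continue                              # no station within radius
        if (d[near] == 0).any():                  # query coincides with a station
            out[k] = z_obs[near][d[near] == 0][0]
        else:
            w = 1.0 / d[near] ** power
            out[k] = np.sum(w * z_obs[near]) / np.sum(w)
    return out

# Stations at the corners of a 20 km square, daily rainfall in mm
stations = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
rain = np.array([10.0, 10.0, 30.0, 30.0])
est = idw(stations, rain, np.array([[10.0, 10.0]]))   # grid point at the centre
```

At the centre all four stations are equidistant, so the estimate is their plain mean (20 mm); closer stations dominate elsewhere.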
Implementing Extreme Value Analysis in a Geospatial Workflow for Storm Surge Hazard Assessment
NASA Astrophysics Data System (ADS)
Catelli, J.; Nong, S.
2014-12-01
Gridded data of 100-yr (1%) and 500-yr (0.2%) storm surge flood elevations for the United States, Gulf of Mexico, and East Coast are critical to understanding this natural hazard. Storm surge heights were calculated across the study area utilizing SLOSH (Sea, Lake, and Overland Surges from Hurricanes) model data for thousands of synthetic US landfalling hurricanes. Based on the results derived from SLOSH, a series of interpolations were performed using spatial analysis in a geographic information system (GIS) at both the SLOSH basin and the synthetic event levels. The result was a single grid of maximum flood elevations for each synthetic event. This project addresses the need to utilize extreme value theory in a geospatial environment to analyze coincident cells across multiple synthetic events. The results are 100-yr (1%) and 500-yr (0.2%) values for each grid cell in the study area. This talk details a geospatial approach to move raster data to SciPy's NumPy Array structure using the Python programming language. The data are then connected through a Python library to an outside statistical package like R to fit cell values to extreme value theory distributions and return values for specified recurrence intervals. While this is not a new process, the value behind this work is the ability to keep this process in a single geospatial environment and be able to easily replicate this process for other natural hazard applications and extreme event modeling.
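For a single grid cell, the fit-and-return-level step might look like the sketch below, using SciPy's generalized Pareto distribution in place of the unnamed R package; the threshold, shape, and scale are synthetic assumptions:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

# Synthetic event series for one grid cell: exceedances over a threshold u
# are, by construction, generalized Pareto distributed (one per event/year).
u = 2.0                                   # threshold, e.g. metres of surge
exceedances = genpareto.rvs(c=0.1, scale=0.5, size=1000, random_state=rng)

# Fit a GPD to the exceedances and read return levels off the quantiles
c_hat, loc_hat, scale_hat = genpareto.fit(exceedances, floc=0)
rl_100 = u + genpareto.ppf(1 - 1 / 100, c_hat, loc=0, scale=scale_hat)  # 1%
rl_500 = u + genpareto.ppf(1 - 1 / 500, c_hat, loc=0, scale=scale_hat)  # 0.2%
```

Running this per cell over the NumPy array of maxima yields the 100-yr and 500-yr surfaces described in the abstract.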
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
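The subsample-upsample-compare protocol can be sketched with two of the simpler interpolators (nearest neighbour and separable linear, stand-ins for the eight methods evaluated) on a synthetic 2D slice:

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10 * np.log10(peak ** 2 / m)

# Synthetic smooth "slice": subsample by 2, then upsample back two ways
y, x = np.mgrid[0:64, 0:64]
img = 127.5 * (1 + np.sin(x / 6.0) * np.cos(y / 6.0))
low = img[::2, ::2]

# Nearest-neighbour upsampling
up_nn = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

# Separable linear upsampling via np.interp along each axis
coarse, fine = np.arange(0, 64, 2), np.arange(64)
tmp = np.apply_along_axis(lambda c: np.interp(fine, coarse, c), 0, low)
up_lin = np.apply_along_axis(lambda r: np.interp(fine, coarse, r), 1, tmp)

psnr_nn, psnr_lin = psnr(img, up_nn), psnr(img, up_lin)
```

On a smooth image the linear interpolator recovers the HR data with a clearly higher PSNR than nearest neighbour, which is the kind of ranking the study quantifies across all eight methods.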
Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng
2017-01-01
In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method to interpolate with the aid of remote sensing images. The underlying idea is to extend the traditional Kriging by introducing spectral variables, and operating on spatial and spectral combined space. The algorithm has been applied to WSN-measured soil moisture data in HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods are applied: Ordinary Kriging (OK), which used WSN data only, Co-kriging and KED, both of which integrated remote sensing data as covariate. Visual inspections indicate that the result from Extended Kriging shows more spatial details than that of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351
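As a point of reference for the comparison above, a baseline Ordinary Kriging step can be sketched as follows. This is plain OK with an assumed spherical variogram (sill and range would normally be fitted to an empirical variogram), not the paper's Extended Kriging:

```python
import numpy as np

def ordinary_kriging(xy, z, q, sill=1.0, vrange=5.0):
    """Ordinary Kriging prediction at one query point q,
    using a spherical variogram with assumed sill and range."""
    def gamma(h):
        s = np.minimum(h / vrange, 1.0)
        return sill * (1.5 * s - 0.5 * s ** 3)

    n = len(xy)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = gamma(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    K[n, :n] = K[:n, n] = 1.0             # unbiasedness constraint row/column
    rhs = np.append(gamma(np.linalg.norm(xy - q, axis=-1)), 1.0)
    w = np.linalg.solve(K, rhs)
    return w[:n] @ z                      # kriging weights applied to the data

# Soil-moisture-like values at 9 hypothetical WSN nodes on a 3 x 3 grid (km)
xy = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
z = 0.20 + 0.02 * xy[:, 0] + 0.01 * xy[:, 1]       # smooth moisture field

exact = ordinary_kriging(xy, z, xy[4])             # query at a sample point
between = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

Kriging is an exact interpolator, so the prediction at a node reproduces the measurement there; the Extended Kriging of the paper augments this spatial machinery with spectral variables from remote sensing imagery.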
The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis
NASA Astrophysics Data System (ADS)
Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.
2011-05-01
In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to large deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the scopes of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicated that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
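The multiquadric RBF interpolation underlying the method, and its Kronecker delta property, can be illustrated with a minimal scattered-data sketch (the shape parameter c and the test field are assumptions; no polynomial basis is used, as in the NNRPIM):

```python
import numpy as np

def multiquadric_interpolate(nodes, values, queries, c=0.1):
    """Interpolate scattered nodal values with multiquadric RBFs,
    phi(r) = sqrt(r^2 + c^2), without a polynomial basis."""
    r_nn = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    A = np.sqrt(r_nn ** 2 + c ** 2)       # interpolation matrix
    coeffs = np.linalg.solve(A, values)
    r_qn = np.linalg.norm(queries[:, None, :] - nodes[None, :, :], axis=-1)
    return np.sqrt(r_qn ** 2 + c ** 2) @ coeffs

# Random nodes in the unit square (no geometrical restrictions) and a smooth field
rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1, size=(40, 2))
vals = np.sin(np.pi * nodes[:, 0]) * nodes[:, 1]

# Kronecker delta property: the interpolant reproduces nodal values exactly,
# which is what makes essential boundary conditions easy to impose
recovered = multiquadric_interpolate(nodes, vals, nodes)
max_dev = float(np.max(np.abs(recovered - vals)))
```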
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.
2016-01-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478
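The core idea, anchoring a piecewise linear baseline estimate on a few isoelectric points per heartbeat, can be sketched on the synthetic test case. The 500 Hz sampling rate and the 1 s heartbeat are assumptions; plain np.interp stands in for the Letter's segmented linear equations:

```python
import numpy as np

fs = 500                                        # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
baseline = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # 1 mVp-p, 0.1 Hz drift (mV)

# Three isoelectric anchor points per (assumed 1 s) heartbeat over 10 s
anchors = np.linspace(0, 10, 31)
anchor_vals = 0.5 * np.sin(2 * np.pi * 0.1 * anchors)

estimate = np.interp(t, anchors, anchor_vals)   # piecewise linear baseline
rms_uV = np.sqrt(np.mean((estimate - baseline) ** 2)) * 1000   # mV -> uV
```

With anchors every third of a second, the RMS error on the 0.1 Hz sinusoid lands in the low-microvolt range, the same order as the 0.9 μV mean reported in the Letter.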
Interpolation of the Extended Boolean Retrieval Model.
ERIC Educational Resources Information Center
Zanger, Daniel Z.
2002-01-01
Presents an interpolation theorem for an extended Boolean information retrieval model. Results show that whenever two or more documents are similarly ranked at any two points for a query containing exactly two terms, then they are similarly ranked at all points in between; and that results can fail for queries with more than two terms. (Author/LRW)
NASA Astrophysics Data System (ADS)
Yatagai, A. I.; Yasutomi, N.; Hamada, A.; Kamiguchi, K.; Arakawa, O.
2009-12-01
A daily gridded precipitation dataset for 1961-2007 is created by collecting rain gauge observation data across Asia through the activities of the Asian Precipitation--Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE) project. We have already released APHRODITE's daily gridded precipitation (APHRO_V0902) product for 1961-2004 (Yatagai et al., 2009), and our number of valid stations was between 5000 and 12,000, representing 2.3 to 4.5 times the data available through the Global Telecommunication System network, which were used for most daily grid precipitation products. APHRO_V0902 is the only long-term (1961 onward) continental-scale daily product that contains a dense network of daily rain gauge data for Asia including the Himalayas and mountainous areas in the Middle East. The product has already contributed to studies such as the evaluation of Asian water resources, diagnosis of climate change, statistical downscaling, and verification of numerical model simulation and high-resolution precipitation estimates using satellites. We are currently improving quality control (QC) schemes and interpolation algorithms, and make continuous efforts in data collection. In addition, we have undertaken capacity building activities, such as training seminars, by inviting researchers/programmers from some Asian meteorological organizations that provided the observation data for us. Furthermore, we feed the errata (QC) information back to the above organizations and/or data centers. The next version of the algorithm will be fixed in December 2009 (APHRO_V0912), and we will update the product up to 2007. Our progress and the advantages of the next product will be shown at the AGU Fall Meeting in 2009.
Seismic Propagation in the Kuriles/Kamchatka Region
1980-07-25
model the final profile is well-represented by a spline interpolation. Figure 7 shows the sampling grid used to input velocity perturbations due to the... A modification of Cagniard's method for solving seismic pulse problems, Appl. Sci. Res. B, 8, p. 349, 1960. Fuchs, K. and G. Muller, Computation of
NASA Astrophysics Data System (ADS)
Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.
2015-12-01
The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of any arbitrary unstructured 3D grid to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.
SWAT use of gridded observations for simulating runoff - a Vietnam river basin study
NASA Astrophysics Data System (ADS)
Vu, M. T.; Raghavan, S. V.; Liong, S. Y.
2012-08-01
Many research studies that focus on basin hydrology have applied the SWAT model using station data to simulate runoff. But over regions lacking robust station data, there is a problem of applying the model to study the hydrological responses. For some countries and remote areas, rainfall data availability might be a constraint due to many different reasons, such as lack of technology, wartime, and financial limitations, which make it difficult to construct runoff data. To overcome such a limitation, this research study uses some of the available globally gridded high-resolution precipitation datasets to simulate runoff. Five popular gridded observation precipitation datasets: (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP), (5) a modified version of the Global Historical Climatology Network (GHCN2), and one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dak Bla river (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to input the precipitation data at the closest grid points to the station locations. Sensitivity Analysis and Auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and Coefficient of Determination (R2) indices are used to benchmark the model performance. Results indicate that the APHRODITE dataset performed very well on a daily scale simulation of discharge, having a good NSE of 0.54 and R2 of 0.55, when compared to the discharge simulation using station data (0.68 and 0.71). 
The GPCP proved to be the next best dataset that was applied to the runoff modelling, with NSE and R2 of 0.46 and 0.51, respectively. The PERSIANN and TRMM rainfall-driven runoff did not show good agreement with the station data, as both the NSE and R2 indices showed a low value of 0.3. GHCN2 and NCEP also did not show good correlations. The varied results obtained using these datasets indicate that although the gauge-based and satellite-gauge merged products use some ground truth data, the different interpolation techniques and merging algorithms could also be a source of uncertainties. This entails a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets. Such a methodology is also useful for planning of rainfall-runoff modelling and reservoir/river management at both rural and urban scales.
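The two benchmark indices used above are easy to compute directly; the discharge values below are made-up numbers chosen to show how NSE penalises bias while R2 does not:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance sum of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([5.0, 7.0, 9.0, 12.0, 8.0, 6.0])         # observed discharge
sim_good = obs + np.array([0.3, -0.2, 0.4, -0.5, 0.1, -0.3])
sim_bias = obs + 3.0            # perfectly correlated but biased simulation

nse_good, nse_bias = nse(obs, sim_good), nse(obs, sim_bias)
r2_bias = r2(obs, sim_bias)
```

A biased simulation can score R2 = 1 yet a negative NSE, which is why the study reports both indices when ranking the precipitation datasets.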
NASA Astrophysics Data System (ADS)
Rienecker, M. M.; Adamec, D.
1995-01-01
An ensemble of fraternal-twin experiments is used to assess the utility of optimal interpolation and model-based vertical empirical orthogonal functions (eofs) of streamfunction variability to assimilate satellite altimeter data into ocean models. Simulated altimeter data are assimilated into a basin-wide 3-layer quasi-geostrophic model with a horizontal grid spacing of 15 km. The effects of bottom topography are included and the model is forced by a wind stress curl distribution which is constant in time. The simulated data are extracted, along altimeter tracks with spatial and temporal characteristics of Geosat, from a reference model ocean with a slightly different climatology from that generated by the model used for assimilation. The use of vertical eofs determined from the model-generated streamfunction variability is shown to be effective in aiding the model's dynamical extrapolation of the surface information throughout the rest of the water column. After a single repeat cycle (17 days), the analysis errors are reduced markedly from the initial level, by 52% in the surface layer, 41% in the second layer and 11% in the bottom layer. The largest differences between the assimilation analysis and the reference ocean are found in the nonlinear regime of the mid-latitude jet in all layers. After 100 days of assimilation, the error in the upper two layers has been reduced by over 50% and that in the bottom layer by 38%. The essence of the method is that the eofs capture the statistics of the dynamical balances in the model and ensure that this balance is not inappropriately disturbed during the assimilation process. This statistical balance includes any potential vorticity homogeneity which may be associated with the eddy stirring by mid-latitude surface jets.
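The role of the vertical EOFs, extrapolating surface information downward through a dominant vertical structure, can be sketched on synthetic 3-layer data (the layer structure, sample count, and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "streamfunction variability": 3 layers, 500 time samples, with
# most variance along a single surface-intensified vertical structure.
mode = np.array([1.0, 0.6, 0.1])
amps = rng.standard_normal(500)
data = amps[:, None] * mode[None, :] + 0.05 * rng.standard_normal((500, 3))

# Vertical EOFs: eigenvectors of the layer-by-layer covariance matrix
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
leading = eigvecs[:, -1]                 # EOF with the largest variance
explained = eigvals[-1] / eigvals.sum()  # fraction of variance it captures

# Project a surface-only "altimeter observation" onto the leading EOF to
# extrapolate the signal to the lower layers (the essence of the method)
surface_obs = 2.0
profile = surface_obs / leading[0] * leading
```

Because the EOFs encode the model's own statistical balance between layers, the projected profile respects that balance rather than inserting the surface increment into the top layer alone.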
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. in 2013 for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
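KDDM itself maps between kernel density estimates of the model and observed distributions; as a simplified stand-in for the same distribution-mapping idea, plain empirical quantile mapping can be sketched (synthetic data, hypothetical function name):

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: a simplified stand-in for KDDM.

    Finds each value's quantile within the model distribution and
    returns the corresponding quantile of the observed distribution.
    """
    model = np.sort(np.asarray(model))
    obs = np.sort(np.asarray(obs))
    p = np.searchsorted(model, values, side="right") / len(model)
    return np.quantile(obs, np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(0)
model = rng.normal(2.0, 1.5, 5000)   # synthetic biased model output
obs = rng.normal(0.0, 1.0, 5000)     # synthetic observations
corrected = quantile_map(model, obs, model)
print(round(float(corrected.mean()), 2), round(float(corrected.std()), 2))
```

KDDM replaces the empirical CDFs above with smooth kernel density estimates, which is what the tail-focused variants (binned and logspline KDE, generalized Pareto tails) aim to improve.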
Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods
NASA Technical Reports Server (NTRS)
Liu, Bing; Asseng, Senthold; Muller, Christoph; Ewart, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.;
2016-01-01
The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
Similar estimates of temperature impacts on global wheat yield by three independent methods
NASA Astrophysics Data System (ADS)
Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan
2016-12-01
The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
NASA Astrophysics Data System (ADS)
Barfod, Adrian A. S.; Møller, Ingelise; Christiansen, Anders V.
2016-11-01
We present a large-scale study of the petrophysical relationship of resistivities obtained from densely sampled ground-based and airborne transient electromagnetic surveys and lithological information from boreholes. The overriding aim of this study is to develop a framework for examining the resistivity-lithology relationship in a statistical manner and apply this framework to gain a better description of the large-scale resistivity structures of the subsurface. In Denmark very large and extensive datasets are available through the national geophysical and borehole databases, GERDA and JUPITER respectively. In a 10 by 10 km grid, these data are compiled into histograms of resistivity versus lithology. To do this, the geophysical data are interpolated to the position of the boreholes, which allows for a lithological categorization of the interpolated resistivity values, yielding different histograms for a set of desired lithological categories. By applying the proposed algorithm to all available boreholes and airborne and ground-based transient electromagnetic data we build nation-wide maps of the resistivity-lithology relationships in Denmark. The presented Resistivity Atlas reveals varying patterns in the large-scale resistivity-lithology relations, reflecting geological details such as available source material for tills. The resistivity maps also reveal a clear ambiguity in the resistivity values for different lithologies. The Resistivity Atlas is highly useful when geophysical data are to be used for geological or hydrological modeling.
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular, for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
NASA Astrophysics Data System (ADS)
Jones, M.; Longenecker, H. E., III
2017-12-01
The 2017 hurricane season brought the unprecedented landfall of three Category 4 hurricanes (Harvey, Irma and Maria). FEMA is responsible for coordinating the federal response and recovery efforts for large disasters such as these. FEMA depends on timely and accurate depth grids to estimate hazard exposure, model damage assessments, plan flight paths for imagery acquisition, and prioritize response efforts. In order to produce riverine or coastal depth grids based on observed flooding, the methodology requires peak crest water levels at stream gauges, tide gauges, and high water marks, together with best-available elevation data. Because peak crest data is not available until the apex of a flooding event, and high water marks may take up to several weeks for field teams to collect after a large-scale flooding event, final observed depth grids are not available to FEMA until several days after a flood has begun to subside. Within the last decade NOAA's National Weather Service (NWS) has implemented the Advanced Hydrologic Prediction Service (AHPS), a web-based suite of accurate forecast products that provide hydrograph forecasts at over 3,500 stream gauge locations across the United States. These forecasts have been newly implemented into an automated depth grid script tool, using predicted instead of observed water levels, allowing FEMA access to flood hazard information up to 3 days prior to a flooding event. Water depths are calculated from the AHPS predicted flood stages and are interpolated at 100m spacing along NHD hydrolines within the basin of interest. A water surface elevation raster is generated from these water depths using an Inverse Distance Weighted interpolation. Then, elevation (USGS NED 30m) is subtracted from the water surface elevation raster so that the remaining values represent the depth of predicted flooding above the ground surface. 
This automated process requires minimal user input and produces forecasted depth grids that are comparable to post-event observed depth grids and remote sensing-derived flood extents for the 2017 hurricane season. These newly available forecasted models were used for pre-event response planning and early estimated hazard exposure counts, allowing FEMA to plan for and stand up operations several days sooner than previously possible.
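The depth-grid recipe above (interpolate water surface elevations, then subtract ground elevation) can be sketched with a plain IDW interpolator; the point coordinates, water levels, and flat 8 m terrain below are invented for illustration, not FEMA's data:

```python
import numpy as np

def idw_surface(xy_pts, z_pts, grid_x, grid_y, power=2.0):
    """Inverse Distance Weighted interpolation of water surface
    elevations (z_pts at xy_pts) onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    d = np.hypot(gx[..., None] - xy_pts[:, 0], gy[..., None] - xy_pts[:, 1])
    d = np.maximum(d, 1e-9)              # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * z_pts).sum(-1) / w.sum(-1)

# Hypothetical water levels at points spaced along a hydroline (m)
pts = np.array([[0.0, 0.0], [100.0, 0.0], [200.0, 0.0]])
levels = np.array([10.0, 9.5, 9.0])
wse = idw_surface(pts, levels, np.linspace(0, 200, 5), np.array([0.0]))
dem = np.full_like(wse, 8.0)             # flat terrain at 8 m, for illustration
depth = np.maximum(wse - dem, 0.0)       # predicted flood depth above ground
print(depth.round(2))
```

The clamp to zero mirrors the final step of the described workflow: cells where the interpolated water surface falls below ground are not flooded.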
Ellis, Robert J; Zhu, Bilei; Koenig, Julian; Thayer, Julian F; Wang, Ye
2015-09-01
As the literature on heart rate variability (HRV) continues to burgeon, so too do the challenges faced with comparing results across studies conducted under different recording conditions and analysis options. Two important methodological considerations are (1) what sampling frequency (SF) to use when digitizing the electrocardiogram (ECG), and (2) whether to interpolate an ECG to enhance the accuracy of R-peak detection. Although specific recommendations have been offered on both points, the evidence used to support them can be seen to possess a number of methodological limitations. The present study takes a new and careful look at how SF influences 24 widely used time- and frequency-domain measures of HRV through the use of a Monte Carlo-based analysis of false positive rates (FPRs) associated with two-sample tests on independent sets of healthy subjects. HRV values from the first sample were calculated at 1000 Hz, and HRV values from the second sample were calculated at progressively lower SFs (and either with or without R-peak interpolation). When R-peak interpolation was applied prior to HRV calculation, FPRs for all HRV measures remained very close to 0.05 (i.e. the theoretically expected value), even when the second sample had an SF well below 100 Hz. Without R-peak interpolation, all HRV measures held their expected FPR down to 125 Hz (and far lower, in the case of some measures). These results provide concrete insights into the statistical validity of comparing datasets obtained at (potentially) very different SFs; comparisons which are particularly relevant for the domains of meta-analysis and mobile health.
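The Monte Carlo false-positive-rate logic described above can be illustrated with a toy version: two samples drawn from the same population should be flagged as "different" about 5% of the time. Normal data and a Welch t statistic stand in here for real HRV measures:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, trials, n = 0.05, 2000, 30
rejections = 0
for _ in range(trials):
    a = rng.normal(50.0, 10.0, n)   # sample 1 (e.g. an SDNN-like measure)
    b = rng.normal(50.0, 10.0, n)   # sample 2, drawn from the same population
    # Welch's two-sample t statistic
    t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    if abs(t) > 2.0:                # approximate critical value for alpha = 0.05
        rejections += 1
fpr = rejections / trials
print(f"empirical FPR: {fpr:.3f}")
```

In the study's design, the second sample is recomputed at a lower sampling frequency; an FPR staying near 0.05 is the evidence that the SF reduction does not bias between-group comparisons.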
Comparison of global sst analyses for atmospheric data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phoebus, P.A.; Cummings, J.A.
1995-03-17
Traditionally, atmospheric models were executed using a climatological estimate of the sea surface temperature (SST) to define the marine boundary layer. More recently, particularly since the deployment of remote sensing instruments and the advent of multichannel SST observations, atmospheric models have been improved by using more timely estimates of the actual state of the ocean. Typically, some type of objective analysis is performed using the data from satellites along with ship, buoy, and bathythermograph observations, and perhaps even climatology, to produce a weekly or daily analysis of global SST. Some of the earlier efforts to produce real-time global temperature analyses have been described by Clancy and Pollak (1983) and Reynolds (1988). However, just as new techniques have been developed for atmospheric data assimilation, improvements have been made to ocean data assimilation systems as well. In 1988, the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC) implemented a global three-dimensional ocean temperature analysis that was based on the optimum interpolation methodology (Clancy et al., 1990). This system, the Optimum Thermal Interpolation System (OTIS 1.0), was initially distributed on a 2.5° resolution grid, and was later modified to generate fields on a 1.25° grid (OTIS 1.1; Clancy et al., 1992). Other optimum interpolation-based analyses (OTIS 3.0) were developed by FNMOC to perform high-resolution three-dimensional ocean thermal analyses in areas with strong frontal gradients and clearly defined water mass characteristics.
3D Thermal Stratification of Koycegiz Lake, Turkey.
NASA Astrophysics Data System (ADS)
Gurcan, Tugba; Kurtulus, Bedri; Avsar, Ozgur; Avsar, Ulas
2017-04-01
Water temperature in lakes, streams and coastal areas is an important indicator for several purposes (water quality, aquatic organisms, land use, etc.). There are over a hundred lakes in Turkey, most of them located in the area known as the Lake District in southwestern Turkey. The study area is located in the south and southwest of Turkey, in the Muǧla region. The present study focuses on determining possible thermocline changes in Lake Köyceǧiz by in-situ measurements. The measurements were made during two snapshot campaigns in July and August 2013. Using the Mugla Sıtkı Kocman University geological engineering floating platform, temperature, specific conductance, salinity and depth values were measured with YSI 6600 and Horiba U2 devices at the surface and at depth in Lake Köyceǧiz on a specific grid, while the water depth and coordinates were recorded by GPS. Scattered data interpolation is used to perform interpolation on a scattered dataset that resides in 3D space. The 3D temperature color mesh grid was generated using Delaunay triangulation and natural neighbor interpolation. At the end of the study, a 3D conceptual lake temperature dynamics model was reconstructed using MATLAB functions. The results show that Lake Köyceǧiz is a meromictic lake with a significant decrease in temperature at 7 m depth. In this regard, we would also like to thank the TUBITAK project (112Y137), the French Embassy in Turkey and the Sıtkı Kocman Foundation for their financial support.
NASA Astrophysics Data System (ADS)
Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.
2015-12-01
A big data geospatial analytics platform: Physical Analytics Information Repository and Services (PAIRS), IBM TJ Watson Research Center, Yorktown Heights, NY 10598. A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and run these models in real time. A geospatial data platform called Physical Analytics Information and Services (PAIRS) is developed on top of an open source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that is a combination of location and time stamp. The indexing allows quick access to data sets that are part of the global data layers and makes it possible to retrieve only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while being very robust and stable. The data is stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. 
Multi-layer queries enable PAIRS to filter different data layers based on specific conditions (e.g., analyzing the flooding risk of a property based on topography, the soil's ability to hold water, and forecasted precipitation) or to retrieve information about locations that share similar weather and vegetation patterns during extreme weather events such as heat waves.
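The abstract does not spell out how the combined location-time pixel index is built; one common way to realize such a key is a Morton (Z-order) code for location with the timestamp appended, sketched below as an assumption rather than PAIRS's actual scheme:

```python
from datetime import datetime, timezone

def pairs_style_key(lat, lon, t, level=10):
    """Hypothetical space-time index: interleave the bits of the
    discretized lat/lon (a Morton / Z-order code) and append the epoch
    timestamp. Each extra bit level doubles the spatial resolution,
    mirroring the doubling grid layers described above."""
    yi = int((lat + 90.0) / 180.0 * ((1 << level) - 1))
    xi = int((lon + 180.0) / 360.0 * ((1 << level) - 1))
    code = 0
    for b in range(level):                  # interleave bits: x, y, x, y, ...
        code |= ((xi >> b) & 1) << (2 * b)
        code |= ((yi >> b) & 1) << (2 * b + 1)
    epoch = int(t.replace(tzinfo=timezone.utc).timestamp())
    return (code << 32) | epoch             # spatial prefix, temporal suffix

k = pairs_style_key(41.2, -73.8, datetime(2015, 7, 1))
print(hex(k))
```

With a spatial prefix, all observations of one pixel over time share a key range, which is what makes "retrieve only the data of interest" cheap in a sorted key-value store like HBase.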
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing some part of the computation on a coarsened grid. We apply the CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field data is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral elements using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. (FiN Lab: Laboratory for Fluid Dynamics in Nature.)
On the uncertainties associated with using gridded rainfall data as a proxy for observed
NASA Astrophysics Data System (ADS)
Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.
2011-09-01
Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the rainfall gauge network is spatially and temporally sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall which leads to hydroclimatic extremes (i.e. droughts, floods). This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia (SA) initially using gridded data as the source of rainfall input and then gauged rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged or point data. Rather the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. 
Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.
NASA Astrophysics Data System (ADS)
Woodrow, Kathryn; Lindsay, John B.; Berg, Aaron A.
2016-09-01
Although digital elevation models (DEMs) prove useful for a number of hydrological applications, they are often the end result of numerous processing steps, each of which contains uncertainty. These uncertainties have the potential to greatly influence DEM quality and to further propagate to DEM-derived attributes including derived surface and near-surface drainage patterns. This research examines the impacts of DEM grid resolution, elevation source data, and conditioning techniques on the spatial and statistical distribution of field-scale hydrological attributes for a 12,000 ha watershed of an agricultural area within southwestern Ontario, Canada. Three conditioning techniques, including depression filling (DF), depression breaching (DB), and stream burning (SB), were examined. The catchments draining to each boundary of 7933 agricultural fields were delineated using the surface drainage patterns modeled from LiDAR data, interpolated to 1 m, 5 m, and 10 m resolution DEMs, and from a 10 m resolution photogrammetric DEM. The results showed that variation in DEM grid resolution resulted in significant differences in the spatial and statistical distributions of contributing areas and the distributions of downslope flowpath length. Degrading the grid resolution of the LiDAR data from 1 m to 10 m resulted in a disagreement in mapped contributing areas of between 29.4% and 37.3% of the study area, depending on the DEM conditioning technique. The disagreements among the field-scale contributing areas mapped from the 10 m LiDAR DEM and photogrammetric DEM were large, with nearly half of the study area draining to alternate field boundaries. 
Differences in derived contributing areas and flowpaths among various conditioning techniques increased substantially at finer grid resolutions, with the largest disagreement among mapped contributing areas occurring between the 1 m resolution DB DEM and the SB DEM (37% disagreement) and the DB-DF comparison (36.5% disagreement in mapped areas). These results demonstrate that the decision to use one DEM conditioning technique over another, and the constraints of available DEM data resolution and source, can greatly impact the modeled surface drainage patterns at the scale of individual fields. This work has significance for applications that attempt to optimize best-management practices (BMPs) for reducing soil erosion and runoff contamination within agricultural watersheds.
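Of the three conditioning techniques compared above, depression filling (DF) is the easiest to sketch; below is a minimal priority-flood fill on a toy DEM, an illustrative sketch rather than the study's implementation:

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling, a common DEM conditioning step:
    flood inward from the grid edges, never letting elevation drop below
    the spill level already reached, so every cell can drain off-grid."""
    dem = dem.astype(float).copy()
    nr, nc = dem.shape
    seen = np.zeros((nr, nc), dtype=bool)
    heap = []
    for r in range(nr):                       # seed the flood from all edge cells
        for c in range(nc):
            if r in (0, nr - 1) or c in (0, nc - 1):
                heapq.heappush(heap, (dem[r, c], r, c))
                seen[r, c] = True
    while heap:                               # process cells lowest-first
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < nr and 0 <= cc < nc and not seen[rr, cc]:
                dem[rr, cc] = max(dem[rr, cc], z)   # raise pit cells to spill level
                seen[rr, cc] = True
                heapq.heappush(heap, (dem[rr, cc], rr, cc))
    return dem

dem = np.array([[5, 5, 5, 5],
                [5, 1, 2, 5],
                [5, 2, 1, 5],
                [5, 5, 5, 4]], float)
print(fill_depressions(dem))
```

Depression breaching (DB) instead lowers a channel out of each pit, and stream burning (SB) lowers cells along a mapped stream network; the choice among them is exactly what drives the disagreements quantified above.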
Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul
2013-01-01
Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, where a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. 
A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas but climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra Nevada Mountains.
Validation of China-wide interpolated daily climate variables from 1960 to 2011
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang
2015-02-01
Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. 
The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
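A thin-plate spline of the kind used in the studies above can be written out directly from its radial basis r² log r plus an affine part; below is a minimal exact-interpolation sketch on synthetic station data (hypothetical function name, no smoothing term):

```python
import numpy as np

def tps_fit_predict(xy, z, xy_new):
    """Minimal 2D thin-plate spline interpolator: radial basis
    phi(r) = r^2 log r plus an affine polynomial part, fit exactly
    through the (xy, z) samples."""
    def phi(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r ** 2 * np.log(r), 0.0)
    n = len(xy)
    K = phi(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), xy])          # affine part: 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([z, np.zeros(3)])
    coef = np.linalg.solve(A, b)                  # weights + affine coefficients
    w, a = coef[:n], coef[n:]
    Knew = phi(np.linalg.norm(xy_new[:, None] - xy[None, :], axis=-1))
    return Knew @ w + a[0] + xy_new @ a[1:]

# Hypothetical station coordinates and a smooth synthetic temperature field
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (40, 2))
z = 20.0 + 0.5 * xy[:, 0] - 0.3 * xy[:, 1]
pred = tps_fit_predict(xy, z, np.array([[5.0, 5.0]]))
print(round(float(pred[0]), 2))
```

Because the synthetic field is affine, the polynomial part reproduces it exactly; operational schemes add a smoothing parameter and covariates (elevation, etc.) to this same skeleton.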
NASA Astrophysics Data System (ADS)
Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor
2018-03-01
As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. 
Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1.0), interpolation between the base model grids is performed via the COSMO preprocessing tool INT2LM, which was implemented into the MMD submodel for online interpolation, specifically for mapping onto the rotated COSMO grid. A more flexible algorithm is required for the backward mapping. Thus, MMD (v2.0) uses the new MESSy submodel GRID for the generalised definition of arbitrary grids and for the transformation of data between them. In this article, we explain the basics of the MMD expansion and the newly developed generic MESSy submodel GRID (v1.0) and show some examples of the abovementioned applications.
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is used to study the performance of the post-processing technique on a database of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are applied to the modeled-observed pairs of the training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are then interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
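The coefficient-estimation step behind such a post-processor can be sketched with a minimal closed-form Bayesian linear regression; the toy observation-model pairs and variable names below are illustrative assumptions, not the NCAR ensemble data or the authors' optimization procedure:

```python
import numpy as np

def blr_coefficients(X, y, alpha=1.0, noise_var=1.0):
    """Posterior mean of the regression weights under a zero-mean
    Gaussian prior (precision alpha) and Gaussian observation noise."""
    d = X.shape[1]
    A = alpha * np.eye(d) + X.T @ X / noise_var   # posterior precision matrix
    return np.linalg.solve(A, X.T @ y / noise_var)

# toy observation-model pairs: raw forecasts with a linear bias
rng = np.random.default_rng(0)
raw = rng.uniform(5.0, 25.0, size=200)                  # raw wind-speed forecasts
obs = 0.8 * raw + 1.5 + rng.normal(0.0, 0.5, size=200)  # matching "observations"
X = np.column_stack([np.ones_like(raw), raw])           # intercept + raw forecast
w = blr_coefficients(X, obs, alpha=1e-6)                # recovers ~[1.5, 0.8]
corrected = X @ w                                       # post-processed forecasts
```

In a gridded variant, a coefficient vector like `w` would be fitted per station and then interpolated over the model grid before being applied to new raw forecasts.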
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
A grid-embedding transonic flow analysis computer program for wing/nacelle configurations
NASA Technical Reports Server (NTRS)
Atta, E. H.; Vadyak, J.
1983-01-01
An efficient grid-interfacing zonal algorithm was developed for computing the three-dimensional transonic flow field about wing/nacelle configurations. The algorithm uses the full-potential formulation and the AF2 approximate factorization scheme. The flow field solution is computed using a component-adaptive grid approach in which separate grids are employed for the individual components of the multi-component configuration, with each component grid optimized for a particular geometry such as the wing or nacelle. The wing and nacelle component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region using trivariate interpolation. This report presents a discussion of the computational methods used to generate both the wing and nacelle component grids, the technique used to interface the component grids, and the method used to obtain the inviscid flow solution. Computed results and correlations with experiment are presented. Also presented are discussions of the organization of the wing grid generation (GRGEN3) and nacelle grid generation (NGRIDA) computer programs, the grid interface (LK) computer program, and the wing/nacelle flow solution (TWN) computer program. Descriptions of the respective subroutines, definitions of the required input parameters, a discussion of how to interpret the output, and sample cases illustrating application of the analysis are provided for each of the four computer programs.
Similar negative impacts of temperature on global wheat yield estimated by three independent methods
USDA-ARS?s Scientific Manuscript database
The potential impact of global temperature change on global wheat production has recently been assessed with different methods, scaling and aggregation approaches. Here we show that grid-based simulations, point-based simulations, and statistical regressions produce similar estimates of temperature ...
Maus, S.; Barckhausen, U.; Berkenbosch, H.; Bournas, N.; Brozena, J.; Childers, V.; Dostaler, F.; Fairhead, J.D.; Finn, C.; von Frese, R.R.B; Gaina, C.; Golynsky, S.; Kucks, R.; Lu, Hai; Milligan, P.; Mogren, S.; Muller, R.D.; Olesen, O.; Pilkington, M.; Saltus, R.; Schreckenberger, B.; Thebault, E.; Tontini, F.C.
2009-01-01
A global Earth Magnetic Anomaly Grid (EMAG2) has been compiled from satellite, ship, and airborne magnetic measurements. EMAG2 is a significant update of our previous candidate grid for the World Digital Magnetic Anomaly Map. The resolution has been improved from 3 arc min to 2 arc min, and the altitude has been reduced from 5 km to 4 km above the geoid. Additional grid and track line data have been included, both over land and the oceans. Wherever available, the original shipborne and airborne data were used instead of precompiled oceanic magnetic grids. Interpolation between sparse track lines in the oceans was improved by directional gridding and extrapolation, based on an oceanic crustal age model. The longest wavelengths (>330 km) were replaced with the latest CHAMP satellite magnetic field model MF6. EMAG2 is available at http://geomag.org/models/EMAG2 and for permanent archive at http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=970. © 2009 by the American Geophysical Union.
SteamTablesGrid: An ActiveX control for thermodynamic properties of pure water
NASA Astrophysics Data System (ADS)
Verma, Mahendra P.
2011-04-01
An ActiveX control, steam tables grid (StmTblGrd), is developed to speed up the calculation of the thermodynamic properties of pure water. First, it creates a grid (matrix) for a specified range of temperature (e.g. 400-600 K with 40 segments) and pressure (e.g. 100,000-20,000,000 Pa with 40 segments). Using the ActiveX component SteamTables, the values of selected properties of water are calculated for each element (nodal point) of the 41×41 matrix. The created grid can be saved in a file for reuse. Linear interpolation within an individual phase, vapor or liquid, is implemented to calculate the properties at a given temperature and pressure. A demonstration program illustrating the functionality of StmTblGrd is written in Visual Basic 6.0. Similarly, a methodology is presented to explain the use of StmTblGrd in MS-Excel 2007. In an Excel worksheet, the enthalpy of 1000 random temperature-pressure pairs is calculated using both StmTblGrd and SteamTables. The uncertainty in the enthalpy calculated with StmTblGrd is within ±0.03%. The calculations were performed on a personal computer with a Pentium(R) 4 CPU at 3.2 GHz, 1.0 GB of RAM, and Windows XP. The total execution time for the calculation with StmTblGrd was 0.3 s, while it was 60.0 s for SteamTables. Thus, the ActiveX control approach is reliable, accurate and efficient for the numerical simulation of complex systems that demand the thermodynamic properties of water at many values of temperature and pressure, such as steam flow in a geothermal pipeline network.
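The precompute-then-interpolate strategy can be sketched as follows. This is not the StmTblGrd ActiveX API, and the analytic property function is a placeholder for the real steam-table values that SteamTables would supply:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# 41x41 grid of a water property over temperature (K) and pressure (Pa);
# the formula below is a stand-in for real steam-table enthalpy values
T = np.linspace(400.0, 600.0, 41)
P = np.linspace(1.0e5, 2.0e7, 41)
TT, PP = np.meshgrid(T, P, indexing="ij")
H = 4.2e3 * TT + 1.0e-3 * PP          # hypothetical "enthalpy" (J/kg)

lookup = RegularGridInterpolator((T, P), H)   # piecewise-linear by default
h = lookup([[450.0, 5.0e6]])[0]               # interpolated value at (T, P)
```

Once the grid is built (the expensive step), each lookup is a cheap bilinear interpolation, which is the source of the roughly 200x speed-up reported above.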
A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2016-04-01
Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual for raw point clouds to be filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it entails a considerable loss of valuable information. First, even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably forces the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be chosen and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. 
DBSCAN requires two input parameters that strongly influence the number, shape and size of the detected clusters: the minimum number of points (i) within a maximum distance (ii) of each core point. Under this condition, seed points are said to be density-reachable by a core point delimiting a cluster around it. A chain of intermediate seed points can connect contiguous clusters, allowing clusters of arbitrary shape to be defined. The novelty of the proposed approach lies in the implementation of the DBSCAN 3D module, where the xyz-coordinates identify each point and the density of points within a sphere is considered. This allows volumetric features to be detected with higher accuracy, dependent only on the actual sampling resolution. The approach is truly 3D and exploits all TLS measurements without the need for interpolation or data reduction. Using this method, enhanced geomorphological activity was observed during the summer of 2015 with respect to the previous two years. We attribute this result to the exceptionally high temperatures of that summer, which we deem responsible for accelerating the melting process at the rock glacier front and probably also for increasing creep velocities. References: - Tonini, M. and Abellan, A. (2014). Rockfall detection from terrestrial LiDAR point clouds: A clustering approach using R. Journal of Spatial Information Science, 8, 95-110. - Hennig, C. (2015). Package fpc: Flexible procedures for clustering. https://cran.r-project.org/web/packages/fpc/index.html. Accessed 2016-01-12.
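A minimal sketch of the 3D clustering step, using scikit-learn's DBSCAN on synthetic xyz points rather than the authors' module or real TLS data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# two dense patches of "change" points plus sparse noise, all in xyz
deposit = rng.normal([0.0, 0.0, 0.0], 0.1, size=(50, 3))
erosion = rng.normal([5.0, 5.0, 1.0], 0.1, size=(50, 3))
noise = rng.uniform(-2.0, 8.0, size=(10, 3))
pts = np.vstack([deposit, erosion, noise])

# eps is the radius of the search sphere around each point (parameter ii);
# min_samples is the minimum neighbour count for a core point (parameter i)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
```

Because the distance is computed in full xyz space, each detected cluster is a volumetric feature; erosion and deposition volumes could then be quantified per cluster.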
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1992-01-01
A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in the numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. At the same time, it avoids the inaccuracy incurred through the geometry and variable interpolations used by previous Lagrangian methods. Unlike previously proposed Lagrangian methods, which are valid only for supersonic flows, the present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining fairly uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.
On removing interpolation and resampling artifacts in rigid image registration.
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce
2013-02-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
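The idea of RBF-based spectrum interpolation over irregularly distributed stellar parameters can be sketched with SciPy's RBFInterpolator. The authors' code is in MATLAB, and the parameters and "spectra" below are synthetic stand-ins, not a real library such as MILES:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
# irregularly distributed stellar parameters: (log Teff, log g)
params = np.column_stack([rng.uniform(3.5, 4.0, 80), rng.uniform(1.0, 5.0, 80)])
wave = np.linspace(4000.0, 7000.0, 300)       # wavelength grid (Angstrom)
# stand-in "library spectra": a smooth function of the parameters
flux = params[:, :1] + 0.1 * params[:, 1:2] * np.sin(wave / 500.0)

# one vector-valued RBF fit maps parameters directly to whole spectra
interp = RBFInterpolator(params, flux)        # thin-plate-spline kernel by default
spectrum = interp([[3.76, 4.4]])[0]           # spectrum of an unsampled star
```

Fitting a single vector-valued RBF network, rather than one interpolant per wavelength bin, is what keeps this approach computationally efficient.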
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2008-01-01
This talk will review the status and progress of the NASA/Global Modeling and Assimilation Office (GMAO) atmospheric global reanalysis project called the Modern Era Retrospective-Analysis for Research and Applications (MERRA). An overview of NASA's emerging capabilities for assimilating a variety of other Earth Science observations of the land, ocean, and atmospheric constituents will also be presented. MERRA supports NASA Earth science by synthesizing the current suite of research satellite observations in a climate data context (covering the period 1979-present), and by providing the science and applications communities with a broad range of weather and climate data, with an emphasis on improved estimates of the hydrological cycle. MERRA is based on a major new version of the Goddard Earth Observing System Data Assimilation System (GEOS-5), which includes the Earth System Modeling Framework (ESMF)-based GEOS-5 atmospheric general circulation model and the new NOAA National Centers for Environmental Prediction (NCEP) unified grid-point statistical interpolation (GSI) analysis scheme developed as a collaborative effort between NCEP and the GMAO. In addition to MERRA, the GMAO is developing new capabilities in aerosol and constituent assimilation and in ocean, ocean biology, and land surface assimilation. This includes the development of an assimilation capability for tropospheric air quality monitoring and prediction, the development of a carbon-cycle modeling and assimilation system, and an ocean data assimilation system for use in coupled short-term climate forecasting.
An architecture for consolidating multidimensional time-series data onto a common coordinate grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shippert, Tim; Gaustad, Krista
Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation
NASA Astrophysics Data System (ADS)
Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.
2016-12-01
Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
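A minimal sketch of the optimal interpolation update, with a synthetic 1-D field standing in for the satellite-derived irradiance and hypothetical covariance choices for B and R (the paper's actual covariances are built from cloud location and thickness):

```python
import numpy as np

def optimal_interpolation(xb, B, H, y, R):
    """Update a background field xb with point observations y.
    B: background-error covariance, R: observation-error covariance,
    H: operator mapping the grid to the observation locations."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)                   # analysis field

# toy 1-D "satellite" field of 10 cells with a constant +0.1 bias
truth = np.linspace(0.2, 0.9, 10)
xb = truth + 0.1                          # satellite-derived estimate
H = np.zeros((3, 10))
H[0, 2] = H[1, 5] = H[2, 8] = 1.0         # three ground sensors
y = truth[[2, 5, 8]]                      # accurate point measurements
i = np.arange(10)
B = 0.05 * np.exp(-np.abs(i[:, None] - i[None, :]) / 2.0)  # correlated errors
R = 1e-4 * np.eye(3)                      # small sensor noise
xa = optimal_interpolation(xb, B, H, y, R)
```

Because B correlates neighbouring cells, the accurate point measurements correct not only the observed cells but also the satellite estimate in between, which is exactly the blending behaviour described above.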
Arc Jet Facility Test Condition Predictions Using the ADSI Code
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda
2015-01-01
The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
Rtop - an R package for interpolation along the stream network
NASA Astrophysics Data System (ADS)
Skøien, J. O.
2009-04-01
Geostatistical methods have been used only to a limited extent for estimation along stream networks, with a few exceptions (Gottschalk, 1993; Gottschalk et al., 2006; Sauquet et al., 2000; Skøien et al., 2006). Interpolation of runoff characteristics is more complicated than for the traditional random variables estimated by geostatistical methods, as the measurements have a more complicated support and many catchments are nested. Skøien et al. (2006) presented the Top-kriging model, which takes these effects into account for the interpolation of stream flow characteristics (exemplified by the 100-year flood). The method has here been implemented as a package in the statistical environment R (R Development Core Team, 2004). By taking advantage of the existing methods in R for working with spatial objects, and the extensive possibilities for visualizing the results, the package makes it considerably easier to apply the method to new data sets than the earlier implementation. Gottschalk, L. 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., I. Krasovskaia, E. Leblois, and E. Sauquet. 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Development Core Team. 2004. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Sauquet, E., L. Gottschalk, and E. Leblois. 2000. Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J. O., R. Merz, and G. Blöschl. 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. 
For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher on average than the respective %/DTA composite analysis (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower on average than with global maximum normalization (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
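A binomial confidence interval of the kind described can be sketched as follows. The paper does not specify the exact formula; the Wilson score interval below is one reasonable choice for a proportion estimated from a finite detector sample:

```python
import math

def pass_rate_ci(n_pass, n_total, z=1.96):
    """Wilson score interval for a pass rate observed on a
    low-density array that samples n_total points (z=1.96 -> 95%)."""
    p = n_pass / n_total
    denom = 1.0 + z**2 / n_total
    center = (p + z**2 / (2 * n_total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n_total + z**2 / (4 * n_total**2))
    return center - half, center + half

lo, hi = pass_rate_ci(92, 100)   # 92% pass rate from a 100-detector sample
```

The interval width shrinks with detector density: the same 92% observed on a 400-detector sample would yield a much tighter bound on the underlying high-density pass rate.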
Development of a gridded meteorological dataset over Java island, Indonesia 1985–2014
Yanto; Livneh, Ben; Rajagopalan, Balaji
2017-01-01
We describe a gridded daily meteorology dataset consisting of precipitation and minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution, spanning the 30 years 1985–2014. Importantly, this dataset represents a marked improvement over existing gridded datasets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite- or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius r of 25 km and power of influence α of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in the cross-validation, which shows that the gridded rainfall presented here produces the most reasonable performance. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The dataset, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology. PMID:28534871
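The IDW scheme with a 25 km search radius and power 3 can be sketched as follows; the station coordinates and values are illustrative, not the Java gauge data:

```python
import numpy as np

def idw(stations, values, targets, r=25.0, power=3.0):
    """Inverse Distance Weighting: each target is the weighted mean of
    station values within radius r (km), with weights 1/d**power."""
    out = np.full(len(targets), np.nan)
    for k, g in enumerate(targets):
        d = np.hypot(*(stations - g).T)       # distances to all stations
        near = d < r
        if not near.any():
            continue                          # no station in range -> NaN
        if np.any(d[near] == 0.0):            # target coincides with a station
            out[k] = values[near][d[near] == 0.0][0]
            continue
        w = 1.0 / d[near] ** power
        out[k] = np.sum(w * values[near]) / np.sum(w)
    return out

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # km coordinates
rain = np.array([5.0, 15.0, 10.0])                           # mm/day
grid = np.array([[0.0, 0.0], [5.0, 0.0], [100.0, 100.0]])
est = idw(stations, rain, grid)   # -> [5.0, 10.0, nan]
```

A higher power concentrates the weight on the nearest stations, while the search radius bounds how far a gauge can influence a grid cell.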
NASA Astrophysics Data System (ADS)
Contractor, S.; Donat, M.; Alexander, L. V.
2017-12-01
Reliable observations of precipitation are necessary to determine past changes in precipitation and to validate models, allowing for reliable future projections. Existing gauge-based gridded datasets of daily precipitation and satellite-based observations contain artefacts and have a short length of record, making them unsuitable for analysing precipitation extremes. The largest limiting factor for the gauge-based datasets is a dense and reliable station network. Currently, there are two major archives of global in situ daily rainfall data: the Global Historical Climatology Network (GHCN-Daily), hosted by the National Oceanic and Atmospheric Administration (NOAA), and that of the Global Precipitation Climatology Centre (GPCC), part of the Deutscher Wetterdienst (DWD). We combine the two data archives and use automated quality control techniques to create a reliable long-term network of raw station data, which we then interpolate using block kriging to create a global gridded dataset of daily precipitation going back to 1950. We compare our interpolated dataset with existing global gridded data of daily precipitation, NOAA Climate Prediction Center (CPC) Global V1.0 and GPCC Full Data Daily Version 1.0, as well as various regional datasets. We find that our raw station density is much higher than that of other datasets. To avoid artefacts due to station network variability, we provide multiple versions of our dataset based on various completeness criteria, as well as the standard deviation, kriging error and number of stations for each grid cell and timestep, to encourage responsible use of our dataset. Despite our efforts to increase the raw data density, the in situ station network remains sparse in India after the 1960s and in Africa throughout the timespan of the dataset. Our dataset will allow for more reliable global analyses of rainfall, including its extremes, and pave the way for better global precipitation observations with lower and more transparent uncertainties.
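Block kriging averages the kriging estimate over each grid cell; the core ordinary-kriging solve it builds on can be sketched in 1-D as follows. The exponential variogram and its parameters here are illustrative assumptions, not the fitted structure of this dataset:

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_query, sill=1.0, corr_len=2.0):
    """1-D ordinary kriging with an assumed exponential semivariogram
    gamma(h) = sill * (1 - exp(-h / corr_len)). Solves the standard
    kriging system [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma_0; 1]."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / corr_len))
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(x_obs[:, None] - x_obs[None, :]))
    A[n, n] = 0.0                      # Lagrange-multiplier corner
    out = np.empty(len(x_query))
    for i, xq in enumerate(x_query):
        b = np.ones(n + 1)
        b[:n] = gamma(np.abs(x_obs - xq))
        w = np.linalg.solve(A, b)[:n]  # kriging weights (sum to 1)
        out[i] = w @ z_obs
    return out

x = np.array([0.0, 1.0, 3.0])
z = np.array([1.0, 2.0, 4.0])
z_hat = ordinary_kriging(x, z, np.array([0.0, 2.0]))
```

Because the system enforces unit-sum weights and reproduces the data covariances, the estimator is exact at observation locations; the same solve also yields the kriging variance reported per grid cell in the dataset.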
A refined age grid for the Central North Atlantic
NASA Astrophysics Data System (ADS)
Luis, J. M.; Miranda, J.
2012-12-01
We present a digital model for the age of the Central North Atlantic as a geographical grid with 1 arc minute resolution. Our seafloor isochrons are identified following the 'grid procedure' described in the work of Luis and Miranda (2008). The grid itself, which was initially a locally improved version of the Verhoef et al. (1996) compilation, was improved in 2011 (Luis and Miranda, 2011) and further refined with the inclusion of Russian data north of the Charlie Gibbs FZ (personal communication, S. Mercuriev). The location and geometry of the Mid-Atlantic Ridge is now very well constrained by both magnetic anomalies and swath bathymetry data down to ~10 degrees N. We identified an extensive set of chrons: 0, 2A, 3, 3A, 4, 4A, 5, 6, 6C, 11-12, 13, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 32, 33r, M0, M2, M4, M10, M16, M21 and M25. The ages at each grid node are computed by linear interpolation of adjacent isochrons along the direction of the flow-lines. As a pre-processing step, each conjugate pair of isochrons was simplified by rotating one of them about the finite pole of that anomaly and using both the original picks and the rotated ones to calculate an average segment. Fracture zones are used to constrain each chron's shape. These procedures minimize the uncertainties in locations where one side of the basin has good identifications but the other is poorly defined, as is typical of many of the old isochrons. Care has also been taken to account for locations where significant ridge jumps were found. Ages of the ocean floor between the oldest identified magnetic anomalies and continental crust are interpolated using the oldest ages of Muller et al. (2008), which were themselves estimated from the ages of passive continental margin segments. This is a contribution to the MAREKH project (PTDC/MAR/108142/2008) funded by the Portuguese Science Foundation.
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict the hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs so as to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of the daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias-corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated which preserved both the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall.
The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic responses of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions were compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
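The idea of drawing spatial random fields that honor both a prescribed spatial correlation structure and a prescribed mean can be sketched with a Cholesky factorization. The exponential correlation model below is an illustrative assumption, not the study's fitted structure:

```python
import numpy as np

def correlated_field(coords, mean, corr_len=1.0, rng=None):
    """Draw a Gaussian random field with exponential spatial correlation
    exp(-d / corr_len) and a prescribed mean: factor the target
    correlation matrix C = L L^T and color white noise with L."""
    rng = np.random.default_rng(rng)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)                       # target correlation
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    return mean + L @ rng.standard_normal(len(coords))

# 5x5 grid of points, target mean 10 (e.g., a daily rainfall anomaly field)
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
field = correlated_field(xy, mean=10.0, corr_len=3.0, rng=0)
```

In the study's setting the mean would come from the coarse-scale GCM daily rainfall and the correlation matrix from the historic gridded observations; the coloring step itself is the same.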
Stress direction history of the western United States and Mexico since 85 Ma
NASA Astrophysics Data System (ADS)
Bird, Peter
2002-06-01
A data set of 369 paleostress direction indicators (sets of dikes, veins, or fault slip vectors) is collected from previous compilations and the geologic literature. Like contemporary data, these stress directions show great variability, even over short distances. Therefore statistical methods are helpful in deciding which apparent variations in space or in time are significant. First, the interpolation technique of Bird and Li [1996] is used to interpolate stress directions to a grid of evenly spaced points in each of seventeen 5-m.y. time steps since 85 Ma. Then, a t test is used to search for stress direction changes between pairs of time windows whose sense can be determined with some minimum confidence. Available data cannot resolve local stress provinces, and only the broadest changes affecting country-sized regions are reasonably certain. During 85-50 Ma, the most compressive horizontal stress azimuth $\hat{\sigma}_{1H}$ was fairly constant at ~68° (United States) to 75° (Mexico). During 50-35 Ma, both counterclockwise stress changes (in the Pacific Northwest) and clockwise stress changes (from Nevada to New Mexico) are seen, but only locally and with about 50% confidence. A major stress azimuth change by ~90° occurred at 33 +/- 2 Ma in Mexico and at 30 +/- 2 Ma in the western United States. This was probably an interchange between $\hat{\sigma}_1$ and $\hat{\sigma}_3$ caused by a decrease in horizontal compression and/or an increase in vertical compression. The most likely cause was the rollback of the horizontally subducting Farallon slab from under the southwestern United States and northwest Mexico, which was rapid during 35-25 Ma. After this transition, a clockwise rotation of the principal stress axes by 36°-48° occurred more gradually since 22 Ma, affecting the region between latitudes 28°N and 41°N.
This occurred as the lengthening Pacific/North America transform boundary gradually added dextral shear on northwest-striking planes to the previous stress field of SW-NE extension.
NASA Technical Reports Server (NTRS)
White, Warren B.; Tai, Chang-Kou; Holland, William R.
1990-01-01
The optimal interpolation method of Lorenc (1981) was used to conduct continuous assimilation of altimetric sea level differences from the simulated Geosat exact repeat mission (ERM) into a three-layer quasi-geostrophic eddy-resolving numerical ocean box model that simulates the statistics of mesoscale eddy activity in the western North Pacific. Assimilation was conducted continuously as the Geosat tracks appeared in simulated real time/space, with each track repeating every 17 days but occurring at different times and locations within the 17-day period, as would have occurred in a realistic nowcast situation. This interpolation method was also used to conduct the assimilation of referenced altimetric sea level differences into the same model, with the referencing of the altimetric sea level differences performed using the simulated sea level. The results of this dynamical interpolation procedure are compared with those of a statistical (i.e., optimum) interpolation procedure.
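The optimal-interpolation update used in such assimilation schemes can be sketched generically as follows; this is the textbook OI analysis step, not the specific Lorenc (1981) implementation:

```python
import numpy as np

def oi_update(xb, B, y, H, R):
    """One optimal-interpolation analysis step: blend a background state
    xb (error covariance B) with observations y (operator H, error
    covariance R) via the gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)                     # analysis state
    A = (np.eye(len(xb)) - K @ H) @ B              # analysis covariance
    return xa, A

# two grid points, one nearly perfect observation of the first point
xb = np.array([0.0, 0.0])
B = np.array([[1.0, 0.5], [0.5, 1.0]])             # correlated background errors
H = np.array([[1.0, 0.0]])
y = np.array([1.0])
R = np.array([[1e-6]])
xa, A = oi_update(xb, B, y, H, R)
```

The background error correlation spreads the single observation to the unobserved point (here the analysis at point 2 moves to about half the observed increment), which is exactly how along-track altimeter data can update off-track model state.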
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, Benjamin V.; Mao, Yao-Yuan; Becker, Matthew R.
2016-12-28
Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively, this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new $400\,\mathrm{Mpc}\,h^{-1}$ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.
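The interpolation between a mass-like and a velocity-like matching proxy can be illustrated schematically. The power-law form below is one common choice and is an assumption for illustration, not necessarily the paper's exact parameterization:

```python
def matching_proxy(v_vir, v_max, alpha):
    """Schematic interpolating halo proxy: alpha = 0 recovers a mass-like
    (virial velocity) proxy, alpha = 1 a v_max-like proxy, and
    intermediate alpha dials in concentration dependence, since the
    ratio v_max / v_vir tracks halo concentration at fixed mass."""
    return v_vir * (v_max / v_vir) ** alpha

v_vir, v_max = 200.0, 260.0   # km/s; illustrative halo values
```

Ranking halos by this proxy and matching to the galaxy luminosity function then gives catalogs whose assembly bias grows continuously with alpha, which is the knob the abstract describes constraining with clustering and satellite fractions.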
SWAT use of gridded observations for simulating runoff - a Vietnam river basin study
NASA Astrophysics Data System (ADS)
Vu, M. T.; Raghavan, S. V.; Liong, S. Y.
2011-12-01
Many research studies that focus on basin hydrology have used the SWAT model to simulate runoff. One common practice in calibrating the SWAT model is the use of station rainfall data to simulate runoff. Over regions lacking robust station data, however, applying the model to study hydrological responses is problematic. For some countries and remote areas, rainfall data availability may be constrained for many reasons, such as lack of technology, wartime disruption and financial limitations, making it difficult to construct runoff records. To overcome this limitation, this study uses some of the available global gridded high-resolution precipitation datasets to simulate runoff. Five popular gridded observational precipitation datasets, (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP) and (5) modified Global Historical Climatology Network version 2 (GHCN2), plus one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dakbla River (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to obtain precipitation input from the grid points closest to the station locations. Sensitivity analysis and auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and Coefficient of Determination (R²) indices are used to benchmark model performance. This entails a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets.
Such a methodology is also useful for rainfall-runoff planning and for reservoir/river management at both rural and urban scales.
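The bilinear interpolation step used above to move gridded precipitation onto station locations can be sketched as follows (a generic scheme on a unit-spaced grid, not any particular dataset's projection):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a regular unit-spaced grid `grid[j, i]`
    at fractional coordinates (x, y): blend the four surrounding nodes
    with weights given by the fractional offsets."""
    ny, nx = grid.shape
    i0 = min(int(np.floor(x)), nx - 2)   # clamp so the 2x2 stencil fits
    j0 = min(int(np.floor(y)), ny - 2)
    fx, fy = x - i0, y - j0
    return ((1 - fx) * (1 - fy) * grid[j0, i0]
            + fx * (1 - fy) * grid[j0, i0 + 1]
            + (1 - fx) * fy * grid[j0 + 1, i0]
            + fx * fy * grid[j0 + 1, i0 + 1])

g = np.array([[0.0, 1.0],
              [2.0, 3.0]])   # toy 2x2 precipitation grid
```

The interpolant reproduces the grid values at the nodes and varies linearly along grid lines, which is why it is the default choice for resampling smooth fields such as daily rainfall.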
NASA Astrophysics Data System (ADS)
Felkins, Joseph; Holley, Adam
2017-09-01
Determining the average lifetime of a neutron gives information about the fundamental parameters of interactions resulting from the charged weak current. It is also an input for calculations of the abundance of light elements in the early cosmos, which are also directly measured. Experimentalists have devised two major approaches to measuring the neutron lifetime: the beam experiment and the bottle experiment. For the bottle experiment, I have designed a computational algorithm based on a numerical technique that interpolates magnetic field values between measured points. This algorithm produces interpolated fields that satisfy the Maxwell-Heaviside equations, for use in a simulation that will investigate the rate of depolarization in the magnetic traps used for bottle experiments, such as the UCN τ experiment at Los Alamos National Lab. I will present how UCN depolarization can cause a systematic error in experiments like UCN τ. I will then describe the technique that I use for the interpolation, and will discuss how the interpolation accuracy changes with the number of measured points and the volume of the interpolated region. Supported by NSF Grant 1553861.
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart A.
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To address the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 m cell size over a combined area of 934 m² to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m² to 16 m². Analysis of predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy for variables as the resolution of data used to interpolate study areas became coarser. Proportional accuracy of interpolated models (r²) decreased, but was maintained up to 78% as the interpolation scale moved from 0.11 m² to 16 m². Results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
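The resolution-degradation experiment can be mimicked in a few lines: block-average a fine baseline map to coarser cells, map it back to the fine grid, and track RMSE against the baseline. This is a synthetic illustration with a smooth test surface, not the study's natural neighbor workflow:

```python
import numpy as np

def coarsen(grid, factor):
    """Block-average a fine grid by an integer factor (mimics resampling
    the fine baseline to a coarser interpolation scale)."""
    h, w = grid.shape
    return grid[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def rmse(pred, truth):
    """Root-mean-square error between a resampled map and the baseline."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# synthetic depth surface: error should grow as the cells get coarser
yy, xx = np.mgrid[0:64, 0:64]
depth = np.sin(xx / 8.0) + 0.5 * np.cos(yy / 5.0)
errors = [rmse(np.repeat(np.repeat(coarsen(depth, f), f, 0), f, 1), depth)
          for f in (2, 4, 8)]
```

For a smooth surface the RMSE grows monotonically with cell size, which is the qualitative decay pattern the study quantifies for real depth, flow, and substrate maps.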
An empirical model of electron and ion fluxes derived from observations at geosynchronous orbit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denton, M. H.; Thomsen, M. F.; Jordanova, V. K.
2015-04-01
Knowledge of the plasma fluxes at geosynchronous orbit is important to both scientific and operational investigations. We present a new empirical model of the ion flux and the electron flux at geosynchronous orbit (GEO) in the energy range ~1 eV to ~40 keV. The model is based on a total of 82 satellite-years of observations from the Magnetospheric Plasma Analyzer instruments on Los Alamos National Laboratory satellites at GEO. These data are assigned to a fixed grid of 24 local-times and 40 energies, at all possible values of Kp. Bi-linear interpolation is used between grid points to provide the ion flux and the electron flux values at any energy and local-time, for given values of geomagnetic activity (proxied by the 3-hour Kp index) and of solar activity (proxied by the daily F10.7 index). Initial comparison of the electron flux from the model with data from a Compact Environmental Anomaly Sensor II (CEASE-II), also located at geosynchronous orbit, indicates a good match during both quiet and disturbed periods. The model is available for distribution as a FORTRAN code that can be modified to suit user requirements.
NASA Astrophysics Data System (ADS)
Caroti, G.; Camiciottoli, F.; Piemonte, A.; Redini, M.
2013-01-01
The work stems from a joint study between the ASTRO Laboratory (Department of Civil and Industrial Engineering, University of Pisa), the municipality of Pisa and the province of Arezzo on the advanced analysis and use of digital elevation data. It is also framed within the research carried out by ASTRO on the definition of the priority informative layers for emergency management in the territory, as of PRIN 2008. Specifically, this work continues other already published results concerning rigorous accuracy checks of LIDAR data and the testing of procedures to transform raw data into formats consistent with CTR and survey data. An analysis is presented of riverbed cross-sections derived by interpolation from DTMs of different grid densities, compared with those surveyed topographically. Validation of the DTMs by differential GNSS methodology showed a good overall quality of the model for open, low-sloping areas. Analysis of the sections, however, has shown that the representation of small or high-sloping morphological elements (ditches, embankments) requires a high point density, such as in laser scanner surveys, and a small grid mesh size. In addition, the correct representation of riverside structures is often hindered by the presence of thick vegetation and poor raw LIDAR data filtering.
Computational aeroelasticity using a pressure-based solver
NASA Astrophysics Data System (ADS)
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall-function treatment for the near-wall region was used to perform turbulent flow calculations. The relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economical for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer the necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving-boundary module. The moving-grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved; this is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and with previous numerical results. The computational methodology exhibited the capability to predict both qualitative and quantitative features of aeroelasticity.
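The implicit Newmark time-marching scheme mentioned above can be sketched for a single-degree-of-freedom structure. Average-acceleration parameters (β = 1/4, γ = 1/2) are assumed here; the actual solver integrates the full Bernoulli-Euler beam equations:

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of m*u'' + c*u' + k*u = f(t):
    solve for u[i+1] from an effective stiffness, then back out the
    new acceleration and velocity from the Newmark update formulas."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n - 1):
        # effective load assembled from the state at step i
        p = (f[i + 1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                    + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma * u[i] / (beta * dt)
                    + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

# step response of a damped oscillator: settles to the static value f/k
f = np.full(5001, 1.0)
u = newmark_sdof(m=1.0, c=0.5, k=4.0, f=f, dt=0.01)
```

The average-acceleration variant is unconditionally stable, which is what makes it attractive for the fully implicit coupling with an unsteady flow solver.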
Seismic Wave Propagation on the Tablet Computer
NASA Astrophysics Data System (ADS)
Emoto, K.
2015-12-01
Tablet computers have become widely used in recent years, and their performance is improving year by year. Some have performance comparable to a personal computer of a few years ago with respect to calculation speed and memory size. Convenience and intuitive operation are the advantages of the tablet computer compared to the desktop PC. I developed an iPad application for the numerical simulation of seismic wave propagation. The numerical simulation is based on the 2D finite difference method with the staggered-grid scheme. The number of grid points is 512 x 384 = 196,608. The grid spacing is 200 m in both horizontal and vertical directions; that is, the calculation area is 102 km x 77 km. The time step is 0.01 s. In order to reduce the user waiting time, the image of the wave field is drawn simultaneously with the calculation rather than playing a movie after the whole calculation. P and S wave energies are plotted on the screen every 20 steps (0.2 s). There is a trade-off between a smooth simulation and the resolution of the wave field image. In the current setting, it takes about 30 s to calculate 10 s of wave propagation (50 image updates). The seismogram at the receiver is displayed below the wave field and updated in real time. The default medium structure consists of 3 layers. The layer boundary is defined by 10 movable points with linear interpolation, so users can intuitively change the boundary to an arbitrary shape by moving the points. Users can also easily change the source and receiver positions, and a favorite structure can be saved and loaded. For advanced simulation, users can introduce random velocity fluctuations whose spectrum can be set to an arbitrary shape. By using this application, everyone can simulate seismic wave propagation without special knowledge of the elastic wave equation. So far, the Japanese version of the application has been released on the App Store.
Now I am preparing the English version.
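The staggered-grid finite-difference update at the heart of such a simulation can be sketched in 1-D; the app itself is 2-D elastic, so this is a stripped-down analogue with illustrative material parameters:

```python
import numpy as np

def staggered_step(v, s, rho, mu, dt, dx):
    """One leapfrog time step of a 1-D velocity-stress finite-difference
    scheme on a staggered grid: stress s[i] lives midway between
    velocity nodes v[i] and v[i+1]."""
    s[:-1] += dt * mu * np.diff(v) / dx                 # stress from velocity gradient
    v[1:-1] += dt * (s[1:-1] - s[:-2]) / (rho * dx)     # velocity from stress gradient
    return v, s

# a Gaussian velocity pulse splits into left- and right-going waves
x = np.arange(200.0)
v = np.exp(-((x - 100.0) ** 2) / 25.0)
s = np.zeros_like(v)
for _ in range(80):             # wave speed sqrt(mu/rho) = 1, CFL number 0.5
    v, s = staggered_step(v, s, rho=1.0, mu=1.0, dt=0.5, dx=1.0)
```

By d'Alembert's solution the initial pulse separates into two halves of roughly half the original amplitude, and because the time step respects the CFL condition the scheme stays stable.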
Algebraic grid generation with corner singularities
NASA Technical Reports Server (NTRS)
Vinokur, M.; Lombard, C. K.
1983-01-01
A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques - stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDEs are discussed in an Appendix.
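Transfinite interpolation, one of the prior techniques synthesized here, can be sketched as a Coons patch that fills a structured mesh from its four boundary curves. This minimal version uses linear blending only, without the stretching functions, cubic blending, or corner treatment of the paper:

```python
import numpy as np

def transfinite(bottom, top, left, right):
    """Coons-patch transfinite interpolation: fill an (m, n) grid of 2-D
    points from four boundary curves with matching corners. The two
    linear blends are added and the doubly counted corner bilinear
    term is subtracted."""
    m, n = len(left), len(bottom)
    s = np.linspace(0, 1, n)[None, :, None]   # parameter along bottom/top
    t = np.linspace(0, 1, m)[:, None, None]   # parameter along left/right
    B, T = bottom[None, :, :], top[None, :, :]
    L, R = left[:, None, :], right[:, None, :]
    return ((1 - t) * B + t * T + (1 - s) * L + s * R
            - ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
               + (1 - s) * t * top[0] + s * t * top[-1]))

# sanity case: the four edges of the unit square give a uniform lattice
x = np.linspace(0.0, 1.0, 5)
bottom = np.stack([x, np.zeros(5)], axis=1)   # y = 0 edge
top    = np.stack([x, np.ones(5)], axis=1)    # y = 1 edge
left   = np.stack([np.zeros(5), x], axis=1)   # x = 0 edge
right  = np.stack([np.ones(5), x], axis=1)    # x = 1 edge
mesh = transfinite(bottom, top, left, right)  # shape (5, 5, 2)
```

Replacing the linear parameters s and t with stretching functions, and the linear blends with cubic blending functions that also honor boundary normal derivatives, recovers the family of schemes the procedure builds on.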
NASA Technical Reports Server (NTRS)
Iyer, V.; Harris, J. E.
1987-01-01
The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to those from conventional interpolation procedures.
NASA Astrophysics Data System (ADS)
Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.
2017-11-01
Harmonics pose a great threat to the safe and economical operation of power grids, so it is critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for power system harmonic analysis. However, the barrier (picket-fence) effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often degrade the accuracy of harmonic analysis. This paper examines a new approach to harmonic analysis based on deriving modifier formulas for frequency, phase angle, and amplitude, utilizing the Nuttall-Kaiser window double-spectrum-line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
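The double-spectrum-line idea can be illustrated with a plain Hann window, for which the two-line frequency correction has a simple closed form; the paper's Nuttall-Kaiser window would require different modifier formulas, so the window choice here is an assumption for illustration:

```python
import numpy as np

def hann_two_line(signal, fs):
    """Estimate the frequency of a dominant tone: window with a Hann
    window, locate the peak FFT bin, then interpolate between the peak
    and its larger neighbour. For the Hann mainlobe the amplitude ratio
    r of adjacent bins satisfies r = (1 + d) / (2 - d), where d is the
    fractional bin offset, so d = (2r - 1) / (r + 1)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1        # peak bin (skip DC/Nyquist)
    if spec[k + 1] >= spec[k - 1]:            # true frequency above bin k
        r = spec[k + 1] / spec[k]
        delta = (2 * r - 1) / (r + 1)
    else:                                     # true frequency below bin k
        r = spec[k - 1] / spec[k]
        delta = -(2 * r - 1) / (r + 1)
    return (k + delta) * fs / n

# asynchronously sampled tone: 123.4 Hz falls between FFT bins
fs = 1000.0
t = np.arange(2048) / fs
f_est = hann_two_line(np.sin(2 * np.pi * 123.4 * t), fs)
```

Without the correction the estimate would be quantized to the nearest bin (about 0.49 Hz apart here); the two-line interpolation recovers the off-bin frequency, which is exactly the leakage/picket-fence problem the paper's modifier formulas address.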
Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data
NASA Technical Reports Server (NTRS)
Bose, Tamal
2000-01-01
A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.