Sample records for interpolating implicit surfaces

  1. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
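
    The variational implicit functions step above can be sketched compactly: constrain a smooth implicit function to be zero on the contour points and to take small signed values at offsets along the contour normals, then extract the zero level set as the reconstructed boundary. Below is a minimal 2D Python sketch using scipy's thin-plate-spline RBF as the variational interpolant; the circular contour, its normals, and the offset eps are illustrative assumptions, not the paper's data or parameters.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # On-contour constraint points (a circle standing in for one contour) with
      # implicit value 0, plus off-contour points along the normals with values
      # +/- eps, as in variational implicit (thin-plate) surface fitting.
      theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
      pts = np.c_[np.cos(theta), np.sin(theta)]
      normals = pts / np.linalg.norm(pts, axis=1, keepdims=True)

      eps = 0.1
      centers = np.vstack([pts, pts + eps * normals, pts - eps * normals])
      values = np.concatenate([np.zeros(len(pts)),
                               np.full(len(pts), eps),      # outside: positive
                               np.full(len(pts), -eps)])    # inside: negative

      # The thin-plate-spline RBF solves the minimum-bending-energy (variational)
      # interpolation problem; its zero level set is the reconstructed boundary.
      f = RBFInterpolator(centers, values, kernel='thin_plate_spline')

      grid = np.linspace(-1.5, 1.5, 200)
      X, Y = np.meshgrid(grid, grid)
      F = f(np.c_[X.ravel(), Y.ravel()]).reshape(X.shape)   # sample; contour F = 0
      print("implicit value at the centre (negative = inside):", f([[0.0, 0.0]])[0])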

  2. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge-direction estimation, the local edge directions are represented by length-16 weighting vectors. The weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF model, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  3. Approximating basins of attraction for dynamical systems via stable radial bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavoretto, R.; De Rossi, A.; Perracchione, E.

    2016-06-08

    In the applied sciences it is often necessary to model and monitor the temporal evolution of populations via dynamical systems. In this paper, we focus on the problem of approximating the basins of attraction of such models for each stable equilibrium point. We propose to reconstruct the basins via an implicit interpolant using stable radial bases, obtaining the separating surfaces by partitioning the phase space into disjoint regions. An application to a competition model presenting three coexisting stable equilibria is considered.

  4. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme

    NASA Astrophysics Data System (ADS)

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres

    2007-10-01

    A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure-coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature and logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system, consisting of a 3-D Poisson equation for the omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to an initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.

  5. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential function itself, or its low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
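
    The exponential-fitting idea can be made concrete with a small sketch: choose the two trapezoidal weights so that the rule integrates exp(lam*t) exactly for a stiffness estimate lam. The Python sketch below is an illustration only; the linear test problem, stiffness k, and step size are assumed values, and the closed-form implicit solve works only because the test problem is linear. It shows the fitted rule remaining stable at h*k = 50, far outside the explicit stability limit.

      import numpy as np

      k = 1000.0                                  # stiffness; y = cos(t) is the smooth solution
      f = lambda t, y: -k * (y - np.cos(t)) - np.sin(t)

      def fitted_weights(z):
          """Trapezoidal weights (alpha, beta) exact for y' = lam*y, with z = lam*h."""
          if abs(z) < 1e-8:
              return 0.5, 0.5                     # limit: the ordinary trapezoidal rule
          ez = np.exp(z)
          alpha = (ez - 1.0 - z * ez) / (z * (1.0 - ez))
          return alpha, 1.0 - alpha

      h, t, y = 0.05, 0.0, 2.0                    # y(0) = 2: transient decays like exp(-k*t)
      alpha, beta = fitted_weights(-k * h)        # fit to the dominant eigenvalue -k
      for _ in range(40):                         # h*k = 50, far beyond explicit stability
          # For this linear f the implicit stage has a closed-form solution:
          #   y1 = y + h*(alpha*f(t, y) + beta*f(t + h, y1))
          rhs = y + h * alpha * f(t, y) + h * beta * (k * np.cos(t + h) - np.sin(t + h))
          y = rhs / (1.0 + h * beta * k)
          t += h
      print(f"t = {t:.2f}  y = {y:.4f}  smooth solution = {np.cos(t):.4f}")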

  6. Variational data assimilation with a semi-Lagrangian semi-implicit global shallow-water equation model and its adjoint

    NASA Technical Reports Server (NTRS)

    Li, Y.; Navon, I. M.; Courtier, P.; Gauthier, P.

    1993-01-01

    An adjoint model is developed for variational data assimilation using the 2D semi-Lagrangian semi-implicit (SLSI) shallow-water equation global model of Bates et al., with special attention being paid to the linearization of the interpolation routines. It is demonstrated that with larger time steps the limit of validity of the tangent linear model is curtailed due to the interpolations, especially in regions where sharp gradients in the interpolated variables coincide with a strong advective wind, a synoptic situation common in the high latitudes. This effect is particularly evident near the pole in the Northern Hemisphere during the winter season. Variational data assimilation experiments of the 'identical twin' type, with observations available only at the end of the assimilation period, perform well with this adjoint model. It is confirmed that the computational efficiency of the semi-Lagrangian scheme is preserved during the minimization process associated with the variational data assimilation procedure.

  7. Dundee Biennial Conference on Numerical Analysis held at Dundee Univ (United Kingdom) on 23-26 June 1977

    DTIC Science & Technology

    1987-06-26

    a related class of implicit Runge-Kutta-Nystrom methods. The talk will conclude with a look at some ongoing work... formulae with deferred corrections. In order to perform the deferred correction stage efficiently, a special class of formulae, known as Mono-Implicit... of this type (termed "CBS methods") have been developed which permit a wide variety of convergent interpolations, many of which are unstable in

  8. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamics research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  9. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by elegantly mixing both. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
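
    A toy version of a smoothness-constrained block matcher makes the TME idea concrete: the matching cost is SAD plus a penalty on deviation from a neighbouring block's motion vector, which biases the search toward spatially coherent, true motion. The Python sketch below is not the authors' algorithm; the frame contents, block size, search range, and penalty weight lam are all illustrative assumptions.

      import numpy as np

      def block_match(prev, nxt, bs=8, sr=4, lam=2.0):
          """One MV per block; lam penalises deviation from the left neighbour's MV."""
          H, W = prev.shape
          mvs = np.zeros((H // bs, W // bs, 2), dtype=int)
          for bi in range(H // bs):
              for bj in range(W // bs):
                  blk = prev[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs]
                  pred = mvs[bi, bj-1] if bj > 0 else np.zeros(2, dtype=int)
                  best, best_cost = (0, 0), np.inf
                  for dy in range(-sr, sr + 1):
                      for dx in range(-sr, sr + 1):
                          y0, x0 = bi*bs + dy, bj*bs + dx
                          if not (0 <= y0 <= H - bs and 0 <= x0 <= W - bs):
                              continue
                          sad = np.abs(blk - nxt[y0:y0+bs, x0:x0+bs]).sum()
                          cost = sad + lam * np.abs(np.array([dy, dx]) - pred).sum()
                          if cost < best_cost:
                              best_cost, best = cost, (dy, dx)
                  mvs[bi, bj] = best
          return mvs

      # Synthetic frame pair: a bright square translating by (dy, dx) = (2, 3).
      prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
      nxt = np.zeros((32, 32)); nxt[10:18, 11:19] = 1.0
      print(block_match(prev, nxt)[1, 1], "<- MV of the moving block, expected [2 3]")
      # For MCTFI, the frame at t = 0.5 would then be synthesised by compensating
      # bidirectionally with half of each MV and blending the two predictions.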

  10. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and the Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used as the starting geometry of the next geometry optimization step. The cost of searching for the minimum or transition structure on the interpolated surface and iteratively updating the Hessians is usually negligible compared with most electronic structure single-gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface over a broader range than the local quadratic approximation used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures, both in the gas phase and in solution, show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
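
    The core of such schemes, stepping on a cheap local surrogate built from gradients and iteratively updated Hessians rather than recomputing exact Hessians, can be sketched in a few lines of Python. Below, a BFGS update stands in for the iterative Hessian update and a 2-D Rosenbrock function stands in for the expensive electronic-structure energy; both choices, and the backtracking safeguard, are illustrative assumptions rather than the paper's algorithm.

      import numpy as np

      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      def grad(x):
          return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])

      x = np.array([-1.2, 1.0])
      B = np.eye(2)                        # iteratively updated Hessian approximation
      g = grad(x)
      for it in range(200):
          p = -np.linalg.solve(B, g)       # minimizer of the local quadratic model
          t = 1.0
          while f(x + t * p) > f(x) and t > 1e-6:
              t *= 0.5                     # backtrack so the surrogate step stays sane
          s = t * p
          g_new = grad(x + s)
          yv = g_new - g
          if s @ yv > 1e-12:               # curvature condition keeps B positive definite
              B = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(yv, yv) / (yv @ s)
          x, g = x + s, g_new
          if np.linalg.norm(g) < 1e-8:
              break
      print("minimum ~", x, "in", it + 1, "quasi-Newton steps")   # expect ~ (1, 1)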

  11. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi, E-mail: samtaney@pppl.go; Brandt, Achi

    2010-09-01

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations - so-called 'textbook' multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.

  12. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  13. A deformable surface model for real-time water drop animation.

    PubMed

    Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun

    2012-08-01

    A water drop behaves differently from a large water body because of its strong viscosity and surface tension at small scales. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, by reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at a real-time rate, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders of magnitude faster than existing simulation approaches that generate comparable water drop effects.
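
    The implicit (backward-Euler) mean curvature flow operator mentioned above is what allows large stable time steps in the surface-tension stage. A minimal 2-D Python sketch on a closed polyline, with uniform Laplacian weights and an assumed wobbly-circle test curve, looks like this; a real droplet simulator would use cotangent weights on a triangle mesh instead.

      import numpy as np

      n = 100
      theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
      # Wobbly circle: curvature flow should smooth the wobble, then shrink it.
      r = 1.0 + 0.3 * np.sin(6 * theta)
      X = np.c_[r * np.cos(theta), r * np.sin(theta)]

      # Cyclic discrete Laplacian (uniform weights for brevity).
      L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
      L[0, -1] = L[-1, 0] = 1.0
      h = 2 * np.pi / n                    # parameter spacing
      dt = 5e-3
      A = np.eye(n) - dt * L / h**2        # implicit operator (I - dt * Laplacian)

      for step in range(50):
          X = np.linalg.solve(A, X)        # backward-Euler step: stable for large dt

      print("std of radius after smoothing:", np.std(np.linalg.norm(X, axis=1)))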

  14. Numerical Simulation of Hydrodynamics of a Heavy Liquid Drop Covered by Vapor Film in a Water Pool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, W.M.; Yang, Z.L.; Giri, A.

    2002-07-01

    A numerical study on the hydrodynamics of a droplet covered by a vapor film in a water pool is carried out. Two level set functions are used to implicitly capture the interfaces among three immiscible fluids (melt drop, vapor and coolant). This approach leaves only one set of conservation equations for the three phases. A high-order Navier-Stokes solver, called the Cubic-Interpolated Pseudo-Particle (CIP) algorithm, is employed in combination with the level set approach, which allows large density ratios (up to 1000), surface tension and jumps in viscosity. By this calculation, the hydrodynamic behavior of a melt droplet falling into a volatile coolant is simulated, which is of great significance for revealing the mechanism of steam explosion during a hypothetical severe reactor accident.

  15. Computational aeroelasticity using a pressure-based solver

    NASA Astrophysics Data System (ADS)

    Kamakoti, Ramji

    A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and with previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.

  16. Constructing polyatomic potential energy surfaces by interpolating diabatic Hamiltonian matrices with demonstration on green fluorescent protein chromophore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology

    2014-04-28

    Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited state dynamics is aimed at with multiple electronic states. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surroundings. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with an intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives on each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. With these, we conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable for studying nonadiabatic behaviors of sizeable molecules.

  17. Machine Learning Estimates of Natural Product Conformational Energies

    PubMed Central

    Rupp, Matthias; Bauer, Matthias R.; Wilcken, Rainer; Lange, Andreas; Reutlinger, Michael; Boeckler, Frank M.; Schneider, Gisbert

    2014-01-01

    Machine learning has been used for estimation of potential energy surfaces to speed up molecular dynamics simulations of small systems. We demonstrate that this approach is feasible for significantly larger, structurally complex molecules, taking the natural product Archazolid A, a potent inhibitor of vacuolar-type ATPase, from the myxobacterium Archangium gephyra as an example. Our model estimates energies of new conformations by exploiting information from previous calculations via Gaussian process regression. Predictive variance is used to assess whether a conformation is in the interpolation region, allowing a controlled trade-off between prediction accuracy and computational speed-up. For energies of relaxed conformations at the density functional level of theory (implicit solvent, DFT/BLYP-disp3/def2-TZVP), mean absolute errors of less than 1 kcal/mol were achieved. The study demonstrates that predictive machine learning models can be developed for structurally complex, pharmaceutically relevant compounds, potentially enabling considerable speed-ups in simulations of larger molecular structures. PMID:24453952
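
    The variance-gated use of Gaussian process regression described above follows directly from the standard GP equations: the posterior mean supplies the cheap energy estimate, and the posterior variance decides whether the query lies inside the interpolation region. In the Python sketch below, the 1-D "expensive" function, squared-exponential kernel, length scale, and gating threshold are all illustrative assumptions (the paper works with conformational coordinates and DFT energies).

      import numpy as np

      expensive = lambda x: np.sin(3 * x) + 0.5 * x        # stand-in for a DFT call
      kern = lambda a, b, ell=0.6: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell)**2)

      Xtr = np.linspace(-2.0, 2.0, 12)                     # previously computed points
      ytr = expensive(Xtr)
      K = kern(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))         # jitter for conditioning
      Kinv = np.linalg.inv(K)
      alpha = Kinv @ ytr

      def predict(x):
          ks = kern(x, Xtr)
          mean = ks @ alpha
          var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)   # prior variance = 1
          return mean, var

      for xq in (0.1, 1.7, 3.5):                           # 3.5 lies outside the data
          m, v = predict(np.array([xq]))
          if v[0] < 1e-2:                                  # inside interpolation region
              print(f"x={xq:4.1f}  GP estimate {m[0]:+.3f}  (var {v[0]:.1e})")
          else:                                            # too uncertain: recompute
              print(f"x={xq:4.1f}  fallback to expensive call -> {expensive(xq):+.3f}")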

  18. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital Elevation Models were then produced using five different common interpolation algorithms. Each resultant DEM was differentiated from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. Lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategy exceeded those found between interpolation technique for a specific survey strategy. Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
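
    The interaction between sampling and interpolator that the study quantifies can be reproduced in miniature: scatter samples over a known surface, rebuild the DEM with different interpolators, and difference each result against the reference. In the Python sketch below the synthetic surface, sample count, and evaluation grid are illustrative assumptions; scipy's 'linear' method interpolates on a Delaunay triangulation, i.e. a TIN.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(1)
      surf = lambda x, y: np.sin(2 * x) * np.cos(2 * y) + 0.2 * x   # synthetic terrain

      pts = rng.uniform(0, 3, size=(300, 2))                        # "survey points"
      z = surf(pts[:, 0], pts[:, 1])

      gx, gy = np.meshgrid(np.linspace(0.2, 2.8, 80), np.linspace(0.2, 2.8, 80))
      truth = surf(gx, gy)                                          # TLS-like reference

      for method in ("nearest", "linear", "cubic"):                 # 'linear' ~ TIN
          dem = griddata(pts, z, (gx, gy), method=method)
          err = dem - truth
          print(f"{method:8s} mean abs error {np.nanmean(np.abs(err)):.4f}  "
                f"max abs error {np.nanmax(np.abs(err)):.4f}")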

  1. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as physically reasonable and visually pleasing is examined.
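
    The "bumps or wiggles" problem is easy to demonstrate, and so is its shape-preserving cure (e.g. the Fritsch-Carlson PCHIP scheme associated with this author). In the Python sketch below, the monotone data standing in for an EOS isotherm are an illustrative assumption; the standard cubic spline undershoots near the sharp rise, while the shape-preserving interpolant stays monotone.

      import numpy as np
      from scipy.interpolate import CubicSpline, PchipInterpolator

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([0.0, 0.0, 0.1, 5.0, 5.1])       # monotone, with a sharp rise

      xs = np.linspace(0, 4, 401)
      spline = CubicSpline(x, y)(xs)                # smooth but not shape-preserving
      pchip = PchipInterpolator(x, y)(xs)           # monotonicity-preserving

      print("cubic spline min:", spline.min())      # expected: negative (undershoot)
      print("PCHIP min:       ", pchip.min())       # stays monotone, so >= 0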

  2. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetry property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.

  3. Implicit Three-Dimensional Geo-Modelling Based on HRBF Surface

    NASA Astrophysics Data System (ADS)

    Gou, J.; Zhou, W.; Wu, L.

    2016-10-01

    Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and time-consuming. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. For extended applications in 3D modelling of other kinds of geo-objects, mining ore body models and urban geotechnical engineering stratum models were constructed by this method from drill-hole data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data.

  4. NTS radiological assessment project: comparison of delta-surface interpolation with kriging for the Frenchman Lake region of area 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, T.A. Jr.

    The primary objective of this report is to compare the results of delta-surface interpolation with kriging on four large sets of radiological data sampled in the Frenchman Lake region at the Nevada Test Site. The results of kriging, described in Barnes, Giacomini, Reiman, and Elliott, are very similar to those using the delta-surface interpolant. The other topic studied is reducing the number of sample points while obtaining results similar to those using all of the data. The positive results here suggest that great savings of time and money can be made. Furthermore, the delta-surface interpolant is viewed as a contour map and as a three-dimensional surface. These graphical representations help in the analysis of the large sets of radiological data.

  5. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, in which a covariate such as elevation (Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method, since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison with other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability in high-elevation regions, such as the Sierra Nevada Mountains.

  6. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.

  7. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the view of the entire 360 degrees around the optical axis onto an annular plane, based on flat-cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays an important role in applications such as robot vision, surveillance and virtual reality. An annular image needs to be unwrapped to a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it is too time-consuming to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that takes the characteristics of the panoramic image into account. The result indicates that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic interpolation.

  8. An analytical particle mover for the charge- and energy-conserving, nonlinearly implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2013-08-01

    We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.

  9. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smoothes the field. A new interpolation algorithm was developed that introduces additional supporting grid points in each time interval, between which linear interpolation is later applied in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. To improve the monotonicity behaviour, we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
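
    A greatly simplified Python version of such a remapping shows the key constraint: add one subgrid node per interval and choose its value so that the piecewise-linear profile reproduces every cell integral exactly. The rain-rate values, the zero end conditions, and the crude clipping step below are illustrative assumptions; the paper's algorithm chooses the subgrid degrees of freedom more carefully precisely so that non-negativity and continuity hold without breaking conservation.

      import numpy as np

      # Cell-average precipitation rates over unit time intervals.
      means = np.array([0.5, 2.0, 3.0, 1.0, 0.5])

      # Interval-endpoint values: average of adjacent cell means, zero at the ends.
      v = np.zeros(len(means) + 1)
      v[1:-1] = 0.5 * (means[:-1] + means[1:])

      # Midpoint values chosen so the piecewise-linear profile reproduces each
      # cell integral exactly: (v_i + 2*w_i + v_{i+1}) / 4 = means_i.
      w = 0.5 * (4.0 * means - v[:-1] - v[1:])
      w = np.maximum(w, 0.0)    # crude non-negativity guard (breaks exactness if hit)

      recon = 0.25 * (v[:-1] + 2.0 * w + v[1:])     # trapezoids over the two halves
      print("cell means:   ", means)
      print("reconstructed:", recon)                # identical when no clipping occurs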

  10. Iterative refinement of implicit boundary models for improved geological feature reproduction

    NASA Astrophysics Data System (ADS)

    Martin, Ryan; Boisvert, Jeff B.

    2017-12-01

    Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.

  11. Steady potential solver for unsteady aerodynamic analyses

    NASA Technical Reports Server (NTRS)

    Hoyniak, Dan

    1994-01-01

    Development of a steady flow solver for use with LINFLO was the objective of this report. The solver must be compatible with LINFLO, use a composite mesh, and have transonic capability. The approaches used were: (1) steady flow potential equations written in nonconservative form; (2) Newton's Method; (3) an implicit, least-squares interpolation method to obtain the finite difference equations; and (4) matrix inversion routines from LINFLO. This report was given during the NASA LeRC Workshop on Forced Response in Turbomachinery in August of 1993.

  12. Advance Technology Satellites in the Commercial Environment. Volume 2: Final Report

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A forecast of transponder requirements was obtained. Certain assumptions about system configurations are implicit in this process. The factors included are interpolation of baseline-year values to produce yearly figures, estimation of satellite capture, effects of peak hours and the time-zone staggering of peak hours, circuit requirements for an acceptable grade of service, capacity of satellite transponders (including various compression methods where applicable), and requirements for spare transponders in orbit. The geographical distribution of traffic requirements was estimated.

  13. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including the comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of >10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
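
    The cross-validation machinery behind such error statistics fits in a few lines of Python: hold out each well in turn, re-interpolate, and accumulate ME/MAE/RMSE. The sketch below compares only two of the nine methods (a hand-rolled inverse distance weighting and scipy's thin-plate-spline RBF) on a synthetic water table; the well locations, the trend surface, and the IDW power p = 2 are illustrative assumptions.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 10, size=(60, 2))                    # observation wells
      head = 100 - 1.5 * xy[:, 0] + 0.5 * np.sin(xy[:, 1])     # groundwater levels

      def idw(train_xy, train_z, q, p=2.0):
          d = np.linalg.norm(train_xy - q, axis=1)
          w = 1.0 / np.maximum(d, 1e-12)**p
          return (w @ train_z) / w.sum()

      err_idw, err_rbf = [], []
      for i in range(len(xy)):                                 # leave-one-out loop
          m = np.arange(len(xy)) != i
          err_idw.append(idw(xy[m], head[m], xy[i]) - head[i])
          rbf = RBFInterpolator(xy[m], head[m], kernel='thin_plate_spline')
          err_rbf.append(rbf(xy[i:i+1])[0] - head[i])

      for name, e in (("IDW", err_idw), ("TPS-RBF", err_rbf)):
          e = np.asarray(e)
          print(f"{name:8s} ME {e.mean():+.3f}  MAE {np.abs(e).mean():.3f}  "
                f"RMSE {np.sqrt((e**2).mean()):.3f}")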

  14. 3D Simulation Modeling of the Tooth Wear Process

    PubMed Central

    Dai, Ning; Hu, Jian; Liu, Hao

    2015-01-01

    Severe tooth wear is the most common non-caries dental disease, and it can seriously affect oral health. Studying the tooth wear process is time-consuming and difficult, and technological tools are frequently lacking. This paper presents a novel method of digital simulation modeling that represents a new way to study tooth wear. First, a feature extraction algorithm is used to obtain anatomical feature points of the tooth without attrition. Second, after the alignment of non-attrition areas, the initial homogeneous surface is generated by means of the RBF (Radial Basic Function) implicit surface and then deformed to the final homogeneous by the contraction and bounding algorithm. Finally, the method of bilinear interpolation based on Laplacian coordinates between tooth with attrition and without attrition is used to inversely reconstruct the sequence of changes of the 3D tooth morphology during gradual tooth wear process. This method can also be used to generate a process simulation of nonlinear tooth wear by means of fitting an attrition curve to the statistical data of attrition index in a certain region. The effectiveness and efficiency of the attrition simulation algorithm are verified through experimental simulation. PMID:26241942

  15. Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.

    PubMed

    Prochazka, Ivan; Panek, Petr

    2009-07-01

    A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We are presenting the experiments and results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that the nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on time of the measured events can be expressed as a sparse Fourier series thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved the interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.

  16. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, that is noisy, and that contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy but shows some artifacts in shaded-relief images. (b) Simplicial interpolator - uses the plane through the three points containing the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and does a weighted least-squares surface fit.

  17. Implicit Coupling Approach for Simulation of Charring Carbon Ablators

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Gokcen, Tahir

    2013-01-01

    This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface heating conditions, the implicit approach has to be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and thus the explicit approach appears to be inappropriate, resulting in severe numerical oscillations of the predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory-based simulation of the Stardust forebody heat shield is also conducted. The predicted stagnation point total recession is compared with that predicted using the chemical equilibrium surface assumption.

  18. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition-of-unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  19. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    Radial basis functions φ(r): piecewise smooth (conditionally positive definite) kernels include the monomial |r|^(2m+1) (MN) and the thin-plate spline (TPS) |r|^(2m) ln|r|; infinitely smooth... smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29

  1. Estimation of lunar surface maturity and ferrous oxide from Moon Mineralogy Mapper (M3) data through data interpolation techniques

    NASA Astrophysics Data System (ADS)

    Ajith Kumar, P.; Kumar, Shashi

    2016-04-01

    Surface maturity estimation of the lunar regolith reveals the selenological processes behind the formation of the lunar surface, which may provide vital information regarding the geological evolution of the Earth, since the lunar surface is considered to be 8-9 times older than that of the Earth. Spectral reflectance data from the Moon Mineralogy Mapper (M3), the hyperspectral sensor of Chandrayaan-1, were coupled with the standard weight percentages of FeO from lunar samples returned from the Apollo and Luna landing sites, through data interpolation techniques, to generate weight-percentage FeO maps of the target lunar locations. With the interpolated data, mineral maps were prepared and the results analyzed.

  2. A Critical Comparison of Some Methods for Interpolation of Scattered Data

    DTIC Science & Technology

    1979-12-01

    because faster evaluation of the local interpolants is possible. All things considered, the method of choice here seems to be the Modified Quadratic... topography and other irregular surfaces," J. of Geophysical Research 76 (1971) 1905-1915. [23] HARDY, Rolland L. - "Analytical topographic surfaces by

  3. Development and evaluation of a new 3-D digitization and computer graphic system to study the anatomic tissue and restoration surfaces.

    PubMed

    Dastane, A; Vaidyanathan, T K; Vaidyanathan, J; Mehra, R; Hesby, R

    1996-01-01

    It is necessary to visualize and reconstruct tissue anatomic surfaces accurately for a variety of oral rehabilitation applications, such as surface wear characterization, automated fabrication of dental restorations, and assessing the accuracy of reproduction of impression and die materials. In this investigation, a 3-D digitization and computer-graphics system was developed for surface characterization. The hardware consists of a profiler assembly for digitization in an MTS biomechanical test system with an artificial mouth, an IBM PS/2 Model 70 computer for data processing, and a Hewlett-Packard laser printer for hardcopy output. The software used includes the commercially available Surfer 3-D graphics package, a public-domain data-fitting alignment program, and an in-house Pascal program for intercommunication and some other limited tasks. Surfaces were digitized before and after rotation by an angular displacement, the digital data were interpolated by Surfer to provide a data grid, and the surfaces were computer-graphically reconstructed. Misaligned surfaces were aligned by the data-fitting alignment software under different choices of parameters. The effect of different interpolation parameters (e.g. grid size, method of interpolation) and of the extent of rotation on the alignment accuracy was determined. The results indicate that improved alignment accuracy results from optimization of the interpolation parameters and minimization of the initial misorientation between the digitized surfaces. The method provides important advantages for surface reconstruction and visualization, such as overlay of sequentially generated surfaces and accurate alignment of pairs of surfaces with small misalignment.

  5. The Effect of Elevation Bias in Interpolated Air Temperature Data Sets on Surface Warming in China During 1951-2015

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin

    2018-02-01

    Although gridded air temperature data sets share much of the same observations, different rates of warming can be detected due to the different approaches employed for considering elevation signatures in the interpolation process. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming hiatus period in China during 1951-2015. A suspicious cooling trend in the raw interpolated air temperature time series is found in the 1950s, of which 91% can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference to elevation difference as the bulk lapse rate, which is -5.6°C/km overall and tends to be larger in magnitude (-8.7°C/km) in dry regions but smaller (-2.4°C/km) in wet regions. Compared to independent experimental observations, we find that the estimated monthly bulk lapse rates capture the elevation bias well. Significant improvement can be achieved by adjusting the interpolated original temperature time series using the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature in the interpolation of site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
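
    A minimal sketch of the bulk-lapse-rate idea: regress temperature differences on elevation differences between station pairs and use the slope to remove an artificial elevation signature from an interpolated value. All numbers are synthetic, and the paper's regional and monthly stratification is omitted.

    ```python
    import numpy as np

    dz = np.array([0.2, 0.5, 1.1, 1.8, 2.4])         # elevation differences (km)
    dT = np.array([-1.0, -2.9, -6.3, -10.2, -13.1])  # temperature differences (deg C)

    slope, intercept = np.polyfit(dz, dT, 1)         # bulk lapse rate (deg C/km)
    print(f"bulk lapse rate ~ {slope:.1f} deg C/km")

    dz_art = 0.4                                     # artificial elevation change (km)
    t_raw = 12.0                                     # raw interpolated temperature (deg C)
    t_adj = t_raw - slope * dz_art                   # remove the elevation signature
    print(f"adjusted temperature: {t_adj:.1f} deg C")
    ```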

  6. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
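
    A sketch of the basic arc-length redistribution step on a single curve: compute the cumulative arc length of a finely sampled parametric curve and invert it so that grid points follow a prescribed (here uniform) arc-length distribution. The curve itself is illustrative.

    ```python
    import numpy as np

    t = np.linspace(0.0, 1.0, 1000)                      # fine parameter sampling
    x, y = np.cos(2 * np.pi * t), np.sin(np.pi * t**2)   # example parametric curve

    ds = np.hypot(np.diff(x), np.diff(y))                # segment lengths
    s = np.concatenate([[0.0], np.cumsum(ds)])           # cumulative arc length s(t)
    s_target = np.linspace(0.0, s[-1], 21)               # desired uniform distribution

    t_new = np.interp(s_target, s, t)                    # invert s(t) by interpolation
    x_new, y_new = np.interp(t_new, t, x), np.interp(t_new, t, y)
    ```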

  7. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, sample points are not always consistent in continuity, and traditional interpolation methods often fail to faithfully reflect the shape information carried by the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed around each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown coefficients are obtained by minimizing objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar data. Experimental results for the new surface are given.
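
    The paper builds its own C1 quadratic patches; as a readily available analogue of the overall pipeline (project, triangulate, fit a C1 interpolant), the sketch below uses SciPy's CloughTocher2DInterpolator, which constructs a C1 piecewise cubic on a Delaunay triangulation of scattered points. Data are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import CloughTocher2DInterpolator

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, size=(200, 2))        # scattered (projected) data points
    z = np.sin(4 * pts[:, 0]) * pts[:, 1]         # sample "terrain" values

    interp = CloughTocher2DInterpolator(pts, z)   # C1 interpolant on the triangulation
    print(interp(0.5, 0.5))                       # smooth surface value at a query point
    ```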

  8. Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation

    NASA Astrophysics Data System (ADS)

    Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel

    2017-09-01

    Optical freeform surfaces are very popular today in fields such as lighting systems, sensors, photovoltaic concentrators, and others. Such surfaces allow systems of a new quality to be obtained with a reduced number of optical components, ensuring attractive consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using computer simulation and cubic spline interpolation.

  9. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
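
    An illustrative sketch of two ingredients of such a method: an implicit surface derived from an atomic description (here simply the signed distance to a union of spheres) and selection of the Cartesian narrowband nodes around its zero level set. Centers, radii, and the band width are hypothetical, and the paper's level set construction steps are omitted.

    ```python
    import numpy as np

    centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.7, 1.2, 0.0]])  # atom centers
    radii = np.array([1.0, 0.8, 0.9])                                        # atom radii

    g = np.linspace(-2.0, 3.0, 64)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")

    d = np.full(X.shape, np.inf)           # signed distance to the union of spheres
    for c, r in zip(centers, radii):
        d = np.minimum(d, np.sqrt((X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2) - r)

    band = 0.15                            # narrowband half-width
    nodes = np.argwhere(np.abs(d) < band)  # Cartesian nodes hosting the unknowns
    print(f"{len(nodes)} narrowband nodes out of {d.size}")
    ```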

  10. LADAR Range Image Interpolation Exploiting Pulse Width Expansion

    DTIC Science & Technology

    2012-03-22

    normal to each other. The LADAR model needs to include the complete BRDF model covered in Section 2.1.3, which includes speckle reflection as well as… the gradient of a surface. This study estimates the gradient of the surface of an object from a modeled LADAR return pulse that includes accurate… probabilistic noise models. The range and surface gradient estimations are incorporated into a novel interpolator that facilitates an effective three…

  11. A machine learning approach to the potential-field method for implicit modeling of geological structures

    NASA Astrophysics Data System (ADS)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates which side of a geological boundary a given point belongs to, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.

  12. Examples of grid generation with implicitly specified surfaces using GridPro (TM)/az3000. 1: Filleted multi-tube configurations

    NASA Technical Reports Server (NTRS)

    Cheng, Zheming; Eiseman, Peter R.

    1995-01-01

    With examples, we illustrate how implicitly specified surfaces can be used for grid generation with GridPro/az3000. The particular examples address two questions: (1) How do you model intersecting tubes with fillets? and (2) How do you generate grids inside the intersected tubes? The implication is much more general. With the results in a forthcoming paper which develops an easy-to-follow procedure for implicit surface modeling, we provide a powerful means for rapid prototyping in grid generation.
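
    As a general illustration of implicitly specified intersecting tubes with a fillet (not GridPro's own surface definitions), the sketch below blends two implicit cylinders using an R-function-style union plus a displacement term that rounds the intersection; the blend parameters are arbitrary.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def tube_x(y, z, r):  # tube along the x-axis: f < 0 inside, f = 0 on the wall
        return np.sqrt(y**2 + z**2) - r

    def tube_y(x, z, r):  # tube along the y-axis
        return np.sqrt(x**2 + z**2) - r

    def filleted_union(f1, f2, a=0.4, b=0.5):
        # R-function union with a displacement term that rounds the intersection
        return f1 + f2 - np.sqrt(f1**2 + f2**2) - a / (1 + (f1 / b)**2 + (f2 / b)**2)

    g = np.linspace(-2, 2, 400)
    X, Y = np.meshgrid(g, g)
    F = filleted_union(tube_x(Y, 0.0, 0.8), tube_y(X, 0.0, 0.8))  # z = 0 slice

    plt.contour(X, Y, F, levels=[0.0])   # filleted outline of the intersecting tubes
    plt.gca().set_aspect("equal")
    plt.show()
    ```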

  13. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).

  14. Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence

    NASA Astrophysics Data System (ADS)

    Belli, Emily Ann

    Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet retains time step advantages similar to those of a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale E×B shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of χ ~ κ^(-1.5) or κ^(-2) (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations explain a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the enhancement of zonal flows with shaping, which is observed in the GS2 simulations. Finally, a local linear trial-function-based gyrokinetic code is developed to aid in fast scoping studies of gyrokinetic linear stability. This code is successfully benchmarked against the full GS2 code in the collisionless, electrostatic limit, as well as in the more general electromagnetic description with higher-order Hermite basis functions.

  15. Analysis of warping deformation modes using higher order ANCF beam element

    NASA Astrophysics Data System (ADS)

    Orzechowski, Grzegorz; Shabana, Ahmed A.

    2016-02-01

    Most classical beam theories assume that the beam cross section remains a rigid surface under an arbitrary loading condition. However, in absolute nodal coordinate formulation (ANCF) continuum-based beams, this assumption can be relaxed, allowing deformation modes that couple the cross-section deformation with beam bending, torsion, and/or elongation to be captured. The deformation modes captured by ANCF finite elements depend on the interpolating polynomials used. The most widely used spatial ANCF beam element employs linear approximation in the transverse direction, thereby restricting the cross-section deformation and leading to locking problems. The objective of this investigation is to examine the behavior of a higher order ANCF beam element that includes quadratic interpolation in the transverse directions. This higher order element allows capturing warping and non-uniform stretching distributions. Furthermore, it allows for increasing the degree of continuity at the element interface. It is shown in this paper that the higher order ANCF beam element can be used effectively to capture warping and eliminate the Poisson locking that characterizes lower order ANCF finite elements. It is also shown that increasing the degree of continuity requires special attention in order to obtain acceptable results. Because higher order elements can be more computationally expensive than lower order elements, the use of reduced integration for evaluating the stress forces and the use of explicit and implicit numerical integration to solve the nonlinear dynamic equations of motion are investigated in this paper. It is shown that the use of some of these integration methods can be very effective in reducing the CPU time without adversely affecting the solution accuracy.

  16. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize the impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.

  17. Proceedings of the NASA Workshop on Surface Fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1982-01-01

    Surface fitting techniques and their utilization are addressed. Surface representation, approximation, and interpolation are discussed, along with statistical estimation problems associated with surface fitting.

  18. New gridded database of clear-sky solar radiation derived from ground-based observations over Europe

    NASA Astrophysics Data System (ADS)

    Bartok, Blanka; Wild, Martin; Sanchez-Lorenzo, Arturo; Hakuba, Maria Z.

    2017-04-01

    Since aerosols modify the entire energy balance of the climate system through different processes, assessments of aerosol multiannual variability are much needed by the climate modelling community. Because of the scarcity of long-term direct aerosol measurements, the retrieval of aerosol information from other types of observations or from satellite measurements is very relevant. One approach frequently used in the literature is the analysis of clear-sky solar radiation, which offers a better overview of changes in aerosol content. In this study, two empirical methods are first elaborated in order to separate clear-sky situations from observed values of surface solar radiation available at the World Radiation Data Center (WRDC), St. Petersburg. The daily data have been checked for temporal homogeneity by applying the MASH method (Szentimrey, 2003). In the first approach, clear-sky situations are detected based on the clearness index, namely the ratio of the surface solar radiation to the extraterrestrial solar irradiation. In the second approach, the observed values of surface solar radiation are compared to the climatology of clear-sky surface solar radiation calculated by the MAGIC radiation code (Mueller et al., 2009). In both approaches the clear-sky radiation values depend strongly on the applied thresholds. In order to eliminate this methodological error, a verification of the clear-sky detection is envisaged through a comparison with the values obtained by a high-time-resolution clear-sky detection and interpolation algorithm (Long and Ackerman, 2000), making use of the high-quality data from the Baseline Surface Radiation Network (BSRN). As a consequence, clear-sky data series are obtained for 118 European meteorological stations. Next, a first attempt has been made to interpolate the point-wise clear-sky radiation data by applying the MISH (Meteorological Interpolation based on Surface Homogenized Data Basis) method for the spatial interpolation of surface meteorological elements, developed at the Hungarian Meteorological Service (Szentimrey, 2007). In this way a new gridded database of clear-sky solar radiation is created, suitable for further investigations regarding the role of aerosols in the energy budget, and also for validation of climate model outputs. References: 1. Long CN, Ackerman TP. 2000. Identification of clear skies from broadband pyranometer measurements and calculation of downwelling shortwave cloud effects, J. Geophys. Res., 105(D12), 15609-15626, doi:10.1029/2000JD900077. 2. Mueller R, Matsoukas C, Gratzki A, Behr H, Hollmann R. 2009. The CM-SAF operational scheme for the satellite-based retrieval of solar surface irradiance - a LUT-based eigenvector hybrid approach, Remote Sensing of Environment, 113(5), 1012-1024, doi:10.1016/j.rse.2009.01.012. 3. Szentimrey T. 2014. Multiple Analysis of Series for Homogenization (MASHv3.03), Hungarian Meteorological Service, https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/ 4. Szentimrey T, Bihari Z. 2014. Meteorological Interpolation based on Surface Homogenized Data Basis (MISHv1.03), https://www.met.hu/en/omsz/rendezvenyek/homogenization_and_interpolation/software/
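
    A minimal sketch of the first (clearness-index) approach, assuming daily sums and an illustrative threshold of 0.7; the record above stresses that the detected clear-sky values depend strongly on this choice.

    ```python
    import numpy as np

    ssr = np.array([28.1, 12.4, 30.2, 25.0, 8.9])    # observed daily SSR (MJ/m2)
    ext = np.array([38.0, 38.2, 38.4, 38.6, 38.8])   # extraterrestrial irradiation (MJ/m2)

    kt = ssr / ext                                   # clearness index
    clear = kt > 0.7                                 # illustrative clear-sky threshold
    print(np.flatnonzero(clear))                     # indices of clear-sky days
    ```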

  19. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results compared with the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
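
    A sketch of derivative estimation from an RBF response surface: fit a surrogate to scattered samples of a test response and differentiate the surrogate by central differences, comparing against the analytic derivative. SciPy's RBFInterpolator stands in for the paper's RBF construction; the test function is illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def f(x):                                    # test response
        return np.sin(x[:, 0]) * np.exp(-x[:, 1] ** 2)

    rng = np.random.default_rng(2)
    X = rng.uniform(-2, 2, size=(200, 2))
    surf = RBFInterpolator(X, f(X), kernel="thin_plate_spline")

    x0, h = np.array([[0.5, 0.3]]), 1e-4
    dfdx = (surf(x0 + [h, 0.0]) - surf(x0 - [h, 0.0])) / (2 * h)   # surrogate d/dx1
    print(dfdx[0], np.cos(0.5) * np.exp(-0.09))                    # vs analytic value
    ```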

  20. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, an Ashtech ProMark 500 GPS base, and a Thales Z-Max rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF method, and the best geostatistical method was ordinary cokriging. The root mean square error of both methods was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  1. A semi-implicit finite element method for viscous lipid membranes

    NASA Astrophysics Data System (ADS)

    Rodrigues, Diego S.; Ausas, Roberto F.; Mut, Fernando; Buscaglia, Gustavo C.

    2015-10-01

    A finite element formulation to approximate the behavior of lipid membranes is proposed. The mathematical model incorporates tangential viscous stresses and bending elastic forces, together with the inextensibility constraint and the enclosed volume constraint. The membrane is discretized by a surface mesh made up of planar triangles, over which a mixed formulation (velocity-curvature) is built based on the viscous bilinear form (Boussinesq-Scriven operator) and the Laplace-Beltrami identity relating position and curvature. A semi-implicit approach is then used to discretize in time, with piecewise linear interpolants for all variables. Two stabilization terms are needed: The first one stabilizes the inextensibility constraint by a pressure-gradient-projection scheme (Codina and Blasco (1997) [33]), the second couples curvature and velocity to improve temporal stability, as proposed by Bänsch (2001) [36]. The volume constraint is handled by a Lagrange multiplier (which turns out to be the internal pressure), and an analogous strategy is used to filter out rigid-body motions. The nodal positions are updated in a Lagrangian manner according to the velocity solution at each time step. An automatic remeshing strategy maintains suitable refinement and mesh quality throughout the simulation. Numerical experiments show the convergent and robust behavior of the proposed method. Stability limits are obtained from numerous relaxation tests, and convergence with mesh refinement is confirmed both in the relaxation transient and in the final equilibrium shape. Virtual tweezing experiments are also reported, computing the dependence of the deformed membrane shape on the tweezing velocity (a purely dynamical effect). For sufficiently high velocities, a tether develops which shows good agreement, both in its final radius and in its transient behavior, with available analytical solutions. Finally, simulation results of a membrane subject to the simultaneous action of six tweezers illustrate the robustness of the method.

  2. Jacobi-Gauss-Lobatto collocation method for the numerical solution of 1+1 nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.

    2014-03-01

    A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.

  3. Data-Driven Haptic Modeling and Rendering of Viscoelastic and Frictional Responses of Deformable Objects.

    PubMed

    Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon

    2016-01-01

    In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.

  4. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
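
    A sketch of the replacement idea: fit a tensor Legendre (orthogonal polynomial) basis to a tabulated property by least squares and evaluate the fit directly instead of doing double interpolation in the table. The "equation-of-state" table below is synthetic, and the coordinates are assumed pre-scaled to [-1, 1].

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(3)
    x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)  # scaled table coordinates
    z = np.exp(0.5 * x) * (1.0 + 0.3 * y**2)                 # tabulated property

    V = L.legvander2d(x, y, [4, 4])                # tensor Legendre basis, degree 4 x 4
    coef, *_ = np.linalg.lstsq(V, z, rcond=None)   # least-squares coefficients

    zq = L.legval2d(0.2, -0.5, coef.reshape(5, 5)) # evaluate the fit at a query point
    print(zq, np.exp(0.1) * (1.0 + 0.3 * 0.25))    # fitted vs exact value
    ```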

  5. Development of a Boundary Layer Property Interpolation Tool in Support of Orbiter Return To Flight

    NASA Technical Reports Server (NTRS)

    Greene, Francis A.; Hamilton, H. Harris

    2006-01-01

    A new tool was developed to predict the boundary layer quantities required by several physics-based predictive/analytic methods that assess damaged Orbiter tile. This new tool, the Boundary Layer Property Prediction (BLPROP) tool, supplies boundary layer values used in correlations that determine boundary layer transition onset and surface heating-rate augmentation/attenuation factors inside tile gouges (i.e., cavities). BLPROP interpolates through a database of computed solutions and provides boundary layer and wall data (δ, θ, Re_θ/M_e, Re_θ, P_w, and q_w) based on user-input surface location and free-stream conditions. Surface locations are limited to the Orbiter's windward surface. Constructed using predictions from an inviscid/boundary-layer method and benchmark viscous CFD, the computed database covers the hypersonic continuum flight regime based on two reference flight trajectories. First-order one-dimensional Lagrange interpolation accounts for Mach number and angle-of-attack variations, whereas non-dimensional normalization accounts for differences between the reference and input Reynolds numbers. Employing the same computational methods used to construct the database, solutions at other trajectory points taken from previous STS flights were computed; these results validate the BLPROP algorithm. Percentage differences between interpolated and computed values are presented and are used to establish the level of uncertainty of the new tool.
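
    A minimal sketch of first-order one-dimensional Lagrange (i.e., linear) interpolation applied successively in angle of attack and then Mach number through a small database; the table values are invented, and the Reynolds-number normalization step is omitted.

    ```python
    import numpy as np

    mach = np.array([6.0, 10.0, 14.0, 18.0])           # database Mach numbers
    alpha = np.array([20.0, 30.0, 40.0])               # database angles of attack (deg)
    theta = np.array([[1.2, 1.4, 1.7],                 # boundary layer quantity
                      [1.0, 1.2, 1.5],                 # rows: Mach, cols: alpha
                      [0.9, 1.1, 1.3],
                      [0.8, 1.0, 1.2]])

    def interp2(m_q, a_q):
        # interpolate along alpha at every Mach, then along Mach
        at_each_mach = [np.interp(a_q, alpha, row) for row in theta]
        return np.interp(m_q, mach, at_each_mach)

    print(interp2(12.0, 33.0))
    ```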

  6. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods of interpolation, whose results vary with environmental conditions and input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for creating elevations. The results show that AI methods have high potential in the interpolation of elevations, and that using artificial network algorithms for the interpolation, together with optimisation of the IDW method by GA, allows highly precise elevations to be estimated.
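
    A minimal Inverse Distance Weighting (IDW) interpolator, the baseline method optimised in the study above; the power parameter p is the kind of quantity a GA would tune. Points and elevations are synthetic.

    ```python
    import numpy as np

    def idw(xy_known, z_known, xy_query, p=2.0):
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
        d = np.maximum(d, 1e-12)          # guard against division by zero at samples
        w = 1.0 / d**p                    # inverse-distance weights
        return (w @ z_known) / w.sum(axis=1)

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    elev = np.array([100.0, 120.0, 110.0, 90.0])
    print(idw(pts, elev, np.array([[0.5, 0.5]])))  # interpolated elevation
    ```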

  7. Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets

    PubMed Central

    Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.

    2011-01-01

    Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227

  8. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
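
    A concrete instance of the Boolean sum of operators is the bilinearly blended Coons patch, P1 ⊕ P2 = P1 + P2 - P1P2, which interpolates four boundary curves; the curves below are illustrative but corner-compatible.

    ```python
    import numpy as np

    # four boundary curves of the patch (illustrative), corner-compatible
    def c0(u): return np.stack([u, 0*u, np.sin(np.pi*u)], axis=-1)  # edge v = 0
    def c1(u): return np.stack([u, 1+0*u, 0*u], axis=-1)            # edge v = 1
    def d0(v): return np.stack([0*v, v, 0*v], axis=-1)              # edge u = 0
    def d1(v): return np.stack([1+0*v, v, 0*v], axis=-1)            # edge u = 1

    def coons(U, V):
        u, v = U[..., None], V[..., None]
        P1 = (1-v)*c0(U) + v*c1(U)                      # interpolate between v-edges
        P2 = (1-u)*d0(V) + u*d1(V)                      # interpolate between u-edges
        P12 = ((1-u)*(1-v)*c0(0.0) + u*(1-v)*c0(1.0)    # bilinear corner interpolant
               + (1-u)*v*c1(0.0) + u*v*c1(1.0))
        return P1 + P2 - P12                            # Boolean sum P1 (+) P2

    U, V = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    S = coons(U, V)   # (20, 20, 3) points; the patch matches all four edges
    ```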

  9. Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation

    PubMed Central

    Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed

    2012-01-01

    This paper presents a novel, real-time defect detection system, based on a best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186
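
    A sketch of the best-fit polynomial idea: fit a low-order polynomial to a surface profile and flag points whose residual exceeds a tolerance as defects. Profile, defect, and tolerance below are synthetic.

    ```python
    import numpy as np

    x = np.linspace(0, 1, 200)
    profile = 0.05 * x + 0.002 * np.sin(40 * x)        # nominal curved surface
    profile[90:95] += 0.03                             # a popped-up blob defect

    coef = np.polyfit(x, profile, deg=3)               # best-fit polynomial
    residual = profile - np.polyval(coef, x)

    defects = np.flatnonzero(np.abs(residual) > 0.01)  # tolerance threshold
    print(defects)                                     # indices around the blob
    ```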

  10. Spatial Interpolation of Fine Particulate Matter Concentrations Using the Shortest Wind-Field Path Distance

    PubMed Central

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
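
    A compact sketch of the workflow under stated simplifications: the cost grid below is random rather than derived from an interpolated wind field and a Gaussian dispersion model, Dijkstra's algorithm supplies shortest-path distances from each station, and those distances replace Euclidean ones in IDW.

    ```python
    import heapq
    import numpy as np

    def dijkstra(cost, src):
        """Accumulated least-cost distance from src to every cell (4-neighbour grid)."""
        ny, nx = cost.shape
        dist = np.full((ny, nx), np.inf)
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, (i, j) = heapq.heappop(pq)
            if d > dist[i, j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    nd = d + 0.5 * (cost[i, j] + cost[ni, nj])   # edge weight
                    if nd < dist[ni, nj]:
                        dist[ni, nj] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
        return dist

    rng = np.random.default_rng(4)
    cost = 1.0 + rng.random((50, 50))           # stand-in wind-field cost surface
    stations = [(5, 5), (40, 10), (20, 45)]     # monitoring sites (grid indices)
    pm = np.array([80.0, 120.0, 60.0])          # observed PM concentrations (ug/m3)

    d = np.stack([dijkstra(cost, s) for s in stations])    # path distances to all cells
    w = 1.0 / np.maximum(d, 1e-9) ** 2                     # IDW weights with p = 2
    surface = (w * pm[:, None, None]).sum(0) / w.sum(0)    # interpolated concentration field
    ```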

  12. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
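
    A sketch of the comparison methodology using leave-one-out cross-validated RMSE; SciPy's griddata methods stand in for the Kriging, MQ, TPS, and IDP implementations assessed in the study, and the soundings are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(5)
    pts = rng.uniform(0, 10, size=(150, 2))                 # sounding positions (x, y)
    depth = -20 + 2 * np.sin(pts[:, 0]) + 0.5 * pts[:, 1]   # synthetic depths (m)

    for method in ("nearest", "linear", "cubic"):
        errs = []
        for k in range(len(pts)):
            keep = np.arange(len(pts)) != k
            zk = griddata(pts[keep], depth[keep], pts[k:k+1], method=method)
            if np.isfinite(zk[0]):                          # skip hull-edge failures
                errs.append(zk[0] - depth[k])
        print(method, "RMSE:", np.sqrt(np.mean(np.square(errs))))
    ```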

  13. An Extended Kriging Method to Interpolate Near-Surface Soil Moisture Data Measured by Wireless Sensor Networks

    PubMed Central

    Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng

    2017-01-01

    In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method that interpolates with the aid of remote sensing images. The underlying idea is to extend traditional Kriging by introducing spectral variables and operating on a combined spatial-spectral space. The algorithm was applied to WSN-measured soil moisture data from the HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods were applied: Ordinary Kriging (OK), which used WSN data only, and Co-kriging and KED, both of which integrated remote sensing data as a covariate. Visual inspection indicates that the result from Extended Kriging shows more spatial detail than those of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351
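
    For reference, a compact ordinary kriging sketch (the baseline that Extended Kriging generalizes): an assumed exponential variogram, the kriging system with a Lagrange multiplier, and prediction as a weighted sum of observations. Variogram parameters and data are illustrative.

    ```python
    import numpy as np

    def variogram(h, sill=1.0, rang=2.0):          # assumed exponential model
        return sill * (1.0 - np.exp(-3.0 * h / rang))

    def ordinary_kriging(xy, z, xq):
        n = len(xy)
        H = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(H)
        A[-1, -1] = 0.0
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - xq, axis=-1))
        w = np.linalg.solve(A, b)                  # n weights plus Lagrange multiplier
        return w[:n] @ z

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
    z = np.array([0.30, 0.25, 0.35, 0.20, 0.28])   # soil moisture observations
    print(ordinary_kriging(xy, z, np.array([0.4, 0.6])))
    ```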

  14. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

    Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension of standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in the use of a ray interpolation scheme based on approximating the wave front as a "locally spherical" surface, and of a "first arrival mode", which reduces computation times where only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient, the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolation of the new ray. Our WFC technique is illustrated using a realistic velocity model, based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles and geometrical spreading factors, all of which are interpolated on to a regular grid. We compare geometrical spreading factors calculated using two methods: using the ray Jacobian and by taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values, which can be spread over a large area, as only a few initial rays are traced in WFC. We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.

  15. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz; Boocock, Mark G.

    2016-03-15

    Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a "near real-time" 3D reconstruction of the articular cartilage of the human knee from MR images which is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as "contour interpolation (CI)" with radial basis functions (RBFs), which they describe as "CI-RBFs." The authors also present a spline boundary correction which further enhances the volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr's RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume estimation of a typical commercial package (3D DOCTOR), Carr's RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with a Dice similarity coefficient, sensitivity, and specificity measure, providing high agreement to manual segmentation. Conclusions: The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr's RBF method, 3D DOCTOR, and a manual benchmark method in accuracy and significantly improves upon Carr's RBF method in data requirement and computational speed. In addition, the visualization tool has been designed to quickly segment MR images requiring only four mouse clicks per MR image slice.

  16. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary method. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique kept the surface temperature bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step-size control techniques.
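
    A sketch of the linearized-boundary formulation: fully implicit (backward Euler) conduction in a 1-D slab whose heated surface radiates, with the T^4 term linearized about the previous step as T^4 ≈ 4*Told^3*T - 3*Told^4. Material properties and heating are illustrative, not actual TPS values.

    ```python
    import numpy as np

    N, dx, dt = 50, 1e-3, 0.05               # nodes, spacing (m), time step (s)
    k, rho, cp = 0.05, 500.0, 1000.0         # conductivity, density, heat capacity
    eps, sigma, q_in = 0.8, 5.670e-8, 5e4    # emissivity, S-B constant, heating (W/m2)
    r = (k / (rho * cp)) * dt / dx**2

    T = np.full(N, 300.0)                    # initial temperature (K)
    for step in range(2000):
        A = np.zeros((N, N)); b = T.copy()
        for i in range(1, N - 1):            # interior nodes, fully implicit
            A[i, i-1], A[i, i], A[i, i+1] = -r, 1 + 2*r, -r
        c = rho * cp * dx / (2 * dt)         # surface half-cell capacity
        h_rad = 4 * eps * sigma * T[0]**3    # linearized radiation "conductance"
        A[0, 0], A[0, 1] = c + k/dx + h_rad, -k/dx
        b[0] = c * T[0] + q_in + 3 * eps * sigma * T[0]**4
        A[-1, -1], A[-1, -2] = 1 + 2*r, -2*r           # adiabatic back face
        T = np.linalg.solve(A, b)
    print(f"surface temperature after {2000*dt:.0f} s: {T[0]:.1f} K")
    ```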

  17. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It is also the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  18. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in the CNC (Computer Numerical Control) system we are developing. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. Based on the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolation cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, together with how the interpolator is integrated into our CNC system. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
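
    A first-order Taylor feedrate interpolation sketch for a parametric tool path, u_{k+1} = u_k + V*Ts/|C'(u_k)|; a cubic B-spline fitted with SciPy's splprep stands in for the tool-tip NURBS curve, and the feedrate and servo cycle values are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    pts = np.array([[0.0, 1.0, 2.0, 3.0, 4.0],      # tool-tip path data points (mm)
                    [0.0, 1.0, 0.0, -1.0, 0.0],
                    [0.0, 0.5, 1.0, 0.5, 0.0]])
    tck, _ = splprep(pts, s=0)                      # cubic B-spline through the points

    V, Ts = 5.0, 0.002                              # feedrate (mm/s), servo cycle (s)
    u, us = 0.0, [0.0]
    while u < 1.0:
        dx, dy, dz = splev(u, tck, der=1)           # curve derivative C'(u)
        u = min(1.0, u + V * Ts / np.sqrt(dx**2 + dy**2 + dz**2))
        us.append(u)

    xyz = np.array(splev(us, tck)).T                # commanded position each cycle
    print(len(us), "interpolation cycles")
    ```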

  19. VizieR Online Data Catalog: New atmospheric parameters of MILES cool stars (Sharma+, 2016)

    NASA Astrophysics Data System (ADS)

    Sharma, K.; Prugniel, P.; Singh, H. P.

    2015-11-01

    MILES V2 spectral interpolator. The FITS file is an improved version of the MILES interpolator previously presented in PVK. It contains the coefficients of the interpolator, which allow one to compute an interpolated spectrum given an effective temperature, log of surface gravity, and metallicity (Teff, logg, and [Fe/H]). The file consists of three extensions containing the three temperature regimes described in the paper: extension 0, warm (Teff 4000-9000K); extension 1, hot (Teff >7000K); and extension 2, cold (Teff <4550K). The three functions are linearly interpolated in the overlapping Teff regions. Each extension contains a 2D image-type array whose first axis is the wavelength, described by a WCS (air wavelength, starting at 3536Å, step = 0.9Å). This FITS file can be used by ULySS v1.3 or higher. (5 data files).

  20. The effect of blurred plot coordinates on interpolating forest biomass: a case study

    Treesearch

    J. W. Coulston

    2004-01-01

    Interpolated surfaces of forest attributes are important analytical tools and have been used in risk assessments, forest inventories, and forest health assessments. The USDA Forest Service Forest Inventory and Analysis program (FIA) annually collects information on forest attributes in a consistent fashion nation-wide. Users of these data typically perform...

  1. An objective isobaric/isentropic technique for upper air analysis

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.

    1981-01-01

    An objective meteorological analysis technique is presented whereby both horizontal and vertical upper-air analyses are performed. The same process is used to interpolate grid-point values from the upper-air station data for grid points both on an isobaric surface and on a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation for a grid-point potential temperature is performed isobarically, whereas wind, mixing-ratio, and pressure-height values are interpolated from data that lie on the isentropic surface passing through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective hand-drawn analyses. The objective products of version A generally correspond fairly well with the subjective analyses and with the station data, and depict the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.

  2. Using Chebyshev polynomial interpolation to improve the computational efficiency of gravity models near an irregularly-shaped asteroid

    NASA Astrophysics Data System (ADS)

    Hu, Shou-Cun; Ji, Jiang-Hui

    2017-12-01

    In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lower the efficiency of calculating their gravitational field by adopting the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
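
    A one-dimensional sketch of the per-cell idea: interpolate a gravity-like 1/r^2 quantity with a Chebyshev polynomial over one radial cell and check the evaluation error; the paper does this per adaptive cell in three spherical coordinates.

    ```python
    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    mu = 1.0
    accel = lambda r: mu / r**2                    # point-mass radial acceleration

    # degree-8 Chebyshev interpolant over one radial cell [1.5, 2.5]
    cell = Chebyshev.interpolate(accel, deg=8, domain=[1.5, 2.5])

    r = np.linspace(1.5, 2.5, 1000)
    print("max relative error:", np.max(np.abs(cell(r) - accel(r)) / accel(r)))
    ```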

  3. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar HydroStar 4300, an Ashtech ProMark 500 GPS base, and a Thales Z-Max® rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  4. A Residual Kriging method for the reconstruction of 3D high-resolution meteorological fields from airborne and surface observations

    NASA Astrophysics Data System (ADS)

    Laiti, Lavinia; Zardi, Dino; de Franceschi, Massimiliano; Rampanelli, Gabriele

    2013-04-01

    Manned light aircraft and remotely piloted aircraft represent very valuable and flexible measurement platforms for atmospheric research, as they are able to provide high temporal and spatial resolution observations of the atmosphere above the ground surface. In the present study the application of a geostatistical interpolation technique called Residual Kriging (RK) is proposed for the mapping of airborne measurements of scalar quantities over regularly spaced 3D grids. In RK the dominant (vertical) trend component underlying the original data is first extracted to filter out local anomalies; the residual field is then separately interpolated and finally added back to the trend. The determination of the interpolation weights relies on the estimate of the characteristic covariance function of the residuals, through the computation and modelling of their semivariogram function. RK implementation also allows for the inference of the characteristic spatial scales of variability of the target field and its isotropization, and for an estimate of the interpolation error. The adopted test-bed database consists of a series of flights of an instrumented motorglider exploring the atmosphere of two valleys near the city of Trento (in the southeastern Italian Alps), performed on fair-weather summer days. The RK method is used to reconstruct fully 3D high-resolution fields of potential temperature and mixing ratio for specific vertical slices of the valley atmosphere, integrating also ground-based measurements from the nearest surface weather stations. From the RK-interpolated meteorological fields, fine-scale features of the atmospheric boundary layer developing over the complex valley topography in connection with the occurrence of thermally driven slope and valley winds are detected. The performance of RK mapping is also tested against two other commonly adopted interpolation methods, i.e., the Inverse Distance Weighting and the Delaunay triangulation methods, by comparing the results of a cross-validation procedure.
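    The RK decomposition (trend extraction, residual interpolation, trend restoration) can be sketched compactly. In the sketch below the residual step uses a thin-plate RBF as a stand-in for the semivariogram-based kriging weights of the actual method; the data are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def residual_interpolate(xyz, temp, xyz_grid):
    """Trend-plus-residual interpolation in the spirit of Residual Kriging:
    (1) fit the dominant vertical trend, (2) interpolate the residual field
    (here with a thin-plate RBF instead of semivariogram-based kriging
    weights), (3) add the trend back on the target grid."""
    z = xyz[:, 2]
    a, b = np.polyfit(z, temp, 1)            # linear vertical trend
    resid = temp - (a * z + b)
    interp = RBFInterpolator(xyz, resid, kernel='thin_plate_spline')
    return interp(xyz_grid) + (a * xyz_grid[:, 2] + b)

# Toy usage: potential temperature increasing with height plus an anomaly.
rng = np.random.default_rng(1)
obs = rng.uniform(0, 1, size=(200, 3)) * [5000.0, 5000.0, 2000.0]
theta = 290 + 0.005 * obs[:, 2] + np.exp(-(obs[:, 0] - 2500)**2 / 1e6)
grid = rng.uniform(0, 1, size=(10, 3)) * [5000.0, 5000.0, 2000.0]
print(residual_interpolate(obs, theta, grid))
```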

  5. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.

  6. Solution of the surface Euler equations for accurate three-dimensional boundary-layer analysis of aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Harris, J. E.

    1987-01-01

    The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to those produced by conventional interpolation procedures.

  7. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case-study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area. In view of the uncertainties associated with classical techniques, research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.

  8. Anatomy structure creation and editing using 3D implicit surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hibbard, Lyndon S.

    2012-05-15

    Purpose: To accurately reconstruct and interactively reshape the surfaces of 3D anatomy structures using small numbers of 2D contours drawn in the most visually informative views of 3D imagery. The innovation of this program is that the number of 2D contours can be very much smaller than the number of transverse sections, even for anatomy structures spanning many sections. This program can edit 3D structures from prior segmentations, including those from autosegmentation programs. The reconstruction and surface editing work with any image modality. Methods: Structures are represented by variational implicit surfaces defined by weighted sums of radial basis functions (RBFs). Such surfaces are smooth, continuous, and closed and can be reconstructed with RBFs optimally located to efficiently capture shape in any combination of transverse (T), sagittal (S), and coronal (C) views. The accuracy of implicit surface reconstructions was measured by comparisons with the corresponding expert-contoured surfaces in 103 prostate cancer radiotherapy plans. Editing a pre-existing surface is done by overdrawing its profiles in image views spanning the affected part of the structure, deleting an appropriate set of prior RBFs, and merging the remainder with the new edit-contour RBFs. Two methods were devised to identify RBFs to be deleted based only on the geometry of the initial surface and the locations of the new RBFs. Results: Expert-contoured surfaces were compared with implicit surfaces reconstructed from them over varying numbers and combinations of T/S/C planes. Studies revealed that surface-surface agreement increases monotonically with increasing RBF-sample density, and that the rate of increase declines over the same range. These trends were observed for all surface agreement metrics and for all the organs studied: prostate, bladder, and rectum. In addition, S and C contours may convey more shape information than T views for CT studies in which the axial slice thickness is greater than the pixel size. Surface editing accuracy likewise improves with larger sampling densities, and the rate of improvement similarly declines over the same conditions. Conclusions: Implicit surfaces based on RBFs are accurate representations of anatomic structures and can be interactively generated or modified to correct segmentation errors. The number of input contours is typically smaller than the number of T contours spanned by the structure.
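    A minimal 2-D sketch of the variational-implicit-surface construction described above, assuming the standard formulation: a thin-plate RBF plus a linear polynomial, with zero-valued constraints at contour points and positive-valued offset points inside. The contour, offsets, and kernel choice are illustrative.

```python
import numpy as np

def tps_kernel(r):
    """2-D thin-plate kernel r^2 log r, with the r -> 0 limit handled."""
    with np.errstate(divide='ignore', invalid='ignore'):
        k = r**2 * np.log(r)
    return np.where(r > 0, k, 0.0)

# On-contour constraints (value 0) plus inward offset points (value 1).
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
on = np.c_[np.cos(t), np.sin(t)]
off = 0.5 * on
X = np.vstack([on, off])
v = np.r_[np.zeros(len(on)), np.ones(len(off))]

# Solve [K P; P^T 0][w; c] = [v; 0] for RBF weights and a linear polynomial.
n = len(X)
K = tps_kernel(np.linalg.norm(X[:, None] - X[None, :], axis=2))
P = np.c_[np.ones(n), X]
A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
sol = np.linalg.solve(A, np.r_[v, np.zeros(3)])
w, c = sol[:n], sol[n:]

def f(p):
    """Implicit function; f = 0 is the reconstructed surface."""
    r = np.linalg.norm(X - p, axis=1)
    return w @ tps_kernel(r) + c[0] + c[1] * p[0] + c[2] * p[1]

print(f(np.array([1.0, 0.0])), f(np.array([0.0, 0.0])))  # ~0 on contour, >0 inside
```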

  9. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbrae, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK), kriging with external drift (KED), where the spatial pattern of surface velocity is used as a proxy for that of dH/dt, and their spatiotemporal equivalents (ST-OK and ST-KED).

  10. Sampling and Visualizing Creases with Scale-Space Particles

    PubMed Central

    Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik

    2010-01-01

    Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
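    The interpolation-across-scale step lends itself to a short sketch: precompute a few Gaussian blurrings and fit a cubic spline along the scale axis so intermediate scales can be evaluated on demand. The scales below are illustrative rather than the optimally selected ones of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))

# Precompute a small stack of blurrings at selected scales.
sigmas = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
stack = np.stack([gaussian_filter(img, s) for s in sigmas])

# A cubic spline along the scale axis reconstructs intermediate blur levels.
spline = CubicSpline(sigmas, stack, axis=0)
approx = spline(3.0)                       # interpolated scale sigma = 3
exact = gaussian_filter(img, 3.0)          # direct blurring, for comparison
print(np.abs(approx - exact).max())
```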

  11. Blue and red shifted temperature dependence of implicit phonon shifts in graphene

    NASA Astrophysics Data System (ADS)

    Mann, Sarita; Jindal, V. K.

    2017-07-01

    We have calculated the implicit shift for various modes of frequency in a pure graphene sheet. The thermal expansion and Grüneisen parameters, which are required for the implicit shift calculation, have already been studied and reported. For this calculation, phonon frequencies are obtained using force constants derived from the dynamical matrix calculated with the VASP code, where density functional perturbation theory (DFPT) is used in interface with the phonopy software. The implicit phonon shift shows an unusual behavior compared to bulk materials. The frequency shift is strongly negative (red shift) for the ZA and ZO modes, and the magnitude of this negative shift increases with temperature. On the other hand, a blue shift arises for all other longitudinal and transverse modes, with a similar trend of increasing magnitude with temperature. The q dependence of the phonon shifts has also been studied. Such simultaneous red and blue shifts, in the out-of-plane and in-plane modes respectively, lead to speculation of surface softening in the out-of-plane direction in preference to surface melting.

  12. Creating a monthly time series of the potentiometric surface in the Upper Floridan aquifer, Northern Tampa Bay area, Florida, January 2000-December 2009

    USGS Publications Warehouse

    Lee, Terrie M.; Fouad, Geoffrey G.

    2014-01-01

    In Florida’s karst terrain, where groundwater and surface waters interact, a mapping time series of the potentiometric surface in the Upper Floridan aquifer offers a versatile metric for assessing the hydrologic condition of both the aquifer and overlying streams and wetlands. Long-term groundwater monitoring data were used to generate a monthly time series of potentiometric surfaces in the Upper Floridan aquifer over a 573-square-mile area of west-central Florida between January 2000 and December 2009. Recorded groundwater elevations were collated for 260 groundwater monitoring wells in the Northern Tampa Bay area, and a continuous time series of daily observations was created for 197 of the wells by estimating missing daily values through regression relations with other monitoring wells. Kriging was used to interpolate the monthly average potentiometric-surface elevation in the Upper Floridan aquifer over a decade. The mapping time series gives spatial and temporal coherence to groundwater monitoring data collected continuously over the decade by three different organizations, but at various frequencies. Further, the mapping time series describes the potentiometric surface beneath parts of six regionally important stream watersheds and 11 municipal well fields that collectively withdraw about 90 million gallons per day from the Upper Floridan aquifer. Monthly semivariogram models were developed using monthly average groundwater levels at wells. Kriging was used to interpolate the monthly average potentiometric-surface elevations and to quantify the uncertainty in the interpolated elevations. Drawdown of the potentiometric surface within well fields was likely the cause of a characteristic decrease and then increase in the observed semivariance with increasing lag distance. This characteristic made use of the hole effect model appropriate for describing the monthly semivariograms and the interpolated surfaces. Spatial variance reflected in the monthly semivariograms decreased markedly between 2002 and 2003, timing that coincided with decreases in well-field pumping. Cross-validation results suggest that the kriging interpolation may smooth over the drawdown of the potentiometric surface near production wells. The groundwater monitoring network of 197 wells yielded an average kriging error in the potentiometric-surface elevations of 2 feet or less over approximately 70 percent of the map area. Additional data collection within the existing monitoring network of 260 wells and near selected well fields could reduce the error in individual months. Reducing the kriging error in other areas would require adding new monitoring wells. Potentiometric-surface elevations fluctuated by as much as 30 feet over the study period, and the spatially averaged elevation for the entire surface rose by about 2 feet over the decade. Monthly potentiometric-surface elevations describe the lateral groundwater flow patterns in the aquifer and are usable at a variety of spatial scales to describe vertical groundwater recharge and discharge conditions for overlying surface-water features.
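    The hole effect referred to above appears as semivariance that rises and then dips back with increasing lag. One common analytic form is the cardinal-sine model sketched below; the nugget, sill, and range values are illustrative only, not the report's fitted parameters.

```python
import numpy as np

def hole_effect_semivariogram(h, nugget, sill, a):
    """Cardinal-sine ("hole effect") semivariogram model:
    gamma(h) = nugget + sill * (1 - sin(h/a) / (h/a)).
    The curve rises, overshoots, and oscillates back toward the sill;
    the signature of periodic lows such as well-field drawdown cones."""
    h = np.asarray(h, dtype=float)
    ratio = np.ones_like(h)
    nz = h > 0
    ratio[nz] = np.sin(h[nz] / a) / (h[nz] / a)
    return nugget + sill * (1.0 - ratio)

lags = np.linspace(0, 30000, 7)   # lag distances in feet (illustrative)
print(hole_effect_semivariogram(lags, nugget=0.5, sill=20.0, a=4000.0))
```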

  13. An efficient fully-implicit multislope MUSCL method for multiphase flow with gravity in discrete fractured media

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-06-01

    The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion, leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintaining robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to the monoslope methods in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second-order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured systems are carried out to demonstrate the efficiency and robustness of the numerical model.
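    The multislope construction is specific to triangular meshes, but the underlying limited MUSCL reconstruction is easy to illustrate in one dimension: extrapolate cell-centred values to faces with a limited slope so fronts stay sharp without oscillation. The minmod limiter below is a standard illustrative choice, not the paper's improved smooth flux-limiter.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when the two one-sided
    slopes agree in sign, zero otherwise (which keeps extrema flat)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_face_values(u, dx):
    """Second-order MUSCL reconstruction of face values for the interior
    cells of a 1-D mesh from cell-centred values u."""
    du_minus = np.diff(u)[:-1] / dx     # backward slope in each interior cell
    du_plus = np.diff(u)[1:] / dx       # forward slope in each interior cell
    slope = minmod(du_minus, du_plus)   # limited slope
    right_face = u[1:-1] + 0.5 * dx * slope
    left_face = u[1:-1] - 0.5 * dx * slope
    return left_face, right_face

u = np.array([1.0, 1.0, 1.0, 0.2, 0.0, 0.0])   # a sharp saturation front
print(muscl_face_values(u, dx=1.0))
```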

  14. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

    This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with the conservation of energy and mass equations, in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution to the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as with previous modelling results available in the literature.

  15. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
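    Densifying a sparse MVF by TPS interpolation can be sketched by interpolating each displacement component with a thin-plate kernel; SciPy's RBFInterpolator stands in for the authors' implementation, and the control points and grid are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
ctrl = rng.uniform(0, 100, size=(30, 3))   # surface-model control points (mm)
mv = rng.normal(0, 2, size=(30, 3))        # sparse motion vectors at controls

# One TPS interpolant, vector-valued, yields the dense MVF.
tps = RBFInterpolator(ctrl, mv, kernel='thin_plate_spline')

# Evaluate on a (deliberately coarse) voxel grid.
g = np.linspace(0, 100, 8)
grid = np.stack(np.meshgrid(g, g, g, indexing='ij'), axis=-1).reshape(-1, 3)
dense_mvf = tps(grid).reshape(8, 8, 8, 3)
print(dense_mvf.shape)
```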

  16. Implicit mesh discontinuous Galerkin methods and interfacial gauge methods for high-order accurate interface dynamics, with applications to surface tension dynamics, rigid body fluid-structure interaction, and free surface flow: Part I

    NASA Astrophysics Data System (ADS)

    Saye, Robert

    2017-09-01

    In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes, wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries, and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which, except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free surface flow. A class of techniques known as interfacial gauge methods is adopted to solve the corresponding incompressible Navier-Stokes equations, which, compared to archetypical projection methods, have a weaker coupling between fluid velocity, pressure, and interface position, and allow high-order accurate numerical methods to be developed more easily. Convergence analyses conducted throughout the work demonstrate high-order accuracy in the maximum norm for all of the applications considered; for example, fourth-order spatial accuracy in fluid velocity, pressure, and interface location is demonstrated for surface tension-driven two phase flow in 2D and 3D. Specific application examples include: vortex shedding in nontrivial geometry, capillary wave dynamics revealing fine-scale flow features, falling rigid bodies tumbling in unsteady flow, and free surface flow over a submersed obstacle, as well as high Reynolds number soap bubble oscillation dynamics and vortex shedding induced by a type of Plateau-Rayleigh instability in water ripple free surface flow. These last two examples compare numerical results with experimental data and serve as an additional means of validation; they also reveal physical phenomena not visible in the experiments, highlight how small-scale interfacial features develop and affect macroscopic dynamics, and demonstrate the wide range of spatial scales often at play in interfacial fluid flow.

  17. Implicit mesh discontinuous Galerkin methods and interfacial gauge methods for high-order accurate interface dynamics, with applications to surface tension dynamics, rigid body fluid-structure interaction, and free surface flow: Part II

    NASA Astrophysics Data System (ADS)

    Saye, Robert

    2017-09-01

    In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes, wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries, and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which, except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free surface flow. A class of techniques known as interfacial gauge methods is adopted to solve the corresponding incompressible Navier-Stokes equations, which, compared to archetypical projection methods, have a weaker coupling between fluid velocity, pressure, and interface position, and allow high-order accurate numerical methods to be developed more easily. Convergence analyses conducted throughout the work demonstrate high-order accuracy in the maximum norm for all of the applications considered; for example, fourth-order spatial accuracy in fluid velocity, pressure, and interface location is demonstrated for surface tension-driven two phase flow in 2D and 3D. Specific application examples include: vortex shedding in nontrivial geometry, capillary wave dynamics revealing fine-scale flow features, falling rigid bodies tumbling in unsteady flow, and free surface flow over a submersed obstacle, as well as high Reynolds number soap bubble oscillation dynamics and vortex shedding induced by a type of Plateau-Rayleigh instability in water ripple free surface flow. These last two examples compare numerical results with experimental data and serve as an additional means of validation; they also reveal physical phenomena not visible in the experiments, highlight how small-scale interfacial features develop and affect macroscopic dynamics, and demonstrate the wide range of spatial scales often at play in interfacial fluid flow.

  18. Summary on several key techniques in 3D geological modeling.

    PubMed

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.

  19. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected by drive testing a sample of roads in the study area are analysed, and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different parameters: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours; and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we have used check points coming from the same drive-test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
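    A sketch of the IDW variant being tuned, with the two knobs the study varies (distance power and number of neighbours) exposed as parameters; the drive-test data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw(xy_obs, z_obs, xy_new, power=2.0, k=8):
    """Inverse Distance Weighting with the two tuning knobs compared in
    the study: the distance power and the number of neighbours k."""
    tree = cKDTree(xy_obs)
    dist, idx = tree.query(xy_new, k=k)
    w = 1.0 / np.maximum(dist, 1e-12)**power
    return np.sum(w * z_obs[idx], axis=1) / np.sum(w, axis=1)

# Toy drive-test data: signal level sampled along roads.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 1000, size=(300, 2))
rssi = -60 - 0.02 * xy[:, 0] + rng.normal(0, 2, 300)
check = rng.uniform(0, 1000, size=(50, 2))
for p in (1.0, 2.0, 3.0):                 # vary the IDW power
    print(p, idw(xy, rssi, check, power=p)[:3])
```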

  20. Kinematic Structural Modelling in Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.

    2017-04-01

    We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.

  1. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  2. Morphing of spatial objects in real time with interpolation by functions of radial and orthogonal basis

    NASA Astrophysics Data System (ADS)

    Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai

    2018-05-01

    The article is devoted to the visualization of the morphing of spatial objects described by sets of unordered reference points. A two-stage model construction is proposed to change an object’s form in real time. The first (preliminary) stage is interpolation of the object’s surface by radial basis functions: the initial reference points are replaced by new, spatially ordered ones, and patterns for the change of the reference-point coordinates during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the efficiency of the calculations.

  3. Detailed Aerodynamic Analysis of a Shrouded Tail Rotor Using an Unstructured Mesh Flow Solver

    NASA Astrophysics Data System (ADS)

    Lee, Hee Dong; Kwon, Oh Joon

    The detailed aerodynamics of a shrouded tail rotor in hover has been numerically studied using a parallel inviscid flow solver on unstructured meshes. The numerical method is based on a cell-centered finite-volume discretization and an implicit Gauss-Seidel time integration. The calculation was made for a single blade by imposing a periodic boundary condition between adjacent rotor blades. The grid periodicity was also imposed at the periodic boundary planes to avoid numerical inaccuracy resulting from solution interpolation. The results were compared with available experimental data and those from a disk vortex theory for validation. It was found that realistic three-dimensional modeling is important for the prediction of detailed aerodynamics of shrouded rotors including the tip clearance gap flow.

  4. Definition and verification of a complex aircraft for aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1986-01-01

    Techniques are reviewed which are of value in CAD/CAM CFD studies of the geometries of new fighter aircraft. In order to refine the computations of the flows to take advantage of the computing power available from supercomputers, it is often necessary to interpolate the geometry of the mesh selected for the numerical analysis of the aircraft shape. Interpolating the geometry permits a higher level of detail in calculations of the flow past specific regions of a design. A microprocessor-based mathematics engine is described for fast image manipulation and rotation to verify that the interpolated geometry will correspond to the design geometry in order to ensure that the flow calculations will remain valid through the interpolation. Applications of the image manipulation system to verify geometrical representations with wire-frame and shaded-surface images are described.

  5. New families of interpolating type IIB backgrounds

    NASA Astrophysics Data System (ADS)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are $\mathbb{T}^2$ fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology-changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either $\mathcal{N}=2$ or $\mathcal{N}=1$ supersymmetry. In the $\mathcal{N}=2$ case it can be shown that the solutions are regular.

  6. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

    Temperature data with high spatial resolution are essential for a proper, quantitative analysis of local climatic characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature; thus, in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.

  7. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    PubMed

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

    We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, an approximation to the (b) potential surface, its (c) gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to compute (a) potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene) where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.

  8. Summary on Several Key Techniques in 3D Geological Modeling

    PubMed Central

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029

  9. Surface electric fields for North America during historical geomagnetic storms

    USGS Publications Warehouse

    Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.

    2013-01-01

    To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.

  10. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction.

  11. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

    Being a key player in Space Weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, the ionosphere has to be monitored. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is by using dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region in space. However, the TEC data derived from GPS measurements contain measurement noise, model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states. In this way, the interpolation performance of the techniques can be compared over many parameters that can be controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Network (MLP-NN), and Radial Basis Function Neural Network (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are initially tried on synthetic TEC surfaces for parameter and coefficient optimization and determination of error bounds. The interpolation performance of these methods is compared on synthetic TEC surfaces over the parameters of sampling pattern, number of samples, the variability of the surface and the trend type in the TEC surfaces. By examining the performance of the interpolation methods, it is observed that Kriging, RFP and NN all have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (like the choice of the semivariogram function for Kriging algorithms and the hidden layer and neuron numbers for MLP-NN) mostly depends on the behavior of the ionosphere at that given time instant for the desired region. The sampling pattern and number of samples are the other important parameters that may contribute to higher reconstruction errors. For example, for all of the above listed algorithms, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and the performance significantly degrades as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using the IONOLAB-TEC data (www.ionolab.org). Both Kriging combined with Kalman Filter and dynamic modeling of NN are also implemented as first trials of TEC and space weather predictions.
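    The evaluation strategy described above (fit on a synthetic surface, compare at held-out check points) can be sketched compactly. The snippet below compares two of the listed families, a multiquadric RBF and a piecewise-cubic interpolant, on a synthetic TEC surface with a latitudinal trend; the surface, sampling, and shape parameter are illustrative, not those of the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator, griddata

def tec(lon, lat):
    """Synthetic TEC surface: latitudinal trend plus a smooth anomaly (TECU)."""
    return 20 - 0.4 * lat + 5 * np.exp(-((lon - 30)**2 + (lat - 40)**2) / 50)

rng = np.random.default_rng(5)
lon, lat = rng.uniform(20, 40, 300), rng.uniform(30, 50, 300)
pts, obs = np.c_[lon, lat], tec(lon, lat)
train, test = pts[:250], pts[250:]

# Two candidate mappers: a multiquadric RBF and piecewise-cubic interpolation.
rbf = RBFInterpolator(train, obs[:250], kernel='multiquadric', epsilon=0.5)
cub = griddata(train, obs[:250], test, method='cubic')
truth = obs[250:]
ok = ~np.isnan(cub)                 # cubic is undefined outside the convex hull
print("RBF RMSE:  ", np.sqrt(np.mean((rbf(test) - truth)**2)))
print("cubic RMSE:", np.sqrt(np.mean((cub[ok] - truth[ok])**2)))
```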

  12. Implicit solvation model for density-functional study of nanocrystal surfaces and reaction pathways

    NASA Astrophysics Data System (ADS)

    Mathew, Kiran; Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2014-02-01

    Solid-liquid interfaces are at the heart of many modern-day technologies and provide a challenge to many materials simulation methods. A realistic first-principles computational study of such systems entails the inclusion of solvent effects. In this work, we implement an implicit solvation model that has a firm theoretical foundation into the widely used density-functional code VASP, the Vienna Ab initio Simulation Package. The implicit solvation model follows the framework of joint density functional theory. We describe the framework, our algorithm and implementation, and benchmarks for small molecular systems. We apply the solvation model to study the surface energies of different facets of semiconducting and metallic nanocrystals and the SN2 reaction pathway. We find that solvation reduces the surface energies of the nanocrystals, especially the semiconducting ones, and increases the energy barrier of the SN2 reaction.

  13. Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations

    NASA Astrophysics Data System (ADS)

    Loseille, A.; Dervieux, A.; Alauzet, F.

    2010-04-01

    This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.

  14. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in the memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are obtained in the sequential mode of application, and such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications, which demonstrate the potential of the HPT strategy.

  15. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

    Excerpted contents include: "Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization"; "Interpolation by Box Spline Surfaces" (Charles K. Chui, Harvey Diamond, Louise A. Raphael); and "Knot Selection for Least Squares..." (West Virginia University, Morgantown, West Virginia; and Louise Raphael, National Science Foundation, Washington, DC).

  16. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, the paper focuses on finding several different fitting or interpolation methods to improve the data precision, so as to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of common points, the elevation difference at any other point can be fitted or interpolated. Thus, a precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB, and a neural network method are used to process the Google Earth elevation data, and internal conformity, external conformity, and cross-correlation coefficient are used as evaluation indexes for the data processing effect. Results: There is no fitting residual at the fitting points when using the V4 interpolation method; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but its fitting effect is better in the case of higher elevation differences. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be mainly used, with the neural network method as an auxiliary in the case of higher elevation differences. Conclusions: The cubic polynomial surface fitting method can markedly improve the data precision of Google Earth. After precision improvement, the error of data in hilly terrain areas meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.
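
    A minimal sketch of the cubic polynomial surface fitting step is given below, assuming the elevation differences at the common points are fit by ordinary least squares over a full bivariate cubic basis; all variable names are illustrative.

      # Hedged sketch of cubic polynomial surface fitting of elevation differences.
      # x, y: coordinates of common (control) points; dz: Google Earth minus true elevation.
      import numpy as np

      def cubic_design(x, y):
          # Full bivariate cubic basis: 10 terms.
          return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                                  x**3, x*x*y, x*y*y, y**3])

      def fit_cubic_surface(x, y, dz):
          coef, *_ = np.linalg.lstsq(cubic_design(x, y), dz, rcond=None)
          return coef

      def eval_cubic_surface(coef, x, y):
          return cubic_design(x, y) @ coef

      # Corrected elevation at query points (google_elev is a hypothetical lookup):
      # corrected = google_elev(xq, yq) - eval_cubic_surface(coef, xq, yq)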

  17. Climate applications for NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Banzon, P. V. F.; Liu, G.; Saha, K.; Wilson, C.; Stachniewicz, J. S.

    2015-12-01

    Few sea surface temperature (SST) datasets from satellites have the long temporal span needed for climate studies. The NOAA Daily Optimum Interpolation Sea Surface Temperature (DOISST) on a 1/4° grid, produced at the National Centers for Environmental Information, is based primarily on SSTs from the Advanced Very High Resolution Radiometer (AVHRR), available from 1981 to the present. AVHRR data can contain biases, particularly when aerosols are present. Over the three-decade span, the largest departures of AVHRR SSTs from buoy temperatures occurred during the Mt Pinatubo and El Chichon eruptions. Therefore, in DOISST, AVHRR SSTs are bias-adjusted to match in situ SSTs prior to interpolation. This produces a consistent time series of complete SST fields that is suitable for modelling and for investigating local climate phenomena such as El Niño or the Pacific warm blob in a long-term context. Because many biological processes and animal distributions are temperature dependent, there are also many ecological uses of DOISST (e.g., coral bleaching thermal stress, fish and marine mammal distributions), thereby providing insights into resource management in a changing ocean. The advantages and limitations of using DOISST for different applications will be discussed.

  18. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

    Disease prevalence can be spatially analysed to provide support for service implementation and health care planning; these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique of visualising disease admissions and to help prevent misinterpretation of data and suboptimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of combining information on the position of populations, the size of populations, and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case created with ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders. CartIS decreases the number of extreme positive values, not present in the underlying data, that can be found in interpolation surfaces. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation communication techniques that highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
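
    The envelope-interpolation step of bi-dimensional empirical mode decomposition can be sketched as below with a multiquadric RBF; the peak detector and neighbourhood size are assumptions for illustration, not the authors' implementation.

      # Illustrative sketch: interpolate local maxima of an image into a smooth
      # upper envelope with a multiquadric RBF, as in the BEMD sifting step.
      import numpy as np
      from scipy.interpolate import Rbf
      from scipy.ndimage import maximum_filter

      def upper_envelope(img, size=5):
          # Local maxima: pixels equal to the maximum of their neighbourhood.
          peaks = (img == maximum_filter(img, size=size))
          ry, rx = np.nonzero(peaks)
          rbf = Rbf(rx, ry, img[ry, rx], function='multiquadric')
          gy, gx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
          return rbf(gx.ravel(), gy.ravel()).reshape(img.shape)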

  20. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer, and ArcGIS. The following methods were tested: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature, and Kriging. The research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. Statistics in the form of the minimum, maximum, and mean values of the interpolated North and East components of the velocity residua were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
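
    A hedged sketch of the recommended configuration, ordinary kriging with a linear semivariogram, is shown below using the pykrige package in place of Surfer; the station coordinates and residua are invented for illustration.

      # Ordinary kriging with a linear variogram model (pykrige, not Surfer).
      # Station data below are illustrative, not the EPN/ASG-EUPOS values.
      import numpy as np
      from pykrige.ok import OrdinaryKriging

      lon = np.array([14.2, 16.9, 21.0, 18.5, 23.1])   # station longitudes
      lat = np.array([50.1, 52.2, 52.2, 54.4, 53.1])   # station latitudes
      vn = np.array([0.4, -0.1, 0.3, 0.0, -0.2])       # North velocity residua (mm/yr)

      ok = OrdinaryKriging(lon, lat, vn, variogram_model='linear')
      grid_lon = np.arange(14.0, 24.5, 0.5)
      grid_lat = np.arange(49.0, 55.5, 0.5)
      field, variance = ok.execute('grid', grid_lon, grid_lat)
      # 'variance' provides the per-node error estimate the authors value.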

  1. Evaluation of rainfall structure on hydrograph simulation: Comparison of radar and interpolated methods, a study case in a tropical catchment

    NASA Astrophysics Data System (ADS)

    Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.

    2017-12-01

    The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses that translate into discharge simulation uncertainties. Distributed hydrological models are based on evolving rainfall fields at the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well-maintained rain gauge network provide a very good estimate of the total volume of each rainfall event; however, the spatial structure relies on interpolation strategies, introducing considerable uncertainty into the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better representation of the spatial structure but with higher uncertainty in the surface precipitation intensity and volume, depending on the vertical rainfall characteristics and the radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of the rainfall spatio-temporal variability, a numerical modeling experiment is proposed that includes a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms; the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the simulated peak flows are assessed. Results show significant rainfall differences between radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores and to artificially generate rainfall in some areas of the catchment. Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on the density of the rain gauge network and its distribution relative to the precipitation events. The results for the 84 storms show better model skill using radar QPE than the interpolated fields. Results using interpolated fields are highly affected by the dominant rainfall type and the basin scale.
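
    The TIN-style interpolation compared above can be sketched as a Delaunay triangulation with linear interpolation inside each triangle; the gauge locations and storm totals below are illustrative.

      # Minimal TIN-style rainfall interpolation: Delaunay triangulation with
      # piecewise-linear interpolation (scipy's LinearNDInterpolator).
      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      gauges = np.array([[3.0, 6.2], [9.5, 4.1], [5.8, 8.9], [1.2, 2.4], [7.7, 7.0]])
      rain = np.array([12.0, 3.5, 22.1, 8.4, 15.0])   # storm totals (mm)

      tin = LinearNDInterpolator(gauges, rain)
      xg, yg = np.meshgrid(np.linspace(1.2, 9.5, 60), np.linspace(2.4, 8.9, 60))
      field = tin(xg, yg)   # NaN outside the convex hull of the gauges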

  2. Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure

    NASA Astrophysics Data System (ADS)

    Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. P.

    2017-11-01

    The paper describes the procedure used for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from the results of pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, using the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the landing regime and caused by the action of the vertical velocity deceleration nozzle jets are negligibly small in comparison with the engine thrust.

  3. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  4. The influence of linguistic and cognitive factors on the time course of verb-based implicit causality.

    PubMed

    Koornneef, Arnout; Dotlačil, Jakub; van den Broek, Paul; Sanders, Ted

    2016-01-01

    In three eye-tracking experiments the influence of the Dutch causal connective "want" (because) and the working memory capacity of readers on the usage of verb-based implicit causality was examined. Experiments 1 and 2 showed that although a causal connective is not required to activate implicit causality information during reading, effects of implicit causality surfaced more rapidly and were more pronounced when a connective was present in the discourse than when it was absent. In addition, Experiment 3 revealed that, in contrast to previous claims, the activation of implicit causality is not a resource-consuming mental operation. Moreover, readers with higher and lower working memory capacities behaved differently in a dual-task situation. Higher span readers were more likely to use implicit causality when they had all their working memory resources at their disposal. Lower span readers showed the opposite pattern, as they were more likely to use the implicit causality cue in the case of an additional working memory load. The results emphasize that both linguistic and cognitive factors mediate the impact of implicit causality on text comprehension. The implications of these results are discussed in terms of ongoing controversies in the literature, that is, the focusing-integration debate and the debates on the source of implicit causality.

  5. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively underused parameter in understanding the current state of the climate, but it is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e., HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early twentieth century through to the present. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging technique. This study will present results that quantify the variability and trends and compare them to current trends of other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).

  6. Estimation of water surface elevations for the Everglades, Florida

    USGS Publications Warehouse

    Palaseanu, Monica; Pearlstine, Leonard

    2008-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring gages and modeling methods that provides scientists and managers with current (2000–present) online water surface and water depth information for the freshwater domain of the Greater Everglades. This integrated system presents data on a 400-m square grid to assist in (1) large-scale field operations; (2) integration of hydrologic and ecologic responses; (3) supporting biological and ecological assessment of the implementation of the Comprehensive Everglades Restoration Plan (CERP); and (4) assessing trophic-level responses to hydrodynamic changes in the Everglades.This paper investigates the radial basis function multiquadric method of interpolation to obtain a continuous freshwater surface across the entire Everglades using radio-transmitted data from a network of water-level gages managed by the US Geological Survey (USGS), the South Florida Water Management District (SFWMD), and the Everglades National Park (ENP). Since the hydrological connection is interrupted by canals and levees across the study area, boundary conditions were simulated by linearly interpolating along those features and integrating the results together with the data from marsh stations to obtain a continuous water surface through multiquadric interpolation. The absolute cross-validation errors greater than 5 cm correlate well with the local outliers and the minimum distance between the closest stations within 2000-m radius, but seem to be independent of vegetation or season.

  7. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on analytical expressions of Zernike polynomials and a power spectral density (PSD) model, the re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. The new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation method is chosen among the "nearest," "linear," "cubic," and "spline" fittings in Matlab. The conventional FFT-based spatial filtering method used to eliminate surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low-spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics through a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter aliasing effects and does not cause discontinuities in the resulting surface map.

  8. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  9. Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    USGS Publications Warehouse

    Williams, Lester J.; Dixon, Joann F.

    2015-01-01

    Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system were developed to define an updated hydrogeologic framework as part of the U.S. Geological Survey Groundwater Resources Program. The dataset contains structural surfaces depicting the top and base of the aquifer system, its major and minor hydrogeologic units and zones, geophysical marker horizons, and the altitude of the 10,000-milligram-per-liter total dissolved solids boundary that defines the approximate fresh and saline parts of the aquifer system. The thicknesses of selected major and minor units or zones were determined by interpolating points of known thickness or by raster subtraction of the structural surfaces. Additional data include clipping polygons; regional polygon features that represent geologic or hydrogeologic aspects of the aquifers and the minor units or zones; data points used in the interpolation; and polygon and line features that represent faults, boundaries, and other features in the aquifer system.
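
    The raster surface subtraction used for the thicknesses reduces to a per-cell difference of the top and base grids; a minimal sketch with invented values follows.

      # Unit thickness as raster subtraction of structural surfaces (values invented).
      import numpy as np

      top = np.array([[30.0, 28.5], [27.0, 25.5]])        # altitude of unit top (m)
      base = np.array([[-55.0, -60.0], [-58.0, -63.0]])   # altitude of unit base (m)

      thickness = top - base                # per-cell thickness of the unit
      thickness[thickness < 0] = np.nan     # mask cells where the unit is absent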

  10. A solution to the surface intersection problem. [Boolean functions in geometric modeling

    NASA Technical Reports Server (NTRS)

    Timer, H. G.

    1977-01-01

    An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.

  11. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle. PMID:26236174

  13. Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime

    DTIC Science & Technology

    2014-10-31

    A potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method... between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we...

  14. A projection method for coupling two-phase VOF and fluid structure interaction simulations

    NASA Astrophysics Data System (ADS)

    Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro

    2018-02-01

    The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling an FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field onto the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back onto the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.

  15. The ARM Best Estimate 2-dimensional Gridded Surface

    DOE Data Explorer

    Xie, Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.

  16. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation's effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating include more accurate curvature values for simulated data and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolation all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
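
    A minimal sketch of the interpolation step is given below, assuming cubic B-spline up-sampling of a CT subvolume prior to curvature estimation; the volume here is a random stand-in for real CTC data.

      # Up-sample a CT subvolume with cubic B-spline interpolation before
      # computing curvatures, so thin structures are sampled more densely.
      import numpy as np
      from scipy.ndimage import zoom

      vol = np.random.rand(64, 64, 32)        # stand-in for a CT subvolume
      vol_hi = zoom(vol, zoom=2, order=3)     # order=3 -> cubic B-spline
      # Curvature kernels are then applied to vol_hi instead of vol.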

  17. Daily air temperature interpolated at high spatial resolution over a large mountainous region

    USGS Publications Warehouse

    Dodson, R.; Marks, D.

    1997-01-01

    Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830,000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
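
    A minimal sketch of the LLRA method follows, assuming a constant lapse rate and inverse-squared-distance weighting as described; the function and variable names are illustrative.

      # Linear lapse rate adjustment (LLRA): reduce station temperatures to sea
      # level, interpolate with inverse squared distance, map back onto the DEM.
      import numpy as np

      LAPSE = 6.5e-3   # K per metre; one of several plausible constant lapse rates

      def llra(tx, ty, tz, temp, gx, gy, dem, eps=1e-12):
          """tx, ty, tz, temp: station coords, elevations, temperatures;
          gx, gy, dem: flattened grid coords and DEM elevations."""
          t0 = temp + LAPSE * tz                      # reduce to sea level
          d2 = (gx[:, None] - tx)**2 + (gy[:, None] - ty)**2
          w = 1.0 / (d2 + eps)                        # inverse squared distance
          t0_grid = (w * t0).sum(axis=1) / w.sum(axis=1)
          return t0_grid - LAPSE * dem                # back to DEM elevations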

  18. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate the surface reflectance used in trace-gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer model (VLIDORT) can be used to simulate top-of-the-atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high-spatial-resolution satellite missions.
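
    The emulation idea can be sketched as below, assuming a small scikit-learn MLP trained on RTM input/output pairs; the toy target function stands in for VLIDORT, which is not called here.

      # Train an MLP emulator on RTM input/output pairs instead of densifying a
      # look-up table; the analytic target is a stand-in for real RTM radiances.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 1, (5000, 4))   # e.g. SZA, VZA, azimuth, surface pressure
      y = np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 2] * X[:, 3]   # stand-in radiance

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(X[:4000], y[:4000])
      rmse = np.sqrt(np.mean((net.predict(X[4000:]) - y[4000:])**2))
      print("emulation RMSE:", rmse)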

  19. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the natural neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial base is required, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the scopes of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.

  20. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.

    2005-02-01

    A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrarily parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.

  1. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1984-01-01

    The objective of this investigation is to develop a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies. A three-dimensional multivariate O/I analysis scheme has been developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  2. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    The development of a state of the art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies was investigated. A three dimensional multivariate O/I analysis scheme was developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  3. Probabilistic surface reconstruction from multiple data sets: An example for the Australian Moho

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Salmon, M.; Kennett, B. L. N.; Sambridge, M.

    2012-10-01

    Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): "We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics". Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.

  4. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface which is passing through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.

  5. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative class of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information about subsurface features, such as the dip and strike of a reflector. The wave-front attributes operate on a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work; since both problems above are addressed, we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison with a rank-reduction-based 5-D seismic interpolation technique reveals significant advantages of 5-D WABI for steeply dipping events. Diffraction tails benefit substantially from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that of the rank-reduction-based method.

  6. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
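
    The "double sweep" for the many small tridiagonal systems is the classic Thomas algorithm; a minimal sketch follows (the variable names are illustrative, not the model's code).

      # Thomas algorithm ("double sweep") for a tridiagonal system.
      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                      # forward sweep
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):             # backward sweep
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x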

  7. A fast and accurate dihedral interpolation loop subdivision scheme

    NASA Astrophysics Data System (ADS)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
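
    The dihedral-angle test that drives the adaptive refinement reduces to the angle between the unit normals of adjacent triangles; a minimal sketch, with an illustrative threshold, follows.

      # Dihedral angle between two triangles sharing an edge, from face normals.
      import numpy as np

      def face_normal(a, b, c):
          n = np.cross(b - a, c - a)
          return n / np.linalg.norm(n)

      def dihedral_angle(tri1, tri2):
          """tri1, tri2: tuples of three vertex arrays each."""
          n1, n2 = face_normal(*tri1), face_normal(*tri2)
          return np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

      # Refine only where the surface bends more than a threshold (illustrative):
      # if dihedral_angle(t1, t2) > 15.0: subdivide(t1, t2)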

  8. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR points based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than the other filtering methods.
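
    A single iteration of the residual test can be sketched as below, assuming a thin plate spline fit through the current ground points; the threshold and names are illustrative, and MHC additionally raises the cell resolution and residual threshold across its three levels.

      # One TPS residual-filtering step: accept candidate points whose residual
      # from the spline surface is below the threshold.
      import numpy as np
      from scipy.interpolate import Rbf

      def classify_ground(ground, candidates, threshold=0.3):
          """ground, candidates: arrays of shape (n, 3) with columns x, y, z."""
          tps = Rbf(ground[:, 0], ground[:, 1], ground[:, 2], function='thin_plate')
          resid = candidates[:, 2] - tps(candidates[:, 0], candidates[:, 1])
          return candidates[resid <= threshold]   # newly accepted ground points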

  9. Quantifying Groundwater Fluctuations in the Southern High Plains with GIS and Geostatistics

    NASA Astrophysics Data System (ADS)

    Whitehead, B.

    2008-12-01

    Groundwater as a dwindling non-renewable natural resource has been an important research theme in agricultural studies coupled with human-environment interaction. This research incorporated contemporary Geographic Information System (GIS) methodologies and a universal kriging interpolator (geostatistics) to develop depth to groundwater surfaces for the southern portion of the High Plains, or Ogallala, aquifer. The variations in the interpolated surfaces were used to calculate the volume of water mined from the aquifer from 1980 to 2005. The findings suggest a nearly inverse relationship to the water withdrawal scenarios derived by the United States Geological Survey (USGS) during the Regional Aquifer System Analysis (RASA) performed in the early 1980's. These results advocate further research into regional climate change, groundwater-surface water interaction, and recharge mechanisms in the region, and provide a substantial contribution to the continuing and contentious issue concerning the environmental sustainability of the High Plains.

  10. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    PubMed

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

    One important method for obtaining continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
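
    The trend-plus-residual structure described above can be sketched as follows. The random-forest learner, the trend_fn parameter, and all names are illustrative assumptions; the paper's exact ensemble is not reproduced here.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def el_sp_predict(coords, k_obs, env_obs, coords_new, env_new, trend_fn):
            # trend_fn: any smooth spatial trend model, e.g. a low-order
            # polynomial in the coordinates (hypothetical helper).
            residual = k_obs - trend_fn(coords)
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            model.fit(env_obs, residual)          # residual from covariates
            return trend_fn(coords_new) + model.predict(env_new)

    Because the residual model uses categorical covariates such as soil and land use types, the predicted surface can change abruptly at their boundaries, which is the behaviour the abstract highlights.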

  11. Development of the Navy’s Next-Generation Nonhydrostatic Modeling System

    DTIC Science & Technology

    2013-09-30

    ...e.g. surface roughness, land-sea mask, surface albedo) are needed by physical parameterizations. The surface values will be read and interpolated... characteristics (e.g. albedo, surface roughness) is now available to the model during the initialization stage. We have added infrastructure to the... six faces (Fig. 3). [Figure 3: Topography (top left, in meters), surface roughness (top right, in meters), albedo (bottom left, no units)]

  12. Multivariate optimum interpolation of surface pressure and winds over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.

    1984-01-01

    The observations of surface pressure are quite sparse over oceanic areas. An effort to improve the analysis of surface pressure over oceans through the development of a multivariate surface analysis scheme which makes use of surface pressure and wind data is discussed. Although the present research used ship winds, future versions of this analysis scheme could utilize winds from additional sources, such as satellite scatterometer data.

  13. Digital elevation modeling via curvature interpolation for lidar data

    USDA-ARS?s Scientific Manuscript database

    Digital elevation model (DEM) is a three-dimensional (3D) representation of a terrain's surface - for a planet (including Earth), moon, or asteroid - created from point cloud data which measure terrain elevation. Its modeling requires surface reconstruction for the scattered data, which is an ill-p...

  14. Kinetics of the hydrogen atom abstraction reactions from 1-butanol by hydroxyl radical: theory matches experiment and more.

    PubMed

    Seal, Prasenjit; Oyedepo, Gbenga; Truhlar, Donald G

    2013-01-17

    In the present work, we study the H atom abstraction reactions by hydroxyl radical at all five sites of 1-butanol. Multistructural variational transition state theory (MS-VTST) was employed to estimate the five thermal rate constants. MS-VTST utilizes a multifaceted dividing surface that accounts for the multiple conformational structures of the transition state, and we also include all the structures of the reactant molecule. The vibrational frequencies and minimum energy paths (MEPs) were computed using the M08-HX/MG3S electronic structure method. The required potential energy surfaces were obtained implicitly by direct dynamics employing interpolated variational transition state theory with mapping (IVTST-M) using a variational reaction path algorithm. The M08-HX/MG3S electronic model chemistry was then used to calculate multistructural torsional anharmonicity factors to complete the MS-VTST rate constant calculations. The results indicate that torsional anharmonicity is very important at higher temperatures, and neglecting it would lead to errors of factors of 26 and 32 at 1000 and 1500 K, respectively. Our results for the sums of the site-specific rate constants agree very well with the experimental values of Hanson and co-workers at 896-1269 K and with the experimental results of Campbell et al. at 292 K, but slightly less well with the experiments of Wallington et al., Nelson et al., and Yujing and Mellouki at 253-372 K; nevertheless, the calculated rates are within a factor of 1.61 of all experimental values at all temperatures. This gives us confidence in the site-specific values, which are currently inaccessible to experiment.

  15. GroPBS: Fast Solver for Implicit Electrostatics of Biomolecules

    PubMed Central

    Bertelshofer, Franziska; Sun, Liping; Greiner, Günther; Böckmann, Rainer A.

    2015-01-01

    Knowledge about the electrostatic potential on the surface of biomolecules or biomembranes under physiological conditions is an important step in the attempt to characterize the physico-chemical properties of these molecules and, in particular, also their interactions with each other. Additionally, knowledge about solution electrostatics may also guide the design of molecules with specified properties. However, explicit water models come at a high computational cost, rendering them unsuitable for large design studies or for docking purposes. Implicit models with the water phase treated as a continuum require the numerical solution of the Poisson–Boltzmann equation (PBE). Here, we present a new flexible program for the numerical solution of the PBE, allowing for different geometries and the explicit and implicit inclusion of membranes. It involves a discretization of space and the computation of the molecular surface. The PBE is discretized using finite differences, and the resulting set of equations is solved with a Gauss–Seidel method. It is shown for the example of the sucrose transporter ScrY that the implicit inclusion of a surrounding membrane also has a strong effect on the electrostatics within the pore region and thus needs to be carefully considered, e.g., in design studies on membrane proteins. PMID:26636074
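
    The finite-difference/Gauss–Seidel strategy can be illustrated on the simpler Poisson problem. This is a minimal sketch of the iteration pattern, not the GroPBS solver; the grid, units, and boundary handling are simplified assumptions.

        import numpy as np

        def gauss_seidel(phi, rho, h, n_iter=500):
            # Solve laplacian(phi) = -rho on a uniform 2-D grid with fixed
            # (Dirichlet) boundary values, updating phi in place.
            for _ in range(n_iter):
                for i in range(1, phi.shape[0] - 1):
                    for j in range(1, phi.shape[1] - 1):
                        phi[i, j] = 0.25 * (phi[i+1, j] + phi[i-1, j] +
                                            phi[i, j+1] + phi[i, j-1] +
                                            h * h * rho[i, j])
            return phi

    Gauss–Seidel reuses freshly updated neighbours within a sweep, which distinguishes it from the Jacobi iteration and typically speeds convergence.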

  16. A method for deriving a 4D-interpolated balanced planning target for mobile tumor radiotherapy.

    PubMed

    Roland, Teboh; Hales, Russell; McNutt, Todd; Wong, John; Simari, Patricio; Tryggestad, Erik

    2012-01-01

    Tumor control and normal tissue toxicity are strongly correlated to the tumor and normal tissue volumes receiving high prescribed dose levels in the course of radiotherapy. Planning target definition is, therefore, crucial to ensure favorable clinical outcomes. This is especially important for stereotactic body radiation therapy of lung cancers, characterized by high fractional doses and steep dose gradients. The shift in recent years from population-based to patient-specific treatment margins, as facilitated by the emergence of 4D medical imaging capabilities, is a major improvement. The commonly used motion-encompassing, or internal-target volume (ITV), target definition approach provides a high likelihood of coverage for the mobile tumor but inevitably exposes healthy tissue to high prescribed dose levels. The goal of this work was to generate an interpolated balanced planning target that takes into account both tumor coverage and normal tissue sparing from high prescribed dose levels, thereby improving on the ITV approach. For each 4DCT dataset, 4D deformable image registration was used to derive two bounding targets, namely, a 4D-intersection and a 4D-composite target which minimized normal tissue exposure to high prescribed dose levels and maximized tumor coverage, respectively. Through definition of an "effective overlap volume histogram" the authors derived an "interpolated balanced planning target" intended to balance normal tissue sparing from prescribed doses with tumor coverage. To demonstrate the dosimetric efficacy of the interpolated balanced planning target, the authors performed 4D treatment planning based on deformable image registration of 4D-CT data for five previously treated lung cancer patients. Two 4D plans were generated per patient, one based on the interpolated balanced planning target and the other based on the conventional ITV target. Plans were compared for tumor coverage and the degree of normal tissue sparing resulting from the new approach was quantified. Analysis of the 4D dose distributions from all five patients showed that while achieving tumor coverage comparable to the ITV approach, the new planning target definition resulted in reductions of lung V(10), V(20), and V(30) of 6.3% ± 1.7%, 10.6% ± 3.9%, and 12.9% ± 5.5%, respectively, as well as reductions in mean lung dose, mean dose to the GTV-ring and mean heart dose of 8.8% ± 2.5%, 7.2% ± 2.5%, and 10.6% ± 3.6%, respectively. The authors have developed a simple and systematic approach to generate a 4D-interpolated balanced planning target volume that implicitly incorporates the dynamics of respiratory-organ motion without requiring 4D-dose computation or optimization. Preliminary results based on 4D-CT data of five previously treated lung patients showed that this new planning target approach may improve normal tissue sparing without sacrificing tumor coverage.

  17. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N{sub 4}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N{sub 4}. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
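
    The one-dimensional starting point the authors review, interpolating moving least squares, is easy to state in code. The sketch below is our illustration, not the paper's six-dimensional method; the weight function and polynomial degree are arbitrary choices.

        import numpy as np

        def imls_eval(x_eval, x_data, y_data, degree=2, eps=1e-3):
            # Each evaluation point gets its own weighted least-squares
            # polynomial fit; weights blow up near data points, so the fit
            # nearly interpolates them.
            V = np.vander(x_data, degree + 1)
            out = np.empty(len(x_eval))
            for k, x0 in enumerate(x_eval):
                w = 1.0 / ((x_data - x0) ** 2 + eps)
                sw = np.sqrt(w)
                coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * y_data,
                                           rcond=None)
                out[k] = np.polyval(coef, x0)
            return out

    Localization, in this picture, means restricting each fit to data points within a cutoff radius; the paper's contribution includes choosing that radius statistically from the local data density.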

  18. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned basis in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
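
    The ill-conditioning that motivates the algorithm is easy to reproduce. The few lines below (our demonstration, with an arbitrary node set) show the condition number of the Gaussian RBF-Direct interpolation matrix exploding as the shape parameter eps shrinks toward the flat limit.

        import numpy as np

        x = np.linspace(0.0, 1.0, 20)
        for eps in (5.0, 1.0, 0.1, 0.01):
            # Gaussian kernel matrix A_ij = exp(-(eps * |x_i - x_j|)^2)
            A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
            print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")

    Methods such as RBF-RA evaluate the interpolant accurately in this regime without ever forming the ill-conditioned system directly.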

  19. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear sky conditions, and it takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. Particular effort is devoted to preventing observations affected by large errors of different kinds (representativeness errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
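
    The Optimal Interpolation update combining the SMARTS background with pyranometer observations has a standard matrix form. The function below is a generic sketch of that update, not ARPA Lombardia's operational code; the covariance matrices are assumed given.

        import numpy as np

        def oi_analysis(xb, yo, H, B, R):
            # xb: background field (e.g. clear-sky radiation on the grid)
            # yo: observations; H: observation operator (grid -> stations)
            # B, R: background and observation error covariance matrices
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
            return xb + K @ (yo - H @ xb)                   # analysis field

    The quality-control step described in the abstract amounts to rejecting components of yo whose innovation yo - H xb is implausibly large before this update is applied.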

  20. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamic solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver and previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting mass flow and current to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.

  1. Structure and Dynamics of Solvated Polymers near a Silica Surface: On the Different Roles Played by Solvent.

    PubMed

    Perrin, Elsa; Schoen, Martin; Coudert, François-Xavier; Boutin, Anne

    2018-04-26

    Whereas it is experimentally known that the inclusion of nanoparticles in hydrogels can lead to mechanical reinforcement, a detailed molecular understanding of the adhesion mechanism is still lacking. Here we use coarse-grained molecular dynamics simulations to investigate the nature of the interface between silica surfaces and solvated polymers. We show how differences in the nature of the polymer and the polymer-solvent interactions can lead to drastically different behavior of the polymer-surface adhesion. Comparing explicit and implicit solvent models, we conclude that this effect cannot be fully described in an implicit solvent. We highlight the crucial role of polymer solvation for the adsorption of the polymer chain on the silica surface, the significant dynamics of polymer chains on the surface, and details of the modifications in the structure of the solvated polymer close to the interface.

  2. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.

  3. Software for C1 interpolation

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1977-01-01

    The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.

  4. Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo

    NASA Astrophysics Data System (ADS)

    Kosempa, M.; Chambers, D. P.

    2016-02-01

    Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry will impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, and then create gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids and compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?

  5. A case study of aerosol data assimilation with the Community Multi-scale Air Quality Model over the contiguous United States using 3D-Var and optimal interpolation methods

    NASA Astrophysics Data System (ADS)

    Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol

    2017-12-01

    This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilation of observations using both GSI and OI generally helps reduce the prediction biases and improves the correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with diameter < 2.5 µm) leads to stronger increments in surface PM2.5 compared to the assimilation of MODIS AOD at the 550 nm wavelength. In contrast, we find a stronger OI impact of the MODIS AOD on surface aerosols at 18:00 UTC compared to the surface PM2.5 OI method. GSI produces smoother results and yields overall better correlation coefficients and root mean squared errors (RMSE). It should be noted that the 3D-Var and OI methods used here differ in several important respects besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases, but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.

  6. Patients with Parkinson's disease learn to control complex systems-an indication for intact implicit cognitive skill learning.

    PubMed

    Witt, Karsten; Daniels, Christine; Daniel, Victoria; Schmitt-Eliassen, Julia; Volkmann, Jens; Deuschl, Günther

    2006-01-01

    Implicit memory and learning mechanisms are composed of multiple processes and systems. Previous studies demonstrated a basal ganglia involvement in purely cognitive tasks that form stimulus response habits by reinforcement learning such as implicit classification learning. We will test the basal ganglia influence on two cognitive implicit tasks previously described by Berry and Broadbent, the sugar production task and the personal interaction task. Furthermore, we will investigate the relationship between certain aspects of an executive dysfunction and implicit learning. To this end, we have tested 22 Parkinsonian patients and 22 age-matched controls on two implicit cognitive tasks, in which participants learned to control a complex system. They interacted with the system by choosing an input value and obtaining an output that was related in a complex manner to the input. The objective was to reach and maintain a specific target value across trials (dynamic system learning). The two tasks followed the same underlying complex rule but had different surface appearances. Subsequently, participants performed an executive test battery including the Stroop test, verbal fluency and the Wisconsin card sorting test (WCST). The results demonstrate intact implicit learning in patients, despite an executive dysfunction in the Parkinsonian group. They lead to the conclusion that the basal ganglia system affected in Parkinson's disease does not contribute to the implicit acquisition of a new cognitive skill. Furthermore, the Parkinsonian patients were able to reach a specific goal in an implicit learning context despite impaired goal directed behaviour in the WCST, a classic test of executive functions. These results demonstrate a functional independence of implicit cognitive skill learning and certain aspects of executive functions.

  7. Cerebellar input configuration toward object model abstraction in manipulation tasks.

    PubMed

    Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo

    2011-08-01

    It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

  8. Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor

    NASA Technical Reports Server (NTRS)

    Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.

    1993-01-01

    A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.

  9. Numerical solution of the full potential equation using a chimera grid approach

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1995-01-01

    A numerical scheme utilizing a chimera zonal grid approach for solving the full potential equation in two spatial dimensions is described. Within each grid zone a fully-implicit approximate factorization scheme is used to advance the solution one iteration. This is followed by the explicit advance of all common zonal grid boundaries using a bilinear interpolation of the velocity potential. The presentation is highlighted with numerical results simulating the flow about a two-dimensional, nonlifting, circular cylinder. For this problem, the flow domain is divided into two parts: an inner portion covered by a polar grid and an outer portion covered by a Cartesian grid. Both incompressible and compressible (transonic) flow solutions are included. Comparisons made with an analytic solution as well as single grid results indicate that the chimera zonal grid approach is a viable technique for solving the full potential equation.
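
    The zonal-boundary update hinges on a plain bilinear interpolation of the velocity potential from the donor grid. The helper below is a generic sketch of that stencil (names and indexing conventions are ours, not the paper's).

        def bilinear(phi, i, j, s, t):
            # phi: donor-grid potential; (i, j): lower-left donor cell index;
            # (s, t) in [0, 1]^2: fractional position within that cell.
            return ((1 - s) * (1 - t) * phi[i, j] +
                    s * (1 - t) * phi[i + 1, j] +
                    (1 - s) * t * phi[i, j + 1] +
                    s * t * phi[i + 1, j + 1])

    Each boundary point of one zone is located inside a cell of the other zone, and this four-point formula supplies its updated potential after the implicit sweep.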

  10. An efficient technique for the numerical solution of the bidomain equations.

    PubMed

    Whiteley, Jonathan P

    2008-08-01

    Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.

  11. Nearly arc-length tool path generation and tool radius compensation algorithm research in FTS turning

    NASA Astrophysics Data System (ADS)

    Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao

    2014-08-01

    In the generation of non-rotationally symmetric microstructured surfaces by turning with a Fast Tool Servo (FTS), non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of by the traditional equal-angle interpolation rule, and adds tool radius compensation. All interpolation points are equidistant in the radial direction because of the constant feed speed of the X slider; the high-frequency tool radius compensation components lie in both the X and Z directions, which makes it difficult for the X slider, with its large mass, to follow the commanded motion. A Newton iterative method is used to calculate the coordinates of the neighboring contour tangent point, taking the X position of the interpolation point as the initial value; in this way the new Z coordinate is obtained and the high-frequency motion component in the X direction is decomposed into the Z direction. As a test, a typical microstructure with a 4 μm PV value, formed by mixing two sine waves of 70 μm wavelength, was turned with a diamond tool of large (80 μm) radius; the maximum profile error at an angle of fifteen degrees is less than 0.01 μm. The sinusoidal grid was machined successfully on an ultra-precision lathe; the measured wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points after filtering out the first five harmonics.

  12. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad trace elimination, can degrade the performance of subsequent multi-channel algorithms such as surface-related multiple elimination (SRME), though some can overcome the irregularity defects. Accurate interpolation to provide the necessary complete data is therefore a pre-requisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy, but are at least half the size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can be easily extended to higher dimensions.
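
    The POCS loop alternates between a sparsity projection in a transform domain and a data-consistency projection. The toy version below substitutes a plain 2-D Fourier threshold for the curvelet/principal-frequency machinery of the paper; the projection structure, not the transform, is the point.

        import numpy as np

        def pocs_interpolate(data, mask, n_iter=50, thresh0=0.5):
            # data: observed gather with zeros at missing traces
            # mask: 1.0 where a trace was observed, 0.0 where missing
            recon = data.copy()
            for k in range(n_iter):
                spec = np.fft.fft2(recon)
                thresh = thresh0 * np.abs(spec).max() * (1 - k / n_iter)
                spec[np.abs(spec) < thresh] = 0.0        # sparsity projection
                recon = np.real(np.fft.ifft2(spec))
                recon = data * mask + recon * (1 - mask) # keep observed samples
            return recon

    The shrinking threshold admits progressively weaker coherent energy while the missing traces are filled in; replacing the FFT pair with a curvelet transform over PF components recovers the flavour of the method described above.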

  13. Distributed snow modeling suitable for use with operational data for the American River watershed.

    NASA Astrophysics Data System (ADS)

    Shamir, E.; Georgakakos, K. P.

    2004-12-01

    The mountainous terrain of the American River watershed (~4300 km2) on the western slope of the northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls the snow accumulation and ablation processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snowpack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi-distributed manner (modeling units of about 100-1000 km2) and therefore lumps distinct spatial variability of snow processes. In this study we attempt to account for the spatial variability of precipitation, temperature, and radiation by constructing a distributed snow accumulation and melting model suitable for use with commonly available sparse data. An adaptation of the NWS-Snow17 energy and mass balance model that is used operationally at the NWS River Forecast Centers is implemented on 1 km2 grid cells with distributed input and model parameters. The input to the model (i.e., precipitation and surface temperature) is interpolated from observed point data. The surface temperature was interpolated over the basin based on adiabatic lapse rates using topographic information, whereas the precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the snow seasons of 1999-2000. Validation of the Snow Water Equivalent (SWE) prediction is done by comparing to observations from 12 snow sensors. The Snow Cover Area (SCA) prediction was evaluated by comparing to remotely sensed 500 m daily snow cover derived from MODIS. The results show that the distribution of snow over the area is well captured and that the quantities compared to the snow gauges are well estimated at high elevations.
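
    The lapse-rate step is the simplest piece to make concrete. A minimal version (ours, with an assumed constant environmental lapse rate) extends a station temperature across the elevation grid:

        def distribute_temperature(t_station, z_station, z_grid, lapse=-6.5e-3):
            # lapse in degC per metre (about -6.5 degC/km); z in metres.
            # Returns temperature on the grid, cooling with elevation gain.
            return t_station + lapse * (z_grid - z_station)

    In practice the lapse rate itself varies with season and weather pattern, so an operational version would estimate it from pairs of stations at different elevations rather than fixing it.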

  14. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads affect the safety of road users, as accident rates increase with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located so that maintenance can be scheduled. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
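
    The morphological and multi-level Otsu stage can be sketched with scikit-image. This is an assumed toolchain (the paper does not name its implementation), and treating the lowest-intensity class as the candidate rough region is our illustrative choice.

        import numpy as np
        from skimage.filters import threshold_multiotsu
        from skimage.morphology import opening, square

        def candidate_roughness(intensity_raster, classes=3):
            # Morphological opening suppresses small bright speckle before
            # multi-level Otsu splits the raster into intensity classes.
            smoothed = opening(intensity_raster, square(3))
            thresholds = threshold_multiotsu(smoothed, classes=classes)
            labels = np.digitize(smoothed, bins=thresholds)
            return labels == 0   # assumed: rough asphalt returns low intensity

    The density and elevation-deviation filters described above would then prune these candidate pixels to the final roughness map.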

  15. Quantifying Libya-4 Surface Reflectance Heterogeneity With WorldView-1, 2 and EO-1 Hyperion

    NASA Technical Reports Server (NTRS)

    Neigh, Christopher S. R.; McCorkel, Joel; Middleton, Elizabeth M.

    2015-01-01

    The land surface imaging (LSI) virtual constellation approach promotes the concept of increasing Earth observations from multiple but disparate satellites. We evaluated this through spectral and spatial domains, by comparing surface reflectance from 30-m Hyperion and 2-m resolution WorldView-2 (WV-2) data in the Libya-4 pseudoinvariant calibration site. We convolved and resampled Hyperion to WV-2 bands using both cubic convolution and nearest neighbor (NN) interpolation. Additionally, WV-2 and WV-1 same-date imagery were processed as a cross-track stereo pair to generate a digital terrain model to evaluate the effects from large (>70 m) linear dunes. Agreement was moderate to low on dune peaks between WV-2 and Hyperion (R2 < 0.4) but higher in areas of lower elevation and slope (R2 > 0.6). Our results provide a satellite sensor intercomparison protocol for an LSI virtual constellation at high spatial resolution, which should start with geolocation of pixels, followed by NN interpolation to avoid tall dunes that enhance surface reflectance differences across this internationally utilized site.

  16. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors have previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provides a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data, without using any commercially provided CAM system. The STL format represents a curved surface geometry with triangular facets. The preprocessor allows machining robots to be controlled through a zigzag or spiral path calculated directly from STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  17. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

    Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they may match monotone or convex data. Most methods of investigating this problem utilize quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to them. Schemes for choosing derivatives were examined. Along the way, fitting the given data points by a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.

  18. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolating airfoil surfaces involve various compromises between smoothing surfaces and fitting them exactly to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.
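
    The third-derivative jump measure named above is straightforward to compute from a cubic-spline fit. The function below is our small illustration using SciPy's CubicSpline, whose piecewise coefficients make the (piecewise-constant) third derivative directly available.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def third_derivative_jump_measure(x, y):
            # cs.c[0] holds the leading coefficient on each interval, so the
            # third derivative there is 6 * cs.c[0]; jumps occur at the knots.
            cs = CubicSpline(x, y)
            d3 = 6.0 * cs.c[0]
            return float(np.sum(np.diff(d3) ** 2))

    Minimizing this quantity within a fitting-error tolerance is the optimization CFACS performs; a spline through noisy coordinates typically shows large jumps that the smoothing removes.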

  19. Facet personality and surface-level diversity as team mental model antecedents: implications for implicit coordination.

    PubMed

    Fisher, David M; Bell, Suzanne T; Dierdorff, Erich C; Belohlav, James A

    2012-07-01

    Team mental models (TMMs) have received much attention as important drivers of effective team processes and performance. Less is known about the factors that give rise to these shared cognitive structures. We examined potential antecedents of TMMs, with a specific focus on team composition variables, including various facets of personality and surface-level diversity. Further, we examined implicit coordination as an important outcome of TMMs. Results suggest that team composition in terms of the cooperation facet of agreeableness and racial diversity were significantly related to team-focused TMM similarity. TMM similarity was also positively predictive of implicit coordination, which mediated the relationship between TMM similarity and team performance. Post hoc analyses revealed a significant interaction between the trust facet of agreeableness and racial diversity in predicting TMM similarity. Results are discussed in terms of facilitating the emergence of TMMs and corresponding implications for team-related human resource practices. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  20. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675

  1. Collection, processing and error analysis of Terrestrial Laser Scanning data from fluvial gravel surfaces

    NASA Astrophysics Data System (ADS)

    Hodge, R.; Brasington, J.; Richards, K.

    2009-04-01

    The ability to collect 3D elevation data at mm-resolution from in-situ natural surfaces, such as fluvial and coastal sediments, rock surfaces, soils and dunes, is beneficial for a range of geomorphological and geological research. From these data the properties of the surface can be measured, and Digital Terrain Models (DTM) can be constructed. Terrestrial Laser Scanning (TLS) can quickly collect such 3D data with mm-precision and mm-spacing. This paper presents a methodology for the collection and processing of such TLS data, and considers how the errors in these data can be quantified. TLS has been used to collect elevation data from fluvial gravel surfaces. Data were collected from areas of approximately 1 m2, with median grain sizes ranging from 18 to 63 mm. Errors are inherent in such data as a result of the precision of the TLS, and the interaction of factors including laser footprint, surface topography, surface reflectivity and scanning geometry. The methodology for the collection and processing of TLS data from complex surfaces like these fluvial sediments aims to minimise the occurrence of, and remove, such errors. The methodology incorporates taking scans from multiple scanner locations, averaging repeat scans, and applying a series of filters to remove erroneous points. Analysis of 2.5D DTMs interpolated from the processed data has identified geomorphic properties of the gravel surfaces, including the distribution of surface elevations, preferential grain orientation and grain imbrication. However, validation of the data and interpolated DTMs is limited by the availability of techniques capable of collecting independent elevation data of comparable quality. Instead, two alternative approaches to data validation are presented. The first consists of careful internal validation to optimise filter parameter values during data processing, combined with a series of laboratory experiments. In the experiments, TLS data were collected from a sphere and planes with different reflectivities to measure the accuracy and precision of TLS data of these geometrically simple objects. Whilst this first approach allows the maximum precision of TLS data from complex surfaces to be estimated, it cannot quantify the distribution of errors within the TLS data and across the interpolated DTMs. The second approach enables this by simulating the collection of TLS data from complex surfaces of a known geometry. This simulated scanning has been verified through systematic comparison with laboratory TLS data. Two types of surface geometry have been investigated: simulated regular arrays of uniform spheres used to analyse the effect of sphere size; and irregular beds of spheres with the same grain size distribution as the fluvial gravels, which provide a comparable complex geometry to the field sediment surfaces. A series of simulated scans of these surfaces has enabled the magnitude and spatial distribution of errors in the interpolated DTMs to be quantified, as well as demonstrating the utility of the different processing stages in removing errors from TLS data. As well as demonstrating the application of simulated scanning as a technique to quantify errors, these results can be used to estimate errors in comparable TLS data.

  2. BOREAS Derived Surface Meteorological Data

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Twine, Tracy; Rinker, Donald; Knapp, David

    2000-01-01

    In 1995, the BOREAS science teams identified the need for a continuous surface meteorological and radiation data set to support flux and surface process modeling efforts. This data set contains actual, substituted, and interpolated 15-minute meteorological and radiation data compiled from several surface measurement sites over the BOREAS SSA and NSA. Temporally, the data cover 01-Jan-1994 to 31-Dec-1996. The data are stored in tabular ASCII files, and are classified as AFM-Staff data.

  3. Use of geostatistics for remediation planning to transcend urban political boundaries.

    PubMed

    Milillo, Tammy M; Sinha, Gaurav; Gardella, Joseph A

    2012-11-01

    Soil remediation plans are often dictated by areas of jurisdiction or property lines instead of scientific information. This study exemplifies how geostatistically interpolated surfaces can substantially improve remediation planning. Ordinary kriging, ordinary co-kriging, and inverse distance weighting spatial interpolation methods were compared for analyzing surface and sub-surface soil sample data originally collected by the US EPA and researchers at the University at Buffalo in Hickory Woods, an industrial-residential neighborhood in Buffalo, NY, where both lead and arsenic contamination is present. Past clean-up efforts estimated contamination levels from point samples, but parcel and agency jurisdiction boundaries were used to define remediation sites, rather than geostatistical models estimating the spatial behavior of the contaminants in the soil. Residents were understandably dissatisfied with the arbitrariness of the remediation plan. In this study we show how geostatistical mapping and participatory assessment can make soil remediation scientifically defensible, socially acceptable, and economically feasible. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. A Thermo-Optic Propagation Modeling Capability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data was collected to validate the structural, thermal, and optical modeling methods.

  5. Fast Shepard interpolation on graphics processing units: potential energy surfaces and dynamics for H + CH4 → H2 + CH3.

    PubMed

    Welsch, Ralph; Manthe, Uwe

    2013-04-28

    A strategy for the fast evaluation of Shepard-interpolated potential energy surfaces (PESs) utilizing graphics processing units (GPUs) is presented. Speed-ups of several orders of magnitude are gained for the title reaction on the ZFWCZ PES [Y. Zhou, B. Fu, C. Wang, M. A. Collins, and D. H. Zhang, J. Chem. Phys. 134, 064323 (2011)]. Thermal rate constants are calculated employing the quantum transition state concept and the multi-layer multi-configurational time-dependent Hartree approach. Results for the ZFWCZ PES are compared to rate constants obtained for other ab initio PESs and problems are discussed. A revised PES is presented. Thermal rate constants obtained for the revised PES indicate that an accurate description of the anharmonicity around the transition state is crucial.

  6. The cancellous bone multiscale morphology-elasticity relationship.

    PubMed

    Agić, Ante; Nikolić, Vasilije; Mijović, Budimir

    2006-06-01

    The effective property relations of cancellous bone are analysed across two aspects of a multiscale description: the properties of a representative volume element on the microscale, and a statistical measure of trabecular trajectory orientation on the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fitting procedure in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.

  7. Multilevel Green's function interpolation method for scattering from composite metallic and dielectric objects.

    PubMed

    Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou

    2008-10-01

    A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).

  8. Spatial Interpolation of Reference Evapotranspiration in India: Comparison of IDW and Kriging Methods

    NASA Astrophysics Data System (ADS)

    Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.

    2017-12-01

    In the present study, to understand the spatial distribution characteristics of ETo over India, spatial interpolation was performed on 32-year (1971-2002) monthly means from 131 India Meteorological Department stations uniformly distributed over the country, using two methods, namely inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better in developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all cases, and hence is recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked whether direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced comparable results against ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally better than direct kriging. Point ETo values were extended to areal ETo values by IDW, and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.
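
    For reference, the IDW estimator compared above fits in a few lines. This is a generic sketch (ours, with an arbitrary power parameter), not the study's GIS workflow.

        import numpy as np

        def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
            # Inverse-distance weights; eps guards against division by zero
            # when a query point coincides with an observation.
            d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :],
                               axis=-1)
            w = 1.0 / (d ** power + eps)
            return (w @ z_obs) / w.sum(axis=1)

    Unlike kriging, IDW needs no variogram fitting, which is one reason it remains attractive when station-wise validation already favours it.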

  9. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    NASA Astrophysics Data System (ADS)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, enhancing appropriate soil quality for plant growth and agricultural productivity, and preventing soil erosion. Soil scientists and decision makers require accurate, well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has worsened conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both the LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data were used for interpolation modelling and the remainder for validation of the predicted results. The relationship between validation points and their corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps generated by the two interpolation methods for soil organic matter, phosphorus, lime and boron were examined based on R2 and RMSE values. The outcomes indicate that RBF performed better than LPI in predicting lime, organic matter and boron, whereas LPI showed better results for predicting phosphorus.
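
    A minimal sketch of RBF interpolation with an 80/20 modelling/validation split of the kind described above, assuming SciPy's RBFInterpolator is available; the sample points are synthetic stand-ins for the field data.

        # Sketch: radial basis function interpolation of a soil property.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 10, size=(50, 2))                # sample locations (km)
        om = 2.0 + 0.3 * xy[:, 0] + rng.normal(0, 0.2, 50)   # synthetic organic matter (%)

        train = slice(0, 40)   # ~80% of samples for modelling
        test = slice(40, 50)   # remainder withheld for validation
        rbf = RBFInterpolator(xy[train], om[train], kernel="thin_plate_spline")
        pred = rbf(xy[test])
        print("validation RMSE:", np.sqrt(np.mean((pred - om[test]) ** 2)))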

  10. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation in weather-hazard monitoring systems, such as for flash floods, is still not optimal. To increase their benefit for meteorological applications, GPS receivers should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate, for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receivers and meteorological sensors installed in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E, latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash-flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, and can in turn be used to predict future floods.
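
    The three accuracy measures used above (MAE, RMSE, R2) reduce to a few lines; the observed/estimated arrays below are illustrative placeholders, not the study's PWV data.

        # Sketch: the accuracy metrics MAE, RMSE and R^2.
        import numpy as np

        def mae(obs, est):  return np.mean(np.abs(est - obs))
        def rmse(obs, est): return np.sqrt(np.mean((est - obs) ** 2))
        def r2(obs, est):
            # Coefficient of determination: 1 - SS_res / SS_tot.
            ss_res = np.sum((obs - est) ** 2)
            ss_tot = np.sum((obs - obs.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        obs = np.array([45.2, 50.1, 48.7, 52.3])   # mm, illustrative GPS PWV
        est = np.array([44.8, 51.0, 48.0, 53.1])   # interpolated values
        print(mae(obs, est), rmse(obs, est), r2(obs, est))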

  11. Estimated Depth to Ground Water and Configuration of the Water Table in the Portland, Oregon Area

    USGS Publications Warehouse

    Snyder, Daniel T.

    2008-01-01

    Reliable information on the configuration of the water table in the Portland metropolitan area is needed to address concerns about various water-resource issues, especially with regard to potential effects from stormwater injection systems such as UIC (underground injection control) systems that are either existing or planned. To help address these concerns, this report presents the estimated depth-to-water and water-table elevation maps for the Portland area, along with estimates of the relative uncertainty of the maps and seasonal water-table fluctuations. The method of analysis used to determine the water-table configuration in the Portland area relied on water-level data from shallow wells and surface-water features that are representative of the water table. However, the largest source of available well data is water-level measurements in reports filed by well constructors at the time of new well installation, but these data frequently were not representative of static water-level conditions. Depth-to-water measurements reported in well-construction records generally were shallower than measurements by the U.S. Geological Survey (USGS) in the same or nearby wells, although many depth-to-water measurements were substantially deeper than USGS measurements. Magnitudes of differences in depth-to-water measurements reported in well records and those measured by the USGS in the same or nearby wells ranged from -119 to 156 feet with a mean of the absolute value of the differences of 36 feet. One possible cause for the differences is that water levels in many wells reported in well records were not at equilibrium at the time of measurement. As a result, the analysis of the water-table configuration relied on water levels measured during the current study or used in previous USGS investigations in the Portland area. Because of the scarcity of well data in some areas, the locations of select surface-water features including major rivers, streams, lakes, wetlands, and springs representative of where the water table is at land surface were used to augment the analysis. Ground-water and surface-water data were combined for use in interpolation of the water-table configuration. Interpolation of the two representations typically used to define water-table position - depth to the water table below land surface and elevation of the water table above a datum - can produce substantially different results and may represent the end members of a spectrum of possible interpolations largely determined by the quantity of recharge and the hydraulic properties of the aquifer. Datasets of depth-to-water and water-table elevation for the current study were interpolated independently based on kriging as the method of interpolation with parameters determined through the use of semivariograms developed individually for each dataset. Resulting interpolations were then combined to create a single, averaged representation of the water-table configuration. Kriging analysis also was used to develop a map of relative uncertainty associated with the values of the water-table position. Accuracy of the depth-to-water and water-table elevation maps is dependent on various factors and assumptions pertaining to the data, the method of interpolation, and the hydrogeologic conditions of the surficial aquifers in the study area. 
Although the water-table configuration maps generally are representative of conditions in the study area, the actual position of the water table may differ from the estimated position at site-specific locations, and short-term, seasonal, and long-term variations in the differences also can be expected. The relative uncertainty map addresses some but not all possible errors associated with the analysis of the water-table configuration and does not depict all sources of uncertainty. Depth to water greater than 300 feet in the Portland area is limited to parts of the Tualatin Mountains, the foothills of the Cascade Range, and muc

  12. An adaptive multi-moment FVM approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Hu, Changhong

    2018-04-01

    In this study, a multi-moment finite volume method (FVM) based on block-structured adaptive Cartesian mesh is proposed for simulating incompressible flows. A conservative interpolation scheme following the idea of the constrained interpolation profile (CIP) method is proposed for the prolongation operation of the newly created mesh. A sharp immersed boundary (IB) method is used to model the immersed rigid body. A moving least squares (MLS) interpolation approach is applied for reconstruction of the velocity field around the solid surface. An efficient method for discretization of Laplacian operators on adaptive meshes is proposed. Numerical simulations on several test cases are carried out for validation of the proposed method. For the case of viscous flow past an impulsively started cylinder (Re = 3000, 9500), the computed surface vorticity coincides with the result of the body-fitted method. For the case of a fast pitching NACA 0015 airfoil at moderate Reynolds numbers (Re = 10000, 45000), the predicted drag coefficient (CD) and lift coefficient (CL) agree well with other numerical or experimental results. For 2D and 3D simulations of viscous flow past a pitching plate with prescribed motions (Re = 5000, 40000), the predicted CD, CL and CM (moment coefficient) are in good agreement with those obtained by other numerical methods.

  13. Surface filling-in and contour interpolation contribute independently to Kanizsa figure formation.

    PubMed

    Chen, Siyi; Glasauer, Stefan; Müller, Hermann J; Conci, Markus

    2018-04-30

    To explore mechanisms of object integration, the present experiments examined how completion of illusory contours and surfaces modulates the sensitivity of localizing a target probe. Observers had to judge whether a briefly presented dot probe was located inside or outside the region demarcated by inducer elements that grouped to form variants of an illusory, Kanizsa-type figure. From the resulting psychometric functions, we determined observers' discrimination thresholds as a sensitivity measure. Experiment 1 showed that sensitivity was systematically modulated by the amount of surface and contour completion afforded by a given configuration. Experiments 2 and 3 presented stimulus variants that induced an (occluded) object without clearly defined bounding contours, which gave rise to a relative sensitivity increase for surface variations on their own. Experiments 4 and 5 were performed to rule out that these performance modulations were simply attributable to variable distances between critical local inducers or to costs in processing an interrupted contour. Collectively, the findings provide evidence for a dissociation between surface and contour processing, supporting a model of object integration in which completion is instantiated by feedforward processing that independently renders surface filling-in and contour interpolation and a feedback loop that integrates these outputs into a complete whole. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. [Spatial pattern of land surface dead combustible fuel load in Huzhong forest area in Great Xing'an Mountains].

    PubMed

    Liu, Zhi-Hua; Chang, Yu; Chen, Hong-Wei; Zhou, Rui; Jing, Guo-Zhi; Zhang, Hong-Xin; Zhang, Chang-Meng

    2008-03-01

    By using geo-statistics and based on a time-lag classification standard, a comparative study was made of the land surface dead combustible fuels in the Huzhong forest area in the Great Xing'an Mountains. The results indicated that the first-level land surface dead combustible fuel, i.e., the 1 h time-lag dead fuel, presented stronger spatial auto-correlation, with an average of 762.35 g x m(-2), contributing 55.54% of the total load. Its determining factors were species composition and stand age. The second- and third-level land surface dead combustible fuels, i.e., the 10 h and 100 h time-lag dead fuels, had a sum of 610.26 g x m(-2), and presented weaker spatial auto-correlation than the 1 h time-lag dead fuel. Their determining factor was the disturbance history of the forest stand. The complexity and heterogeneity of the factors determining the quality and quantity of forest land surface dead combustible fuels were the main reasons for the relatively inaccurate interpolation. However, the utilization of field survey data coupled with geo-statistics could easily and accurately interpolate the spatial pattern of forest land surface dead combustible fuel loads, and indirectly provide a practical basis for forest management.

  15. Multivariate optimum interpolation of surface pressure and surface wind over oceans

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.; Baker, W. E.; Nestler, M. S.

    1984-01-01

    The present multivariate analysis method for surface pressure and winds incorporates ship wind observations into the analysis of surface pressure. For the specific case of 0000 GMT on February 3, 1979, the additional data resulted in a global rms difference of 0.6 mb; individual maxima as large as 5 mb occurred over the North Atlantic and East Pacific Oceans. These differences are noted to be smaller than the analysis increments to the first-guess fields.

  16. Heat waves measured with MODIS land surface temperature data predict changes in avian community structure

    Treesearch

    Thomas P. Albright; Anna M. Pidgeon; Chadwick D. Rittenhouse; Murray K. Clayton; Curtis H. Flather; Patrick D. Culbert; Volker C. Radeloff

    2011-01-01

    Heat waves are expected to become more frequent and severe as climate changes, with unknown consequences for biodiversity. We sought to identify ecologically-relevant broad-scale indicators of heat waves based on MODIS land surface temperature (LST) and interpolated air temperature data and assess their associations with avian community structure. Specifically, we...

  17. Spinal pedicle screw planning using deformable atlas registration

    NASA Astrophysics Data System (ADS)

    Goerres, J.; Uneri, A.; De Silva, T.; Ketcha, M.; Reaungamornrat, S.; Jacobson, M.; Vogt, S.; Kleinszig, G.; Osgood, G.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-04-01

    Spinal screw placement is a challenging task due to small bone corridors and high risk of neurological or vascular complications, benefiting from precision guidance/navigation and quality assurance (QA). Implicit to both guidance and QA is the definition of a surgical plan—i.e. the desired trajectories and device selection for target vertebrae—conventionally requiring time-consuming manual annotations by a skilled surgeon. We propose automation of such planning by deriving the pedicle trajectory and device selection from a patient’s preoperative CT or MRI. An atlas of vertebrae surfaces was created to provide the underlying basis for automatic planning—in this work, comprising 40 exemplary vertebrae at three levels of the spine (T7, T8, and L3). The atlas was enriched with ideal trajectory annotations for 60 pedicles in total. To define trajectories for a given patient, sparse deformation fields from the atlas surfaces to the input (CT or MR image) are applied on the annotated trajectories. Mean value coordinates are used to interpolate dense deformation fields. The pose of a straight trajectory is optimized by image-based registration to an accumulated volume of the deformed annotations. For evaluation, input deformation fields were created using coherent point drift (CPD) to perform a leave-one-out analysis over the atlas surfaces. CPD registration demonstrated surface error of 0.89 ± 0.10 mm (median ± interquartile range) for T7/T8 and 1.29 ± 0.15 mm for L3. At the pedicle center, registered trajectories deviated from the expert reference by 0.56 ± 0.63 mm (T7/T8) and 1.12 ± 0.67 mm (L3). The predicted maximum screw diameter differed by 0.45 ± 0.62 mm (T7/T8) and 1.26 ± 1.19 mm (L3). The automated planning method avoided screw collisions in all cases and demonstrated close agreement overall with expert reference plans, offering a potentially valuable tool in support of surgical guidance and QA.

  18. Spinal pedicle screw planning using deformable atlas registration.

    PubMed

    Goerres, J; Uneri, A; De Silva, T; Ketcha, M; Reaungamornrat, S; Jacobson, M; Vogt, S; Kleinszig, G; Osgood, G; Wolinsky, J-P; Siewerdsen, J H

    2017-04-07

    Spinal screw placement is a challenging task due to small bone corridors and high risk of neurological or vascular complications, benefiting from precision guidance/navigation and quality assurance (QA). Implicit to both guidance and QA is the definition of a surgical plan-i.e. the desired trajectories and device selection for target vertebrae-conventionally requiring time-consuming manual annotations by a skilled surgeon. We propose automation of such planning by deriving the pedicle trajectory and device selection from a patient's preoperative CT or MRI. An atlas of vertebrae surfaces was created to provide the underlying basis for automatic planning-in this work, comprising 40 exemplary vertebrae at three levels of the spine (T7, T8, and L3). The atlas was enriched with ideal trajectory annotations for 60 pedicles in total. To define trajectories for a given patient, sparse deformation fields from the atlas surfaces to the input (CT or MR image) are applied on the annotated trajectories. Mean value coordinates are used to interpolate dense deformation fields. The pose of a straight trajectory is optimized by image-based registration to an accumulated volume of the deformed annotations. For evaluation, input deformation fields were created using coherent point drift (CPD) to perform a leave-one-out analysis over the atlas surfaces. CPD registration demonstrated surface error of 0.89 ± 0.10 mm (median ± interquartile range) for T7/T8 and 1.29 ± 0.15 mm for L3. At the pedicle center, registered trajectories deviated from the expert reference by 0.56 ± 0.63 mm (T7/T8) and 1.12 ± 0.67 mm (L3). The predicted maximum screw diameter differed by 0.45 ± 0.62 mm (T7/T8) and 1.26 ± 1.19 mm (L3). The automated planning method avoided screw collisions in all cases and demonstrated close agreement overall with expert reference plans, offering a potentially valuable tool in support of surgical guidance and QA.

  19. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 × 2 electronically diabatic Hamiltonian matrix, with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at first order, and therefore no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH + H2 → H2O + H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
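
    The core idea, a distance-weighted blend of first-order Taylor expansions, can be sketched in one dimension as follows; the geometries, energies, and gradients are synthetic, and this toy version omits the diabatic-Hamiltonian machinery of MCSI.

        # Sketch: first-order (gradient-only) Shepard interpolation in 1D.
        import numpy as np

        x_data = np.array([0.0, 1.0, 2.5])     # reference geometries
        v_data = np.array([0.0, 0.8, 0.1])     # energies at those points
        g_data = np.array([1.2, -0.4, -0.6])   # gradients at those points

        def shepard_first_order(x, p=4, eps=1e-12):
            # Shepard weights decay with distance; each data point contributes
            # its local first-order Taylor model (no Hessians needed).
            w = 1.0 / (np.abs(x - x_data) + eps) ** p
            taylor = v_data + g_data * (x - x_data)
            return np.sum(w * taylor) / np.sum(w)

        print(shepard_first_order(0.5))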

  20. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are examined first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, from the total error of image self-registration, the symmetrical interpolations provide a certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.

  1. Groundwater Flow Regime of Mecklenburg-Western Pomerania - Geohydraulic Modelling with Detrended Kriging

    NASA Astrophysics Data System (ADS)

    Hilgert, Toralf; Hennig, Heiko

    2017-03-01

    Groundwater heads were mapped for the entire State of Mecklenburg-Western Pomerania by applying a detrended kriging method based on a numerical geohydraulic model. The general groundwater flow system (trend surface) was represented by a two-dimensional horizontal flow model. Thus, deviations of observed groundwater heads from simulated groundwater heads are no longer subject to a regional trend and can be interpolated by means of ordinary kriging. Subsequently, the groundwater heads were obtained as the sum of the simulated trend surface and the interpolated residuals. Furthermore, the described procedure allowed a plausibility check of observed groundwater heads by comparing them to results of the hydraulic model. Where significant deviations were seen, the observation wells could be allocated to different aquifers. The final results are two hydraulically established groundwater head distributions - one for the regional main aquifer and one for the upper aquifer, which may differ locally from the main aquifer.
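
    A bare-bones sketch of the detrended-kriging idea: subtract a trend surface from the observed heads, krige the residuals, and add the two back together. Here a least-squares plane stands in for the 2D flow model, the covariance model is an assumed exponential, and the well data are synthetic.

        # Sketch: detrended kriging = trend surface + kriged residuals.
        import numpy as np

        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 100, size=(30, 2))   # well locations (km)
        heads = 50 - 0.2 * xy[:, 0] + 0.05 * xy[:, 1] + rng.normal(0, 0.5, 30)

        # 1) Trend surface (stand-in for the horizontal flow model).
        A = np.c_[np.ones(30), xy]
        coef, *_ = np.linalg.lstsq(A, heads, rcond=None)
        resid = heads - A @ coef

        # 2) Simple kriging of residuals, covariance C(h) = sigma2 * exp(-h/a).
        sigma2, a = resid.var(), 30.0
        def cov(h): return sigma2 * np.exp(-h / a)
        D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        K = cov(D) + 1e-8 * np.eye(30)   # small nugget for numerical stability

        def head_at(pt):
            k = cov(np.linalg.norm(xy - pt, axis=1))
            weights = np.linalg.solve(K, k)
            trend = np.array([1.0, *pt]) @ coef
            return trend + weights @ resid   # trend plus kriged residual

        print(head_at(np.array([50.0, 50.0])))

    Because the residuals are detrended, the stationarity assumption behind ordinary/simple kriging becomes far more defensible than kriging the raw heads directly.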

  2. Synthesis of generalized surface plasmon beams

    NASA Astrophysics Data System (ADS)

    Martinez-Niconoff, G.; Munoz-Lopez, J.; Martinez-Vara, P.

    2009-08-01

    Surface plasmon modes can be considered the analogue of plane waves for homogeneous media. The extension to partially coherent surface plasmon beams is obtained by means of the incoherent superposition of the interference between surface plasmon modes, whose profile is controlled by associating a probability density function with the structural parameters implicit in their representation. We show computational simulations for cosine, Bessel, Gaussian and dark hollow surface plasmon beams.

  3. Cones in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Hantzsche, W.; Wendt, H.

    1947-01-01

    In the case of cones in axially symmetric flow at supersonic velocity, adiabatic compression takes place between the shock wave and the surface of the cone. Interpolation curves between the shock polars and the surface are therefore necessary for a complete understanding of this type of flow. They are given in the present report by graphical-numerical integration of the differential equation for all cone angles and airspeeds.

  4. Impacts of urban and industrial development on Arctic land surface temperature in Lower Yenisei River Region.

    NASA Astrophysics Data System (ADS)

    Li, Z.; Shiklomanov, N. I.

    2015-12-01

    Urbanization and industrial development have significant impacts on Arctic climate, which in turn controls settlement patterns and socio-economic processes. In this study we analyzed the anthropogenic influences on regional land surface temperature in the Lower Yenisei River Region of the Russian Arctic. The study area covers two consecutive Landsat scenes and includes three major cities: Norilsk, Igarka and Dudingka. The Norilsk industrial region is the largest producer of nickel and palladium in the world, and Igarka and Dudingka are important shipping ports. We constructed a spatio-temporally interpolated temperature model by combining 1 km MODIS LST, field-measured climate data, Modern Era Retrospective-analysis for Research and Applications (MERRA) data, a DEM, Landsat NDVI and Landsat land cover. These spatial data sets have various resolutions and coverage in both time and space. We analyzed their relationships and created a monthly spatio-temporally interpolated surface temperature model at 1 km resolution from 1980 to 2010. The temperature model was then used to examine the characteristic seasonal LST signatures of several representative assemblages of Arctic urban and industrial infrastructure in order to quantify anthropogenic influence on regional surface temperature.

  5. Analysis of rainfall distribution in Kelantan river basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Ros, Faizah; Tosaka, Hiroyuki

    2018-03-01

    Using rain gauges on their own as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Hence, spatial interpolation is the key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to rainfall-runoff processes for distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river. Thus, good knowledge of the rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as the average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of these interpolated datasets was examined using cross-validation assessment.

  6. Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen

    2017-04-01

    Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to be considered if sustainable agricultural and rural development is to be promoted. One example is the lack of information on pesticide drift soil deposition, which can support pesticide risk assessment for soil, surface water, bystanders, and off-target plants and fauna. This is a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data on the drift deposition of a pesticide surrogate, the tracer Uranine, were obtained within one of the highest potato-producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques, i.e. Thiessen polygons, inverse distance squared weighting, co-kriging, pair-copulas and drift curves depending on distance and wind speed, were tested and optimized. Results of the best-performing spatial interpolation methods, suitable curves to assess mean relative drift, and implications for risk assessment studies will be presented.

  7. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  8. Effects of timbre and tempo change on memory for music.

    PubMed

    Halpern, Andrea R; Müllensiefen, Daniel

    2008-09-01

    We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) from the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode. Secondly, timbre and tempo changes both impaired explicit memory, whereas tempo change additionally made implicit tune recognition worse. Results are discussed in the context of implicit memory for nonsemantic materials and the possible differences between timbre and tempo in musical representations.

  9. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can easily be extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ)G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
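
    The large-Courant-number argument above translates into a very short scheme: split c = N + Δc, advance by an exact integer shift of N cells, then apply an ordinary explicit scheme for the fractional remainder. The sketch below uses first-order upwind on a periodic grid with a synthetic profile; it is an illustration of the idea, not the paper's schemes.

        # Sketch: 1D advection at Courant numbers well above 1.
        import numpy as np

        def advect_large_dt(u, c):
            # One step at Courant number c on a periodic grid (u > 0 assumed).
            N = int(np.floor(c))
            dc = c - N                    # 0 <= dc < 1, so upwind stays stable
            u = np.roll(u, N)             # exact shift by N whole cells
            return u - dc * (u - np.roll(u, 1))   # upwind update for remainder

        x = np.linspace(0, 1, 200, endpoint=False)
        u = np.exp(-200 * (x - 0.3) ** 2)   # Gaussian pulse
        u_new = advect_large_dt(u, c=3.4)   # Courant number far above 1
        print(u_new.max())

    The integer shift is exact (amplitude ratio exp(-iNθ)), so the overall stability is that of the fractional-step scheme alone, mirroring the argument in the abstract.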

  10. Numerical solution of transport equation for applications in environmental hydraulics and hydrology

    NASA Astrophysics Data System (ADS)

    Rashidul Islam, M.; Hanif Chaudhry, M.

    1997-04-01

    The advective term in the one-dimensional transport equation, when numerically discretized, produces artificial diffusion. To minimize such artificial diffusion, which vanishes only for a Courant number equal to unity, transport owing to advection has been modeled separately. The numerical solution of the advection equation for a Gaussian initial distribution is well established; however, large oscillations are observed when it is applied to an initial distribution with steep gradients, such as a trapezoidal distribution of a constituent or the propagation of mass from a continuous input. In this study, the application of seven finite-difference schemes and one polynomial interpolation scheme is investigated to solve the transport equation for both Gaussian and non-Gaussian (trapezoidal) initial distributions. The results obtained from the numerical schemes are compared with the exact solutions. A constant advective velocity is assumed throughout the transport process. For a Gaussian initial distribution, all eight schemes give excellent results, except the Lax scheme, which is diffusive. In application to the trapezoidal initial distribution, explicit finite-difference schemes prove to be superior to implicit finite-difference schemes because the latter produce large numerical oscillations near the steep gradients. The Warming-Kutler-Lomax (WKL) explicit scheme is found to be the best of this group. The Hermite polynomial interpolation scheme yields the best result for a trapezoidal distribution among all eight schemes investigated. Second-order accurate schemes are sufficiently accurate for most practical problems, but the solution of unusual problems (concentrations with steep gradients) requires the application of higher-order (e.g. third- and fourth-order) accurate schemes.

  11. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    PubMed

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane, using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.

  12. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to comprehensively understand the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure; the integrals of the equivalent source strengths are then solved by an iterative process and further used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method, where the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by laser Doppler vibrometry is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and can obtain more accurate results than the method based on pressure measurements.

  13. WRF-Fire: coupled weather-wildland fire modeling with the weather research and forecasting model

    Treesearch

    Janice L. Coen; Marques Cameron; John Michalakes; Edward G. Patton; Philip J. Riggan; Kara M. Yedinak

    2012-01-01

    A wildland fire behavior module (WRF-Fire) was integrated into the Weather Research and Forecasting (WRF) public domain numerical weather prediction model. The fire module is a surface fire behavior model that is two-way coupled with the atmospheric model. Near-surface winds from the atmospheric model are interpolated to a finer fire grid and used, with fuel properties...

  14. Elliptic surface grid generation in three-dimensional space

    NASA Technical Reports Server (NTRS)

    Kania, Lee

    1992-01-01

    A methodology for surface grid generation in three dimensional space is described. The method solves a Poisson equation for each coordinate on arbitrary surfaces using successive line over-relaxation. The complete surface curvature terms were discretized and retained within the nonhomogeneous term in order to preserve surface definition; there is no need for conventional surface splines. Control functions were formulated to permit control of grid orthogonality and spacing. A method for interpolation of control functions into the domain was devised which permits their specification not only at the surface boundaries but within the interior as well. An interactive surface generation code which makes use of this methodology is currently under development.

  15. Surface Passivation in Empirical Tight Binding

    NASA Astrophysics Data System (ADS)

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2016-03-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; and 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. The method is applied to a Si quantum well and a Si ultra-thin-body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H-atom passivation overestimates the transistor performance.

  16. A two-dimensional numerical study of the flow inside the combustion chambers of a motored rotary engine

    NASA Technical Reports Server (NTRS)

    Shih, T. I. P.; Yang, S. L.; Schock, H. J.

    1986-01-01

    A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence, which was modified to account for some of the effects of compressibility, streamline curvature, low Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid systems needed to obtain solutions were generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form, illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.
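
    The algebraic grid generation technique mentioned above, transfinite interpolation, blends the four boundary curves of the domain with a bilinear corner correction; the sketch below uses illustrative boundary curves, not the engine geometry.

        # Sketch: 2D transfinite interpolation for algebraic grid generation.
        import numpy as np

        n = 21
        s = np.linspace(0, 1, n)
        xi, eta = np.meshgrid(s, s, indexing="ij")

        # Four boundary curves of a gently curved domain, as (x, y) pairs.
        def bottom(t): return np.stack([t, 0.1 * np.sin(np.pi * t)], axis=-1)
        def top(t):    return np.stack([t, 1.0 + 0.1 * np.sin(np.pi * t)], axis=-1)
        def left(t):   return np.stack([np.zeros_like(t), t], axis=-1)
        def right(t):  return np.stack([np.ones_like(t), t], axis=-1)

        B, T, L, R = bottom(xi), top(xi), left(eta), right(eta)
        c00, c10, c01, c11 = bottom(0.0), bottom(1.0), top(0.0), top(1.0)

        # Blend the edges, then subtract the doubly counted corner terms.
        xi3, eta3 = xi[..., None], eta[..., None]
        grid = ((1 - eta3) * B + eta3 * T + (1 - xi3) * L + xi3 * R
                - ((1 - xi3) * (1 - eta3) * c00 + xi3 * (1 - eta3) * c10
                   + (1 - xi3) * eta3 * c01 + xi3 * eta3 * c11))
        print(grid.shape)   # (n, n, 2): interior plus boundary grid points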

  17. A two-dimensional numerical study of the flow inside the combustion chamber of a motored rotary engine

    NASA Technical Reports Server (NTRS)

    Shih, T. I-P.; Yang, S. L.; Schock, H. J.

    1986-01-01

    A numerical study was performed to investigate the unsteady, multidimensional flow inside the combustion chambers of an idealized, two-dimensional, rotary engine under motored conditions. The numerical study was based on the time-dependent, two-dimensional, density-weighted, ensemble-averaged conservation equations of mass, species, momentum, and total energy valid for two-component ideal gas mixtures. The ensemble-averaged conservation equations were closed by a K-epsilon model of turbulence, which was modified to account for some of the effects of compressibility, streamline curvature, low Reynolds number, and preferential stress dissipation. Numerical solutions to the conservation equations were obtained by the highly efficient implicit-factored method of Beam and Warming. The grid systems needed to obtain solutions were generated by an algebraic grid generation technique based on transfinite interpolation. Results of the numerical study are presented in graphical form, illustrating the flow patterns during intake, compression, gaseous fuel injection, expansion, and exhaust.

  18. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  19. A Modified Kriging Method to Interpolate the Soil Moisture Measured by Wireless Sensor Network with the Aid of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.

    2015-12-01

    In recent years, wireless sensor networks (WSNs) have emerged as a way to collect Earth observation data at relatively low cost and with a light labor load, but their observations are still point data. To learn the spatial distribution of a land surface parameter, interpolating the point data is necessary. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km x 5 km study area located in the middle reaches of the Heihe River, western China. As SM is related to many factors such as topography, soil type, vegetation, etc., even the WSN observation grid is not dense enough to reflect the SM distribution pattern. Our idea is to revise the traditional kriging algorithm by introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. Thus, the new extended-kriging algorithm operates on the combined spatial and spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from an SM map derived from airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate the WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the variation of land SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy; although reasonable accuracy can be achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in the WSN measurements may also need to be controlled in our further work.
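
    The leave-one-out cross-validation used above is straightforward to sketch: each node is withheld in turn and predicted from the rest. IDW stands in here for the extended-kriging interpolator, and the coordinates and soil-moisture values are synthetic.

        # Sketch: leave-one-out cross-validation of a point interpolator.
        import numpy as np

        def idw_predict(xy_train, v_train, pt, power=2.0, eps=1e-12):
            # Stand-in interpolator: inverse-distance-weighted mean.
            w = 1.0 / (np.linalg.norm(xy_train - pt, axis=1) + eps) ** power
            return w @ v_train / w.sum()

        rng = np.random.default_rng(2)
        xy = rng.uniform(0, 5, size=(25, 2))                    # node locations (km)
        sm = 0.15 + 0.02 * xy[:, 0] + rng.normal(0, 0.01, 25)   # volumetric SM

        errors = []
        for i in range(len(sm)):
            mask = np.arange(len(sm)) != i                      # withhold node i
            errors.append(idw_predict(xy[mask], sm[mask], xy[i]) - sm[i])
        print("LOO RMSE:", np.sqrt(np.mean(np.square(errors))))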

  20. Mapping Atmospheric Moisture Climatologies across the Conterminous United States

    PubMed Central

    Daly, Christopher; Smith, Joseph I.; Olson, Keith V.

    2015-01-01

    Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800 m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026

  1. Apparatus and method for measuring the thickness of a coating

    DOEpatents

    Carlson, Nancy M.; Johnson, John A.; Tow, David M.; Walter, John B

    2002-01-01

    An apparatus and method for measuring the thickness of a coating adhered to a substrate. An electromagnetic acoustic transducer is used to induce surface waves into the coating. The surface waves have a selected frequency and a fixed wavelength. Interpolation is used to determine the frequency of surface waves that propagate through the coating with the least attenuation. The phase velocity of the surface waves having this frequency is then calculated. The phase velocity is compared to known phase velocity/thickness tables to determine the thickness of the coating.
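
    One plausible reading of the interpolation step described above, offered as an assumption rather than the patent's actual procedure: fit a parabola to the attenuation measured near its coarse minimum to estimate the frequency of least attenuation, then convert to phase velocity from the fixed wavelength. All numbers are illustrative.

        # Sketch: locate the least-attenuation frequency by parabolic fit.
        import numpy as np

        freqs = np.array([2.0, 2.5, 3.0, 3.5, 4.0])   # MHz, excitation sweep
        atten = np.array([8.1, 5.2, 4.0, 4.6, 7.3])   # dB, measured attenuation

        i = np.argmin(atten)                          # coarse minimum
        a, b, c = np.polyfit(freqs[i-1:i+2], atten[i-1:i+2], 2)
        f_min = -b / (2 * a)                          # vertex of the parabola

        wavelength = 1.0                              # mm, fixed by the transducer
        print("least-attenuation frequency:", f_min, "MHz")
        print("phase velocity:", f_min * wavelength, "mm/us")  # MHz * mm = mm/us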

  2. The ARM Best Estimate Station-based Surface (ARMBESTNS) Data set

    DOE Data Explorer

    Qi,Tang; Xie,Shaocheng

    2015-08-06

    The ARM Best Estimate Station-based Surface (ARMBESTNS) data set merges together key surface measurements from the Southern Great Plains (SGP) sites. It is a twin data product of the ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set. Unlike the 2DGRID data set, the STNS data are reported at the original site locations and show the original information, except for the interpolation over time. Therefore, users have the flexibility to process the data with the approach more suitable for their applications.

  3. Ge growth on vicinal si(001) surfaces: island's shape and pair interaction versus miscut angle.

    PubMed

    Persichetti, L; Sgarlata, A; Fanfoni, M; Balzarotti, A

    2011-10-01

    A complete description of Ge growth on vicinal Si(001) surfaces is provided. The distinctive mechanisms of the epitaxial growth process on vicinal surfaces are clarified, from the very early stages of Ge deposition to the nucleation of 3D islands. By interpolating high-resolution scanning tunneling microscopy measurements with continuum elasticity modeling, we assess the dependence of the islands' shape and elastic interaction on the substrate misorientation. Our results confirm that vicinal surfaces offer an additional degree of control over the shape and symmetry of self-assembled nanostructures.

  4. Using optimal interpolation to assimilate surface measurements and satellite AOD for ozone and PM2.5: A case study for July 2011.

    PubMed

    Tang, Youhua; Chai, Tianfeng; Pan, Li; Lee, Pius; Tong, Daniel; Kim, Hyun-Cheol; Chen, Weiwei

    2015-10-01

    We employed an optimal interpolation (OI) method to assimilate AIRNow ozone/PM2.5 and MODIS (Moderate Resolution Imaging Spectroradiometer) aerosol optical depth (AOD) data into the Community Multiscale Air Quality (CMAQ) model to improve the ozone and total aerosol concentrations in the CMAQ simulation over the contiguous United States (CONUS). AIRNow data assimilation was applied to the boundary layer, and MODIS AOD data were used to adjust the total column aerosol. Four OI cases were designed to examine the effects of the uncertainty settings and the assimilation time; two of these cases used uncertainties that varied in time and location, or "dynamic uncertainties." More frequent assimilation and higher model uncertainties pushed the modeled results closer to the observations. Our comparison over a 24-hr period showed that the ozone and PM2.5 mean biases could be reduced from 2.54 ppbV to 1.06 ppbV and from -7.14 µg/m³ to -0.11 µg/m³, respectively, over CONUS, while their correlations were also improved. Comparison to DISCOVER-AQ 2011 aircraft measurements showed that surface ozone assimilation applied to the CMAQ simulation improves regional low-altitude (below 2 km) ozone simulation. This paper describes an application of the optimal interpolation method to improve the model's ozone and PM2.5 estimates using surface measurements and satellite AOD. It highlights the use of the operational AIRNow data set, which is available in near real time, and of MODIS AOD. With a similar method, other satellite products, such as the latest VIIRS products, can also be used to improve PM2.5 prediction.
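
    A bare-bones sketch of an OI analysis step of the kind described above: the background state is nudged toward observations with weights set by assumed background (B) and observation (R) error covariances. The three-gridpoint state and all covariance values are illustrative, not the study's configuration.

        # Sketch: one optimal-interpolation (OI) analysis update.
        import numpy as np

        x_b = np.array([62.0, 58.0, 55.0])   # background ozone (ppbV), 3 gridpoints
        H = np.array([[1.0, 0.0, 0.0],       # observation operator: stations sit
                      [0.0, 0.0, 1.0]])      # at gridpoints 1 and 3
        y = np.array([65.0, 53.0])           # surface observations (AIRNow-style)

        # Assumed error covariances: correlated background, independent obs.
        B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)
        R = 1.0 * np.eye(2)

        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain: larger B trusts obs more
        x_a = x_b + K @ (y - H @ x_b)                  # analysis state
        print(x_a)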

  5. A new solution-adaptive grid generation method for transonic airfoil flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.; Holst, T. L.

    1981-01-01

    The clustering algorithm is controlled by a second-order ordinary differential equation which uses the airfoil surface density gradient as a forcing function. The solution to this differential equation produces a surface grid distribution which is automatically clustered in regions with large gradients. The interior grid points are established from this surface distribution by using an interpolation scheme which is fast and retains the desirable properties of the original grid generated from the standard elliptic equation approach.

  6. A hybrid incremental projection method for thermal-hydraulics applications

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong

    2016-07-01

    A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  7. A hybrid incremental projection method for thermal-hydraulics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.

    In this paper, a new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya–Babuška–Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie–Chow interpolation or by using a Petrov–Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to popular scalable linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  8. A hybrid incremental projection method for thermal-hydraulics applications

    DOE PAGES

    Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; ...

    2016-07-01

    In this paper, a new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya–Babuška–Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie–Chow interpolation or by using a Petrov–Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to popular scalable linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.

  9. An alcohol message beneath the surface of ER: how implicit memory influences viewers' health attitudes and intentions using entertainment-education.

    PubMed

    Kim, Kyongseok; Lee, Mina; Macias, Wendy

    2014-01-01

    While previous research on entertainment-education has assessed its effectiveness, primarily at the conscious level (e.g., free recall and self-reported change in knowledge), few studies have explored its effect on viewers' implicit knowledge. To fill this gap, this study examined the mechanism through which viewers form implicit memory of short health messages inserted in a primetime TV show and its preconscious effects on viewers' health attitudes and intentions. An experiment was conducted using a 3-group (health message: present vs. absent vs. control), posttest-only design with additional planned analyses of differences by subject variables (past experience and involvement). Overall, findings supported the hypothesized effects of implicit memory of a brief antialcohol message embedded in an ER episode on college students' attitudes and intentions against binge drinking. Results showed that participants who were exposed to the health message reported less positive attitudes toward binge drinking and lower intentions to binge drink, compared with those who were not exposed; the causal relations among viewers' implicit memory, attitudes, and intentions were also validated. Results also showed that individuals' past experience and involvement moderated the effects of the health message on attitudes and intentions. Theoretical explanations and practical implications are discussed.

  10. Implicit Solvation Parameters Derived from Explicit Water Forces in Large-Scale Molecular Dynamics Simulations

    PubMed Central

    2012-01-01

    Implicit solvation is a mean force approach to model solvent forces acting on a solute molecule. It is frequently used in molecular simulations to reduce the computational cost of solvent treatment. In the first instance, the free energy of solvation and the associated solvent–solute forces can be approximated by a function of the solvent-accessible surface area (SASA) of the solute and differentiated by an atom-specific solvation parameter σ_i^SASA. A procedure for the determination of values for the σ_i^SASA parameters through matching of explicit and implicit solvation forces is proposed. Using the results of Molecular Dynamics simulations of 188 topologically diverse protein structures in water and in implicit solvent, values for the σ_i^SASA parameters for atom types i of the standard amino acids in the GROMOS force field have been determined. A simplified representation based on groups of atom types σ_g^SASA was obtained via partitioning of the atom-type σ_i^SASA distributions by dynamic programming. Three groups of atom types with well separated parameter ranges were obtained, and their performance in implicit versus explicit simulations was assessed. The solvent forces are available at http://mathbio.nimr.mrc.ac.uk/wiki/Solvent_Forces. PMID:23180979
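
    The first-instance approximation named above, G_solv ≈ Σ_i σ_i^SASA · SASA_i, is easy to evaluate once per-atom areas are known. A toy evaluation follows; the atom types, σ values, and areas are invented placeholders, not the GROMOS parameters derived in the paper.

        # Toy evaluation of G_solv = sum_i sigma_i * SASA_i (all values assumed).
        sigma = {"C_aliphatic": 0.0050, "N_polar": -0.0090, "O_polar": -0.0090}  # kJ/(mol*A^2)
        sasa = {"C_aliphatic": 240.0, "N_polar": 35.0, "O_polar": 60.0}          # A^2
        g_solv = sum(sigma[t] * sasa[t] for t in sigma)
        print(f"G_solv ~ {g_solv:.3f} kJ/mol")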

  11. Effects of divided attention and speeded responding on implicit and explicit retrieval of artificial grammar knowledge.

    PubMed

    Helman, Shaun; Berry, Dianne C

    2003-07-01

    The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).

  12. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
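
    Of the four techniques, MBR-Interpolation is the simplest to illustrate: between two reported time–geometry pairs, the corners of the minimum bounding rectangles can be blended linearly. The sketch below shows only this idea; the CP, FI, and Areal variants operate on full polygons (clustering, shape signatures, dynamic time warping) and are considerably more involved. Function and variable names are illustrative.

        def interpolate_mbr(mbr0, mbr1, t0, t1, t):
            # Each MBR is (xmin, ymin, xmax, ymax); t0 <= t <= t1.
            w = (t - t0) / (t1 - t0)
            return tuple((1 - w) * a + w * b for a, b in zip(mbr0, mbr1))

        # A solar-event region moving and growing between two reports:
        print(interpolate_mbr((0, 0, 2, 1), (4, 2, 7, 4), t0=0.0, t1=10.0, t=2.5))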

  13. Evolution of Western Mediterranean Sea Surface Temperature between 1985 and 2005: a complementary study in situ, satellite and modelling approaches

    NASA Astrophysics Data System (ADS)

    Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.

    2009-04-01

    In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are: underline the capability of each tool to capture characteristic phenomena, compare and assess the quality of their outputs, and infer an interannual trend from the results. Diva (Data Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous time data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. Heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolation with Empirical Orthogonal Functions, Alvera-Azcárate

  14. New Mass-Conserving Bedrock Topography for Pine Island Glacier Impacts Simulated Decadal Rates of Mass Loss

    NASA Astrophysics Data System (ADS)

    Nias, I. J.; Cornford, S. L.; Payne, A. J.

    2018-04-01

    High-resolution ice flow modeling requires bedrock elevation and ice thickness data, consistent with one another and with modeled physics. Previous studies have shown that gridded ice thickness products that rely on standard interpolation techniques (such as Bedmap2) can be inconsistent with the conservation of mass, given observed velocity, surface elevation change, and surface mass balance, for example, near the grounding line of Pine Island Glacier, West Antarctica. Using the BISICLES ice flow model, we compare results of simulations using both Bedmap2 bedrock and thickness data, and a new interpolation method that respects mass conservation. We find that simulations using the new geometry result in higher sea level contribution than Bedmap2 and reveal decadal-scale trends in the ice stream dynamics. We test the impact of several sliding laws and find that it is at least as important to accurately represent the bedrock and initial ice thickness as the choice of sliding law.

  15. An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1994-01-01

    An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over a RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for 3-element and 4-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the 3-element configuration. For the 4-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.

  16. Estimating Small-area Populations by Age and Sex Using Spatial Interpolation and Statistical Inference Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L

    The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
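
    The areal weighting (AW) baseline mentioned above is the simplest of the three interpolators: a source zone's count is split among target zones in proportion to overlap area, which assumes population is uniform within the source zone. A minimal sketch (zone names and areas invented):

        def areal_weighting(source_count, overlap_areas):
            # Split a zone's count in proportion to area of overlap.
            total = sum(overlap_areas.values())
            return {zone: source_count * a / total
                    for zone, a in overlap_areas.items()}

        # A census tract of 3000 people overlapping three custom areas (km^2):
        print(areal_weighting(3000, {"A": 4.0, "B": 1.0, "C": 5.0}))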

  17. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    NASA Astrophysics Data System (ADS)

    Zhou, Ruhong; Berne, Bruce J.

    2002-10-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
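
    The replica-exchange sampling used here has a compact core: neighbouring temperature replicas periodically attempt to swap configurations with the Metropolis criterion min(1, exp[(β_i − β_j)(E_i − E_j)]). A hedged sketch, with toy energies and temperatures rather than anything from the study:

        import numpy as np

        kB = 0.0019872   # Boltzmann constant, kcal/(mol*K); unit choice assumed

        def swap_accept(E_i, E_j, T_i, T_j, rng):
            # Accept an i<->j configuration swap with the parallel-tempering rule.
            delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
            return rng.random() < min(1.0, np.exp(delta))

        rng = np.random.default_rng(3)
        print(swap_accept(-120.0, -118.5, 282.0, 300.0, rng))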

  18. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    PubMed Central

    Zhou, Ruhong; Berne, Bruce J.

    2002-01-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields. PMID:12242327

  19. Can a continuum solvent model reproduce the free energy landscape of a beta-hairpin folding in water?

    PubMed

    Zhou, Ruhong; Berne, Bruce J

    2002-10-01

    The folding free energy landscape of the C-terminal beta-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the beta-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native beta-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this beta-hairpin. Furthermore, the beta-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and approximately 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.

  20. Assessment of Potential Location of High Arsenic Contamination Using Fuzzy Overlay and Spatial Anisotropy Approach in Iron Mine Surrounding Area

    PubMed Central

    Wirojanagud, Wanpen; Srisatit, Thares

    2014-01-01

    A fuzzy overlay approach on three raster maps, including land slope, soil type, and distance to stream, can be used to identify the most likely locations of high arsenic contamination in soils. Verification of high arsenic contamination was made by collecting samples, analyzing arsenic content, and interpolating the surface with the spatial anisotropic method. A total of 51 soil samples were collected at the potentially contaminated locations identified by the fuzzy overlay approach. At each location, soil samples were taken at a depth of 0.00-1.00 m from the surface ground level. The interpolated surface of the analysed arsenic content using the spatial anisotropic method would verify the potential arsenic contamination locations obtained from the fuzzy overlay outputs. The outputs of the spatial anisotropic surface and the fuzzy overlay mapping conformed spatially to a significant degree. Three contaminated areas with arsenic concentrations of 7.19 ± 2.86, 6.60 ± 3.04, and 4.90 ± 2.67 mg/kg exceeded the arsenic content of 3.9 mg/kg, the maximum concentration level (MCL) for agricultural soils as designated by the Office of National Environment Board of Thailand. It is concluded that fuzzy overlay mapping could be employed for identification of potential contamination areas, with verification by the surface anisotropic approach including intensive sampling and analysis of the substances of interest. PMID:25110751

  1. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
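
    The Data Access step (point location followed by interpolation on the tessellation) can be imitated with standard tools. In the sketch below, scipy's Delaunay.find_simplex plays the role of the walking-triangle search, and barycentric weights give a linear interpolant rather than the natural-neighbor interpolant described above; nodes, values, and the query point are synthetic, and the query is assumed to lie inside the convex hull.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        nodes = rng.uniform(0, 1, size=(50, 2))          # tessellation nodes
        values = np.sin(3 * nodes[:, 0]) + nodes[:, 1]   # "surface" stored per node
        tri = Delaunay(nodes)

        query = np.array([0.4, 0.6])
        s = tri.find_simplex(query)                      # containing-triangle search
        b = tri.transform[s, :2] @ (query - tri.transform[s, 2])
        bary = np.append(b, 1 - b.sum())                 # barycentric coordinates
        print(values[tri.simplices[s]] @ bary)           # linear interpolation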

  2. Suitability of satellite derived and gridded sea surface temperature data sets for calibrating high-resolution marine proxy records

    NASA Astrophysics Data System (ADS)

    Ouellette, G., Jr.; DeLong, K. L.

    2016-02-01

    High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration with SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and Kaplan SST data sets (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) and each uses different methods of interpolation to produce the globally and temporally complete data products except for HadSST, which is not interpolated but quality controlled. This study compares SST for a single site from these gridded data products with a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km) with in situ SST data and coral Sr/Ca variability for our study site in Haiti to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences in the seasonal variability captured for the same site among these data sets on the order of 1-3°C. This analysis suggests that of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST are to be used to calibrate proxy records, collecting modern, living samples is desirable.

  3. Gonioreflectometric properties of metal surfaces

    NASA Astrophysics Data System (ADS)

    Jaanson, P.; Manoocheri, F.; Mäntynen, H.; Gergely, M.; Widlowski, J.-L.; Ikonen, E.

    2014-12-01

    Angularly resolved measurements of scattered light from surfaces can provide useful information in various fields of research and industry, such as computer graphics, satellite based Earth observation etc. In practice, empirical or physics-based models are needed to interpolate the measurement results, because a thorough characterization of the surfaces under all relevant conditions may not be feasible. In this work, plain and anodized metal samples were prepared and measured optically for bidirectional reflectance distribution function (BRDF) and mechanically for surface roughness. Two models for BRDF (Torrance-Sparrow model and a polarimetric BRDF model) were fitted to the measured values. A better fit was obtained for plain metal surfaces than for anodized surfaces.

  4. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  5. Surface reconstruction and deformation monitoring of stratospheric airship based on laser scanning technology

    NASA Astrophysics Data System (ADS)

    Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei

    2018-04-01

    Due to the uncertainty of a stratospheric airship's shape and the safety problems caused by this uncertainty, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. A comparison was then conducted between our subdivision scheme and the original √3-subdivision scheme. The result shows that our subdivision scheme could reduce the shrinkage of the surface and the number of narrow triangles, while keeping sharp features. Surface reconstruction and surface deformation monitoring of the airship could therefore be conducted precisely with our subdivision scheme.

  6. Interpolation Environment of Tensor Mathematics at the Corpuscular Stage of Computational Experiments in Hydromechanics

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia

    2018-02-01

    Stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models for separating the calculations in accordance with physical processes. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space; here, coordination of continuity conditions and energy conservation is carried out. Then, at the subsequent corpuscular stage of the computational experiment, kinematic parameters of mass centers and surface stresses at the boundaries of the grid cells are used in modeling of free unsteady motions of volume cells that are considered as independent particles. These particles can be subject to vortex and discontinuous interactions when restructuring of free boundaries and internal rheological states takes place. Transition from one stage to another is provided by interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws for modeling the mechanics of continuous media, and provides control of the rheological state and conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.

  7. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  8. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
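
    The decomposition idea is easy to see in the simplest case: a 2-D resize done as two independent 1-D passes, rows first and then columns. DMCGI replaces the 1-D interpolator with a registration-based control grid interpolator; plain np.interp is used below only to show the structure (sizes and data are illustrative).

        import numpy as np

        def resize_separable(img, new_h, new_w):
            # Pass 1: interpolate each row to the new width.
            h, w = img.shape
            xs = np.linspace(0, w - 1, new_w)
            rows = np.stack([np.interp(xs, np.arange(w), r) for r in img])
            # Pass 2: interpolate each column to the new height.
            ys = np.linspace(0, h - 1, new_h)
            return np.stack([np.interp(ys, np.arange(h), c)
                             for c in rows.T], axis=1)

        img = np.arange(16.0).reshape(4, 4)
        print(resize_separable(img, 6, 6).shape)   # (6, 6)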

  9. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  10. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    NASA Technical Reports Server (NTRS)

    Allred, Joel

    2012-01-01

    Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.
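
    The temporal side of the interpolation MAGIC performs can be illustrated with a plain linear blend between two magnetograms that bracket a requested time; the real tool also handles reprojection, source-dependent formats, and seam treatment. Frame values and the time scale below are invented.

        import numpy as np

        def temporal_interp(mag0, mag1, t0, t1, t):
            # Linear blend of two bracketing frames at time t (t0 <= t <= t1).
            w = (t - t0) / (t1 - t0)
            return (1 - w) * mag0 + w * mag1

        b0 = np.zeros((4, 4)); b1 = np.full((4, 4), 10.0)        # synthetic frames
        print(temporal_interp(b0, b1, 0.0, 720.0, 180.0)[0, 0])  # 2.5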

  11. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  12. A General Surface Representation Module Designed for Geodesy

    DTIC Science & Technology

    1980-06-01

  13. Application of Geographic Information System Methods to Identify Areas Yielding Water that will be Replaced by Water from the Colorado River in the Vidal and Chemehuevi Areas, California, and the Mohave Mesa Area, Arizona

    USGS Publications Warehouse

    Spangler, Lawrence E.; Angeroth, Cory E.; Walton, Sarah J.

    2008-01-01

    Relations between the elevation of the static water level in wells and the elevation of the accounting surface within the Colorado River aquifer in the vicinity of Vidal, California, the Chemehuevi Indian Reservation, California, and on Mohave Mesa, Arizona, were used to determine which wells outside the flood plain of the Colorado River are presumed to yield water that will be replaced by water from the Colorado River. Wells that have a static water-level elevation equal to or below the elevation of the accounting surface are presumed to yield water that will be replaced by water from the Colorado River. Geographic Information System (GIS) interpolation tools were used to produce maps of areas where water levels are above, below, and near (within ± 0.84 foot) the accounting surface. Calculated water-level elevations and interpolated accounting-surface elevations were determined for 33 wells in the vicinity of Vidal, 16 wells in the Chemehuevi area, and 35 wells on Mohave Mesa. Water-level measurements generally were taken in the last 10 years with steel and electrical tapes accurate to within hundredths of a foot. A Differential Global Positioning System (DGPS) was used to determine land-surface elevations to within an operational accuracy of ± 0.43 foot, resulting in calculated water-level elevations having a 95-percent confidence interval of ± 0.84 foot. In the Vidal area, differences in elevation between the accounting surface and measured water levels range from -2.7 feet below to as much as 17.6 feet above the accounting surface. Relative differences between the elevation of the water level and the elevation of the accounting surface decrease from west to east and from north to south. In the Chemehuevi area, differences in elevation range from -3.7 feet below to as much as 8.7 feet above the accounting surface, which is established at 449.6 feet in the vicinity of Lake Havasu. In all of the Mohave Mesa area, the water-level elevation is near or below the elevation of the accounting surface. Differences in elevation between water levels and the accounting surface range from -0.2 to -11.3 feet, with most values exceeding -7.0 feet. In general, the ArcGIS Triangulated Irregular Network (TIN) Contour and Natural Neighbor tools reasonably represent areas where the elevation of water levels in wells is above, below, and near (within ± 0.84 foot) the elevation of the accounting surface in the Vidal and Chemehuevi study areas and accurately delineate areas around outlying wells and where anomalies exist. The TIN Contour tool provides a strict linear interpolation while the Natural Neighbor tool provides a smoothed interpolation. Using the default options in ArcGIS, the Inverse Distance Weighted (IDW) and Spline tools also reasonably represent areas above, below, and near the accounting surface in the Vidal and Chemehuevi areas. However, spatial extent of and boundaries between areas above, below, and near the accounting surface vary among the GIS methods, which results largely from the fundamentally different mathematical approaches used by these tools. The limited number and spatial distribution of wells in comparison to the size of the areas, and the locations and relative differences in elevation between water levels and the accounting surface of wells with anomalous water levels also influence the contouring by each of these methods.
Qualitatively, the Natural Neighbor tool appears to provide the best representation of the difference between water-level and accounting-surface elevations in the study areas, on the basis of available well data.
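
    Of the tools compared above, inverse distance weighting is the most compact to write down: each well contributes in proportion to 1/d^p (ArcGIS defaults to p = 2). The sketch below uses made-up well coordinates and water-level differences, not the study's data.

        import numpy as np

        def idw(xy, values, query, p=2.0):
            # Inverse-distance-weighted estimate at a query point.
            d = np.linalg.norm(xy - query, axis=1)
            if np.any(d == 0):                  # query coincides with a well
                return values[np.argmin(d)]
            w = 1.0 / d**p
            return np.sum(w * values) / np.sum(w)

        wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        dz = np.array([-2.7, 3.1, 8.7, -0.2])   # water level minus accounting surface, ft
        print(idw(wells, dz, np.array([0.25, 0.5])))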

  14. Algebraic dynamic multilevel method for compositional flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cusini, Matteo; Fryer, Barnaby; van Kruijsdijk, Cor; Hajibeygi, Hadi

    2018-02-01

    This paper presents the algebraic dynamic multilevel method (ADM) for compositional flow in three dimensional heterogeneous porous media in the presence of capillary and gravitational effects. As a significant advancement compared to the ADM for immiscible flows (Cusini et al., 2016) [33], here, mass conservation equations are solved along with k-value based thermodynamic equilibrium equations using a fully-implicit (FIM) coupling strategy. Two different fine-scale compositional formulations are considered: (1) the natural variables and (2) the overall-compositions formulation. At each Newton iteration the fine-scale FIM Jacobian system is mapped to a dynamically defined (in space and time) multilevel nested grid. The appropriate grid resolution is chosen based on the contrast of user-defined fluid properties and on the presence of specific features (e.g., well source terms). Consistent mapping between different resolutions is performed by means of sequences of restriction and prolongation operators. While finite-volume restriction operators are employed to ensure mass conservation at all resolutions, various prolongation operators are considered. In particular, different interpolation strategies can be used for the different primary variables, and multiscale basis functions are chosen as pressure interpolators so that fine-scale heterogeneities are accurately accounted for across different resolutions. Several numerical experiments are conducted to analyse the accuracy, efficiency and robustness of the method for both 2D and 3D domains. Results show that ADM provides accurate solutions by employing only a fraction of the number of grid-cells employed in fine-scale simulations. As such, it presents a promising approach for large-scale simulations of multiphase flow in heterogeneous reservoirs with complex non-linear fluid physics.

  15. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    The failure caused by seepage is the most common one in dike engineering. Seepage in dikes, which are longitudinally extended structures, is random, strongly concealed, and small in its initial magnitude. By means of a distributed fiber temperature sensing system (DTS) with an improved optical fiber layout scheme, the locations of the initial interpolation points of the saturation line are obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltration surface over the full dike cross-section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied to an engineering case, which shows that a real-time seepage monitoring technique for the full dike cross-section is obtained from the combined method.
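
    The interpolation at the core of BLICM is barycentric Lagrange interpolation, which evaluates the unique polynomial through the collocation nodes in a numerically stable way. A minimal 1-D sketch follows (second barycentric formula; the Chebyshev-Lobatto nodes and test function are illustrative, not the dike data).

        import numpy as np

        def bary_weights(x):
            # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k).
            return np.array([1.0 / np.prod(x[j] - np.delete(x, j))
                             for j in range(len(x))])

        def bary_eval(x, y, w, xq):
            d = xq - x
            if np.any(d == 0):                  # query hits a node exactly
                return y[np.argmin(np.abs(d))]
            t = w / d
            return np.sum(t * y) / np.sum(t)

        n = 10
        x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Lobatto nodes
        y = np.exp(x)
        w = bary_weights(x)
        print(bary_eval(x, y, w, 0.3), np.exp(0.3))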

  16. An Unconditionally Stable Fully Conservative Semi-Lagrangian Method (PREPRINT)

    DTIC Science & Technology

    2010-08-07

  17. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
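
    The three interpolator schemes compared above are all available as spline orders in scipy.ndimage, which makes a small round-trip experiment easy to run: shift a synthetic volume by a sub-voxel offset and back, then measure the residual. This only mirrors the spirit of the study; the data, offset, and error metric are illustrative.

        import numpy as np
        from scipy import ndimage

        img = np.random.default_rng(1).random((32, 32, 32))   # synthetic "scan"
        shift = (0.3, -0.7, 0.5)                              # sub-voxel misalignment
        for order, name in [(0, "nearest"), (1, "tri-linear"), (3, "B-spline")]:
            out = ndimage.shift(img, shift, order=order)
            back = ndimage.shift(out, tuple(-s for s in shift), order=order)
            print(f"{name:10s} round-trip error: {np.abs(back - img).mean():.4f}")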

  18. Ablation, Thermal Response, and Chemistry Program for Analysis of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Chen, Yih-Kanq

    2010-01-01

    In previous work, the authors documented the Multicomponent Ablation Thermochemistry (MAT) and Fully Implicit Ablation and Thermal response (FIAT) programs. In this work, key features from MAT and FIAT were combined to create the new Fully Implicit Ablation, Thermal response, and Chemistry (FIATC) program. FIATC is fully compatible with FIAT (version 2.5) but has expanded capabilities to compute the multispecies surface chemistry and ablation rate as part of the surface energy balance. This new methodology eliminates B' tables, provides blown species fractions as a function of time, and enables calculations that would otherwise be impractical (e.g. 4+ dimensional tables) such as pyrolysis and ablation with kinetic rates or unequal diffusion coefficients. Equations and solution procedures are presented, then representative calculations of equilibrium and finite-rate ablation in flight and ground-test environments are discussed.

  19. Structural-Thermal-Optical Program (STOP)

    NASA Technical Reports Server (NTRS)

    Lee, H. P.

    1972-01-01

    A structural thermal optical computer program is developed which uses a finite element approach and applies the Ritz method for solving heat transfer problems. Temperatures are represented at the vertices of each element and the displacements which yield deformations at any point of the heated surface are interpolated through grid points.
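
    The vertex-based interpolation described here is the standard linear finite-element one: a point inside a triangle gets the barycentric blend of the vertex temperatures. A small sketch (geometry and temperatures invented):

        import numpy as np

        def tri_interp(verts, temps, p):
            # Barycentric (linear shape-function) interpolation on a triangle.
            T = np.column_stack((verts[1] - verts[0], verts[2] - verts[0]))
            lam12 = np.linalg.solve(T, p - verts[0])
            lam = np.array([1.0 - lam12.sum(), *lam12])
            return lam @ temps

        verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        temps = np.array([300.0, 320.0, 310.0])                  # K, at the vertices
        print(tri_interp(verts, temps, np.array([0.25, 0.25])))  # 307.5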

  20. Stellar mass and age determinations . I. Grids of stellar models from Z = 0.006 to 0.04 and M = 0.5 to 3.5 M⊙

    NASA Astrophysics Data System (ADS)

    Mowlavi, N.; Eggenberger, P.; Meynet, G.; Ekström, S.; Georgy, C.; Maeder, A.; Charbonnel, C.; Eyer, L.

    2012-05-01

    Aims: We present dense grids of stellar models suitable for comparison with observable quantities measured with great precision, such as those derived from binary systems or planet-hosting stars. Methods: We computed new Geneva models without rotation at metallicities Z = 0.006, 0.01, 0.014, 0.02, 0.03, and 0.04 (i.e. [Fe/H] from -0.33 to +0.54) and with mass in small steps from 0.5 to 3.5 M⊙. Great care was taken in the procedure for interpolating between tracks in order to compute isochrones. Results: Several properties of our grids are presented as a function of stellar mass and metallicity. Those include surface properties in the Hertzsprung-Russell diagram, internal properties including mean stellar density, sizes of the convective cores, and global asteroseismic properties. Conclusions: We checked our interpolation procedure and compared interpolated tracks with computed tracks. The deviations are less than 1% in radius and effective temperatures for most of the cases considered. We also checked that the present isochrones provide good fits to four pairs of observed detached binaries and to the observed sequences of the open clusters NGC 3532 and M 67. Including atomic diffusion in our models with M < 1.1 M⊙ leads to variations in the surface abundances that should be taken into account when comparing with observational data of stars with measured metallicities. For that purpose, iso-Zsurf lines are computed. These can be requested for download from a dedicated web page, together with tracks at masses and metallicities within the limits covered by the grids. The validity of the relations linking Z and [Fe/H] is also re-assessed in light of the surface abundance variations in low-mass stars. Table D.1 for the basic tracks is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/541/A41, and on our web site http://obswww.unige.ch/Recherche/evol/-Database-. Tables for interpolated tracks, iso-Zsurf lines and isochrones can be computed, on demand, from our web site. Appendices are available in electronic form at http://www.aanda.org

  1. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
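    A minimal sketch of this pipeline in Python follows. SciPy does not ship SYMMLQ, so MINRES (another Krylov solver for symmetric, possibly indefinite systems) stands in for it here; the exponential covariance model, the Wendland-type taper, and all parameter values are illustrative assumptions rather than the authors' choices.

        import numpy as np
        from scipy.sparse import csr_matrix, bmat
        from scipy.sparse.linalg import minres

        def exp_cov(h, sill=1.0, rng=1.0):
            # Illustrative exponential covariance model.
            return sill * np.exp(-h / rng)

        def wendland_taper(h, theta):
            # Compactly supported taper: exactly zero beyond range theta,
            # so the tapered covariance matrix is sparse.
            t = np.maximum(1.0 - h / theta, 0.0)
            return t**4 * (4.0 * h / theta + 1.0)

        def tapered_ok_weights(pts, query, theta=0.3):
            n = len(pts)
            # Dense pairwise distances for brevity; a k-d tree would avoid
            # the O(n^2) cost for genuinely large n.
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
            A = csr_matrix(exp_cov(d) * wendland_taper(d, theta))
            ones = csr_matrix(np.ones((n, 1)))
            # Ordinary kriging system: symmetric but indefinite because of
            # the unbiasedness constraint row, hence a SYMMLQ/MINRES solver.
            K = bmat([[A, ones], [ones.T, None]], format='csr')
            d0 = np.linalg.norm(pts - query, axis=1)
            rhs = np.append(exp_cov(d0) * wendland_taper(d0, theta), 1.0)
            sol, info = minres(K, rhs)
            return sol[:-1]  # kriging weights; sol[-1] is the Lagrange multiplier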

  2. Multidisciplinary Thermal Analysis of Hot Aerospace Structures

    DTIC Science & Technology

    2010-05-02

    Seidel iteration. Such a strategy simplifies explicit/implicit treatment, subcycling, load balancing, software modularity, and replacements as better... Stefan-Boltzmann constant, E is the emissivity of the surface, f is the form factor from the surface to the reference surface, Br is the temperature of... Stokes equations using Gauss-Seidel line relaxation, Computers and Fluids, 17, pp. 135-150, 1989. [22] Hung C.M. and MacCormack R.W., Numerical

  3. The algorithmic details of polynomials application in the problems of heat and mass transfer control on the hypersonic aircraft permeable surfaces

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

    Mathematical modeling problems for the effective control of heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The constructive and gasdynamic restrictions on the control (the blowing) are analyzed for porous and perforated surfaces. Classes of functions that allow the controls to be realized while respecting the arising types of restrictions are suggested. Estimates of the computational complexity of applying W. G. Horner's scheme to the C. Hermite interpolation polynomial are given.
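    As a side note on the complexity estimates mentioned in the abstract, Horner's scheme evaluates a degree-n polynomial with n multiplications and n additions; a minimal sketch, with arbitrary illustrative coefficients:

        def horner(coeffs, x):
            # Coefficients from the highest-degree term down, so that
            # p(x) = (...((c0*x + c1)*x + c2)*x + ...) + cn.
            result = 0.0
            for c in coeffs:
                result = result * x + c
            return result

        # p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3:
        print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # -> 5.0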

  4. Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm

    NASA Astrophysics Data System (ADS)

    Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko

    2013-04-01

    Real-time forecasting of river flow is an essential tool in operational water management. Such real-time modelling systems require well-calibrated models which can make use of spatially distributed rainfall observations. Weather radars provide spatial data; however, since radar measurements are sensitive to a large range of error sources, a discrepancy can often be observed between radar observations and ground-based measurements, which are mostly considered as ground truth. Through merging ground observations with the radar product, often referred to as data merging, one may force the radar observations to better correspond to the ground-based measurements, without losing the spatial information. In this paper, radar images and ground-based measurements of rainfall are merged based on interpolated gauge-adjustment factors (Moore et al., 1998; Cole and Moore, 2008) or scaling factors. Scaling factors C(xα) are calculated at each position xα where a gauge measurement Ig(xα) is available:
    C(xα) = (Ig(xα) + ε) / (Ir(xα) + ε)    (1)
    where Ir(xα) is the radar-based observation in the pixel overlapping the rain gauge and ε is a constant ensuring the scaling factor can be calculated when Ir(xα) is zero. These scaling factors are interpolated on the radar grid, resulting in a unique scaling factor for each pixel. Multiquadric surface fitting is used as the interpolation algorithm (Hardy, 1971):
    C*(x0) = a^T v + a0    (2)
    where C*(x0) is the prediction at location x0, the vector a (N x 1, with N the number of ground-based measurements used) and the constant a0 are parameters describing the surface, and v is an N x 1 vector containing the (Euclidean) distance between each point xα used in the interpolation and the point x0. The parameters describing the surface are derived by forcing the surface to be an exact interpolator and imposing that the sum of the parameters in a be zero. Often, however, the surface is instead allowed to pass near the observations (i.e., the observed scaling factors C(xα)) at a distance controlled by an offset parameter K, which results in slightly different equations for a and a0. The described technique is currently used by the Flemish Environmental Agency in an online forecasting system of river discharges within Flanders (Belgium). However, rescaling the radar data using the described algorithm does not always give rise to an improved weather radar product. One of the main reasons is probably that the parameters K and ε are implemented as constants; it can be expected that, among other things, different parameter values should be used depending on the characteristics of the rainfall. Adaptation of the parameter values is achieved by an online calibration of K and ε at each time step (every 15 minutes), using validated rain gauge measurements as ground truth. Results demonstrate that rescaling radar images using optimized values of K and ε at each time step leads to a significant improvement of the rainfall estimation, which in turn will result in higher-quality discharge predictions. Moreover, it is shown that calibrated values of K and ε can be obtained in near-real time. References: Cole, S. J., and Moore, R. J. (2008). Hydrological modelling using raingauge- and radar-based estimators of areal rainfall. Journal of Hydrology, 358(3-4), 159-181. Hardy, R. L. (1971). Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8), 1905-1915. Moore, R. J., Watson, B. C., Jones, D. A. and Black, K. B. (1989). London weather radar local calibration study. Technical report, Institute of Hydrology.
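    A minimal sketch of the surface fitting of Eq. (2), assuming the distance kernel described above and the bordered linear system implied by the exactness and sum-to-zero conditions; treating the offset parameter K as a diagonal term that relaxes exact interpolation is an assumption for illustration (K = 0 reproduces the observations exactly).

        import numpy as np

        def fit_scaling_surface(stations, c_obs, K=0.0):
            # stations: (N, 2) gauge coordinates; c_obs: (N,) observed
            # scaling factors. The last row/column enforces sum(a) = 0.
            N = len(stations)
            D = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=2)
            A = np.zeros((N + 1, N + 1))
            A[:N, :N] = D + K * np.eye(N)
            A[:N, N] = 1.0
            A[N, :N] = 1.0
            coeffs = np.linalg.solve(A, np.append(c_obs, 0.0))
            return coeffs[:N], coeffs[N]  # a, a0

        def evaluate_surface(a, a0, stations, grid_xy):
            # C*(x0) = a^T v + a0 with v the distances to the stations.
            v = np.linalg.norm(grid_xy[:, None, :] - stations[None, :, :], axis=2)
            return v @ a + a0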

  5. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions and the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation.

  6. Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.

    1980-01-01

    Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their applicability to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasi-time relaxation. This scheme is referred to as quasiparabolic, although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linearized block implicit time relaxation can be incorporated.

  7. A gradient enhanced plasticity-damage microplane model for concrete

    NASA Astrophysics Data System (ADS)

    Zreid, Imadeddin; Kaliske, Michael

    2018-03-01

    Computational modeling of concrete poses two main types of challenges. The first is the mathematical description of local response for such a heterogeneous material under all stress states, and the second is the stability and efficiency of the numerical implementation in finite element codes. The paper at hand presents a comprehensive approach addressing both issues. Adopting the microplane theory, a combined plasticity-damage model is formulated and regularized by an implicit gradient enhancement. The plasticity part introduces a new microplane smooth 3-surface cap yield function, which provides a stable numerical solution within an implicit finite element algorithm. The damage part utilizes a split, which can describe the transition of loading between tension and compression. Regularization of the model by the implicit gradient approach eliminates the mesh sensitivity and numerical instabilities. Identification methods for model parameters are proposed and several numerical examples of plain and reinforced concrete are carried out for illustration.

  8. Fast viscosity solutions for shape from shading under a more realistic imaging model

    NASA Astrophysics Data System (ADS)

    Wang, Guohui; Han, Jiuqiang; Jia, Honghai; Zhang, Xinman

    2009-11-01

    Shape from shading (SFS) has been a classical and important problem in the domain of computer vision. The goal of SFS is to reconstruct the 3-D shape of an object from its 2-D intensity image. To this end, an image irradiance equation describing the relation between the shape of a surface and its corresponding brightness variations is used, and from it an explicit partial differential equation (PDE) is derived. Using the nonlinear programming principle, we propose a detailed solution to Prados and Faugeras's implicit scheme for approximating the viscosity solution of the resulting PDE. Furthermore, by combining implicit and semi-implicit schemes, a new approximation scheme is presented. In order to accelerate convergence, we apply the Gauss-Seidel idea and an alternating sweeping strategy to the approximation schemes. Experiments on both synthetic and real images demonstrate that the proposed methods are fast and accurate.
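    The Gauss-Seidel-with-alternating-sweeps idea the authors adopt is easiest to illustrate on the simplest member of this PDE family, the eikonal equation |grad u| = f; the sketch below is standard fast sweeping with four sweep orderings, not the paper's SFS scheme, and the single source point is an illustrative boundary condition.

        import numpy as np

        def fast_sweep_eikonal(f, h, n_iter=4):
            # Gauss-Seidel updates in four alternating sweep orderings;
            # each ordering propagates information along one family of
            # characteristics, so a few passes suffice.
            ny, nx = f.shape
            u = np.full((ny, nx), 1e10)
            u[ny // 2, nx // 2] = 0.0  # illustrative source point
            sweeps = [(range(ny), range(nx)),
                      (range(ny), range(nx - 1, -1, -1)),
                      (range(ny - 1, -1, -1), range(nx)),
                      (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
            for _ in range(n_iter):
                for rows, cols in sweeps:
                    for i in rows:
                        for j in cols:
                            a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
                            b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
                            fh = f[i, j] * h
                            if abs(a - b) >= fh:  # update reached from one side
                                new = min(a, b) + fh
                            else:                 # two-sided quadratic update
                                new = 0.5 * (a + b + np.sqrt(2 * fh**2 - (a - b)**2))
                            u[i, j] = min(u[i, j], new)
            return u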

  9. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
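    The dynamic program at the heart of the method is standard Viterbi decoding over the trellis of interpolation functions; a generic sketch follows, in which the prior, transition, and likelihood terms are placeholders to be supplied by the paper's parameter-free probabilistic model.

        import numpy as np

        def viterbi(log_prior, log_trans, log_like):
            # log_prior: (S,) for the S interpolation functions at the first
            # missing pixel; log_trans: (S, S) between consecutive pixels;
            # log_like: (T, S) per-pixel evidence for each function.
            T, S = log_like.shape
            score = log_prior + log_like[0]
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                cand = score[:, None] + log_trans  # (from, to)
                back[t] = cand.argmax(axis=0)
                score = cand.max(axis=0) + log_like[t]
            path = [int(score.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t][path[-1]]))
            return path[::-1]  # optimal interpolation function per pixel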

  10. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

    The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kbTec-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, the correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and the correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217

  11. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

    The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kbTec-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  12. Monitoring global snow cover

    NASA Technical Reports Server (NTRS)

    Armstrong, Richard; Hardman, Molly

    1991-01-01

    A snow model that supports the daily, operational analysis of global snow depth and age has been developed. It provides improved spatial interpolation of surface reports by incorporating digital elevation data, and by the application of regionalized variables (kriging) through the use of a global snow depth climatology. Where surface observations are inadequate, the model applies satellite remote sensing. Techniques for extrapolation into data-void mountain areas and a procedure to compute snow melt are also contained in the model.

  13. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

    NASA Technical Reports Server (NTRS)

    Allen, P. A.; Wells, D. N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
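    In spirit, using such tables amounts to multilinear interpolation over the four parameter axes; the sketch below shows the idea with SciPy's RegularGridInterpolator, where the axis spacings and table values are placeholders rather than the published solutions.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical axes spanning the abstract's parameter ranges.
        a_c = np.linspace(0.2, 1.0, 5)        # crack shape a/c
        a_B = np.linspace(0.2, 0.8, 4)        # crack depth a/B
        E_ys = np.array([100.0, 300.0, 1000.0])
        n = np.array([3.0, 6.0, 10.0, 20.0])
        J_table = np.random.rand(5, 4, 3, 4)  # placeholder, not real data

        interp_J = RegularGridInterpolator((a_c, a_B, E_ys, n), J_table)
        print(interp_J([[0.5, 0.4, 500.0, 8.0]]))  # J estimate off-grid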

  14. Free energy landscape of protein folding in water: explicit vs. implicit solvent.

    PubMed

    Zhou, Ruhong

    2003-11-01

    The Generalized Born (GB) continuum solvent model is arguably the most widely used implicit solvent model in protein folding and protein structure prediction simulations; however, how well the model behaves in these large-scale simulations remains an open question. The current study uses the beta-hairpin from the C-terminus of protein G as an example to explore the folding free energy landscape with various GB models, and the results are compared to explicit solvent simulations and experiments. All free energy landscapes are obtained from extensive conformation space sampling with a highly parallel replica exchange method. Because solvation model parameters are strongly coupled with force fields, five different force field/solvation model combinations are examined and compared in this study, namely the explicit solvent model OPLSAA/SPC and the implicit solvent models OPLSAA/SGB (Surface GB), AMBER94/GBSA (GB with Solvent Accessible Surface Area), AMBER96/GBSA, and AMBER99/GBSA. Surprisingly, we find that the free energy landscapes from implicit solvent models are quite different from that of the explicit solvent model. Except for AMBER96/GBSA, all implicit solvent models find that the lowest free energy state is not the native state. All implicit solvent models show erroneous salt-bridge effects between charged residues, particularly the OPLSAA/SGB model, where the overly strong salt-bridge effect results in an overweighting of a non-native structure with one hydrophobic residue, F52, expelled from the hydrophobic core in order to make better salt bridges. On the other hand, both the AMBER94/GBSA and AMBER99/GBSA models turn the beta-hairpin into an alpha-helix, and the alpha-helical content is much higher than in the previously reported explicit solvent simulation with AMBER94 (AMBER94/TIP3P). Only AMBER96/GBSA shows a reasonable free energy landscape, whose lowest free energy structure is the native one, despite an erroneous salt bridge between D47 and K50. Detailed results on free energy contour maps, lowest free energy structures, distribution of native contacts, alpha-helical content during the folding process, NOE comparison with NMR, and temperature dependences are reported and discussed for all five models. Copyright 2003 Wiley-Liss, Inc.

  15. Investigations of α-helix↔β-sheet transition pathways in a miniprotein using the finite-temperature string method

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin

    2014-01-01

    A parallel implementation of the finite-temperature string method is described, which takes into account the invariance of coordinates with respect to rigid-body motions. The method is applied to the complex α-helix↔β-sheet transition in a β-hairpin miniprotein in implicit solvent, which exhibits much of the complexity of conformational changes in proteins. Two transition paths are considered, one derived from a linear interpolant between the endpoint structures and the other derived from a targeted dynamics simulation. Two methods for computing the conformational free energy (FE) along the string are compared, a restrained method, and a tessellation method introduced by E. Vanden-Eijnden and M. Venturoli [J. Chem. Phys. 130, 194103 (2009)]. It is found that obtaining meaningful free energy profiles using the present atom-based coordinates requires restricting sampling to a vicinity of the converged path, where the hyperplanar approximation to the isocommittor surface is sufficiently accurate. This sampling restriction can be easily achieved using restraints or constraints. The endpoint FE differences computed from the FE profiles are validated by comparison with previous calculations using a path-independent confinement method. The FE profiles are decomposed into the enthalpic and entropic contributions, and it is shown that the entropy difference contribution can be as large as 10 kcal/mol for intermediate regions along the path, compared to 15–20 kcal/mol for the enthalpy contribution. This result demonstrates that enthalpic barriers for transitions are offset by entropic contributions arising from the existence of different paths across a barrier. The possibility of using systematically coarse-grained representations of amino acids, in the spirit of multiple interaction site residue models, is proposed as a means to avoid ad hoc sampling restrictions to narrow transition tubes. PMID:24811667

  16. Scientific or rule-of-thumb techniques of ground-water management--Which will prevail?

    USGS Publications Warehouse

    McGuinness, Charles Lee

    1969-01-01

    Emphasis in ground-water development, once directed largely to quantitatively minor (but sociologically vital) service of human and stock needs, is shifting: aquifers are treated as possible regulating reservoirs managed conjunctively with surface water. Too, emphasis on reducing stream pollution is stimulating interest in aquifers as possible waste-storage media. Such management of aquifers requires vast amounts of data plus a much better understanding of aquifer-system behavior than now exists. Implicit in this deficiency of knowledge is a need for much new research, lest aquifers be managed according to ineffective rule-of-thumb standards, or even abandoned as unmanageable. The geohydrologist's task is to define both internal and boundary characteristics of aquifer systems. Stratigraphy is a primary determinant of these characteristics, but stratigraphically minor features may make aquifers transcend stratigraphic boundaries. For example, a structurally insignificant fracture may carry more water than a major fault; a minor stratigraphic discontinuity may be a major hydrologic boundary. Hence, there is a need for ways of defining aquifer boundaries and quantifying aquifer and confining-bed characteristics that are very different from ordinary stratigraphic techniques. Among critical needs are techniques for measuring crossbed permeability; for extrapolating and interpolating point data on direction and magnitude of permeability in defining aquifer geometry; and for accurately measuring geochemical properties of water and aquifer material, and interpreting those measurements in terms of source of water, rate of movement, and waste-sorbing capacities of aquifers and of confining beds--in general, techniques adequate for predicting aquifer response to imposed forces whether static, hydraulic, thermal, or chemical. Only when such predictions can be made routinely can aquifer characteristics be inserted into a master model that incorporates both the hydrologic and the socioeconomic facts necessary to intelligent social actions involving water.

  17. Investigations of α-helix↔β-sheet transition pathways in a miniprotein using the finite-temperature string method

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Victor; Karplus, Martin

    2014-05-01

    A parallel implementation of the finite-temperature string method is described, which takes into account the invariance of coordinates with respect to rigid-body motions. The method is applied to the complex α-helix↔β-sheet transition in a β-hairpin miniprotein in implicit solvent, which exhibits much of the complexity of conformational changes in proteins. Two transition paths are considered, one derived from a linear interpolant between the endpoint structures and the other derived from a targeted dynamics simulation. Two methods for computing the conformational free energy (FE) along the string are compared, a restrained method, and a tessellation method introduced by E. Vanden-Eijnden and M. Venturoli [J. Chem. Phys. 130, 194103 (2009)]. It is found that obtaining meaningful free energy profiles using the present atom-based coordinates requires restricting sampling to a vicinity of the converged path, where the hyperplanar approximation to the isocommittor surface is sufficiently accurate. This sampling restriction can be easily achieved using restraints or constraints. The endpoint FE differences computed from the FE profiles are validated by comparison with previous calculations using a path-independent confinement method. The FE profiles are decomposed into the enthalpic and entropic contributions, and it is shown that the entropy difference contribution can be as large as 10 kcal/mol for intermediate regions along the path, compared to 15-20 kcal/mol for the enthalpy contribution. This result demonstrates that enthalpic barriers for transitions are offset by entropic contributions arising from the existence of different paths across a barrier. The possibility of using systematically coarse-grained representations of amino acids, in the spirit of multiple interaction site residue models, is proposed as a means to avoid ad hoc sampling restrictions to narrow transition tubes.

  18. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

    As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and it preserves monotonicity in 1-d, it has some known problems.

  19. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimize distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) prefer cubic interpolation over linear (slight effect); and (3) the difference between interpolation degrees of 2, 4, and 8 times is nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
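    The first recommendation, filter before interpolating, is straightforward to express with scipy.ndimage; in the sketch below the Gaussian low-pass width is an arbitrary stand-in for the paper's filter, and order=1 versus order=3 selects bilinear versus bicubic-spline resampling.

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def upsample_pressure(pmap, factor=4, order=3, sigma=1.0):
            # Low-pass first, then interpolate to `factor` times the
            # sampling density, per the recommendation above.
            smoothed = gaussian_filter(pmap, sigma=sigma)
            return zoom(smoothed, factor, order=order)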

  20. An unconditionally stable Runge-Kutta method for unsteady flows

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Chima, Rodrick V.

    1988-01-01

    A quasi-three-dimensional analysis was developed for unsteady rotor-stator interaction in turbomachinery. The analysis solves the unsteady Euler or thin-layer Navier-Stokes equations in a body-fitted coordinate system. It accounts for the effects of rotation, radius change, and stream surface thickness. The Baldwin-Lomax eddy viscosity model is used for turbulent flows. The equations are integrated in time using a four-stage Runge-Kutta scheme with a constant time step. Implicit residual smoothing was employed to accelerate the solution of the time-accurate computations. The scheme is described and accuracy analyses are given. Results are shown for a supersonic through-flow fan designed for NASA Lewis. The rotor:stator blade ratio was taken as 1:1. Results are also shown for the first stage of the Space Shuttle Main Engine high pressure fuel turbopump. Here the blade ratio is 2:3. Implicit residual smoothing was used to increase the time step limit of the unsmoothed scheme by a factor of six with negligible differences in the unsteady results. It is felt that the implicitly smoothed Runge-Kutta scheme is easily competitive with implicit schemes for unsteady flows while retaining the simplicity of an explicit scheme.
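    Implicit residual smoothing of the kind described replaces each residual with the solution of a tridiagonal system along a grid line, damping high-frequency components so a larger stable time step can be used; a one-dimensional sketch, with an arbitrarily chosen smoothing coefficient:

        import numpy as np
        from scipy.linalg import solve_banded

        def smooth_residuals(R, eps=0.6):
            # Solves (1 - eps * delta^2) R_bar = R, i.e. a tridiagonal
            # system with stencil (-eps, 1 + 2*eps, -eps).
            n = len(R)
            ab = np.zeros((3, n))
            ab[0, 1:] = -eps            # super-diagonal
            ab[1, :] = 1.0 + 2.0 * eps  # main diagonal
            ab[2, :-1] = -eps           # sub-diagonal
            return solve_banded((1, 1), ab, R)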

  1. A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics

    NASA Astrophysics Data System (ADS)

    Wopschall, Steven Robert

    The implicit solution to contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar-based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smoothes contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstrably more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Up to linear polynomials are used for the discrete pressure representation, and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions without heuristics or user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms. The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.

  2. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
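    A sketch of the proposed bilinear interpolation of head values, assuming the perimeter point is expressed in fractional cell-center coordinates of the large-scale grid; edge handling is omitted for brevity.

        def bilinear_head(heads, x, y):
            # heads: 2-D array of heads at large-scale cell centers;
            # (x, y): perimeter point in units of the cell spacing,
            # measured from the first cell center.
            i, j = int(y), int(x)     # lower-left cell-center indices
            fy, fx = y - i, x - j     # fractional offsets
            return ((1 - fy) * (1 - fx) * heads[i, j]
                    + (1 - fy) * fx * heads[i, j + 1]
                    + fy * (1 - fx) * heads[i + 1, j]
                    + fy * fx * heads[i + 1, j + 1])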

  3. Integration of Geophysical Data into Structural Geological Modelling through Bayesian Networks

    NASA Astrophysics Data System (ADS)

    de la Varga, Miguel; Wellmann, Florian; Murdie, Ruth

    2016-04-01

    Structural geological models are widely used to represent the spatial distribution of relevant geological features. Several techniques exist to construct these models on the basis of different assumptions and different types of geological observations (e.g. Jessell et al., 2014). However, two problems are prevalent when constructing models: (i) observations and assumptions, and therefore also the constructed model, are subject to uncertainties, and (ii) additional information, such as geophysical data, is often available but cannot be considered directly in the geological modelling step. In our work, we propose the integration of all available data into a Bayesian network, including the generation of the implicit geological model by means of interpolation functions (Mallet, 1992; Lajaunie et al., 1997; Mallet, 2004; Carr et al., 2001; Hillier et al., 2014). As a result, we are able to increase the certainty of the resultant models as well as potentially learn features of our regional geology through data mining and information theory techniques. MCMC methods are used in order to optimize computational time and assure the validity of the results. Here, we apply the aforementioned concepts in a 3-D model of the Sandstone Greenstone Belt in the Archean Yilgarn Craton in Western Australia. The example given defines the uncertainty in the thickness of the greenstone as limited by the Bouguer anomaly and the internal structure of the greenstone as limited by the magnetic signature of a banded iron formation. The incorporation of the additional data, especially the gravity, provides an important reduction of the possible outcomes and therefore of the overall uncertainty. References: Carr, J. C., Beatson, R. K., Cherrie, J. B., Mitchell, T. J., Fright, W. R., McCallum, B. C., and Evans, T. R., 2001, Reconstruction and representation of 3D objects with radial basis functions: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 67-76. Jessell, M., Aillères, L., de Kemp, E., Lindsay, M., Wellmann, F., Hillier, M., ... and Martin, R., 2014, Next generation three-dimensional geologic modeling and inversion. Lajaunie, C., Courrioux, G., and Manuel, L., 1997, Foliation fields and 3D cartography in geology: Principles of a method based on potential interpolation: Mathematical Geology, 29, 571-584. Mallet, J.-L., 1992, Discrete smooth interpolation in geometric modelling: Computer-Aided Design, 24, 178-191. Mallet, J.-L., 2004, Space-time mathematical framework for sedimentary geology: Mathematical Geology, 36, 1-32.

  4. White Sands Missile Range Main Cantonment and NASA Area Faults, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, Greg

    This is a zipped ArcGIS shapefile containing faults mapped for the Tularosa Basin geothermal play fairway analysis project. The faults were interpolated from gravity and seismic (NASA area) data, and from geomorphic features on aerial photography. Field work was also done for validation of faults which had surface expressions.

  5. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015.

    PubMed

    Abatzoglou, John T; Dobrowski, Solomon Z; Parks, Sean A; Hegewisch, Katherine C

    2018-01-09

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.
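    Climatically aided interpolation can be caricatured as adding a coarse-resolution monthly anomaly to a fine-resolution climatological normal; the sketch below shows only that additive special case (TerraClimate's actual procedure has variable-specific details, e.g. ratios are more natural for precipitation).

        def climatically_aided(normal_hi, coarse_monthly, coarse_normal):
            # All three fields are assumed already resampled to the fine
            # grid; the time-varying signal comes from the coarse source,
            # the spatial detail from the high-resolution normal.
            anomaly = coarse_monthly - coarse_normal
            return normal_hi + anomaly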

  6. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    NASA Astrophysics Data System (ADS)

    Abatzoglou, John T.; Dobrowski, Solomon Z.; Parks, Sean A.; Hegewisch, Katherine C.

    2018-01-01

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.

  7. Fully Implicit Magneto-hydrodynamics Simulations of Coaxial Plasma Accelerators

    DOE PAGES

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-01-05

    The resistive Magneto-Hydrodynamic (MHD) model describes the behavior of a strongly ionized plasma in the presence of external electric and magnetic fields. We developed a fully implicit MHD simulation tool to solve the resistive MHD governing equations in the context of a cell-centered finite-volume scheme. The primary objective of this study is to use the fully implicit algorithm to obtain insights into the plasma acceleration and jet formation processes in coaxial plasma accelerators: electromagnetic acceleration devices that utilize self-induced magnetic fields to accelerate thermal plasmas to large velocities. We also carry out plasma-surface simulations in order to study the impact interactions when these high velocity plasma jets impinge on target material surfaces. Scaling studies are carried out to establish some basic functional relationships between the target-stagnation conditions and the current discharged between the coaxial electrodes.

  8. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations.

    PubMed

    Lee, Michael S; Olson, Mark A

    2013-07-28

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.

  9. Comparison of volume and surface area nonpolar solvation free energy terms for implicit solvent simulations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Olson, Mark A.

    2013-07-01

    Implicit solvent models for molecular dynamics simulations are often composed of polar and nonpolar terms. Typically, the nonpolar solvation free energy is approximated by the solvent-accessible-surface area times a constant factor. More sophisticated approaches incorporate an estimate of the attractive dispersion forces of the solvent and/or a solvent-accessible volume cavitation term. In this work, we confirm that a single volume-based nonpolar term most closely fits the dispersion and cavitation forces obtained from benchmark explicit solvent simulations of fixed protein conformations. Next, we incorporated the volume term into molecular dynamics simulations and find the term is not universally suitable for folding up small proteins. We surmise that while mean-field cavitation terms such as volume and SASA often tilt the energy landscape towards native-like folds, they also may sporadically introduce bottlenecks into the folding pathway that hinder the progression towards the native state.

  10. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.

  11. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.

  12. Milky Way Mass Models and MOND

    NASA Astrophysics Data System (ADS)

    McGaugh, Stacy S.

    2008-08-01

    Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ Rd ≲ 2.5 kpc. The favored value of Rd depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function, as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
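    For illustration, the "simple" interpolation function mu(x) = x/(1 + x) allows the MOND acceleration to be written in closed form given the Newtonian one; the sketch below assumes the commonly quoted acceleration scale a0 and is independent of the particular mass model.

        import numpy as np

        A0 = 1.2e-10  # MOND acceleration scale in m/s^2 (commonly quoted)

        def mond_accel_simple(g_newton):
            # Solving g * mu(g/a0) = g_N with mu(x) = x/(1+x) gives a
            # quadratic in g with this closed-form positive root.
            return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4.0 * A0 * g_newton))

        def rotation_velocity(g_newton, radius_m):
            # v^2 / r = g, radius in meters.
            return np.sqrt(mond_accel_simple(g_newton) * radius_m)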

  13. Interpolated Sounding and Gridded Sounding Value-Added Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toto, T.; Jensen, M.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP also is used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
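    The between-sounding step is plain linear interpolation in time at each height level; a one-level sketch with numpy.interp, where the launch times and values are invented for illustration and the edge handling (holding end values constant) may differ from the operational VAP:

        import numpy as np

        def interpolate_sondes(launch_times, values, grid_times):
            # Linear-in-time interpolation of one state variable at a
            # single height level onto the fixed output grid.
            return np.interp(grid_times, launch_times, values)

        grid = np.arange(0, 86400, 60)  # 1-minute grid over one day, in seconds
        temps = interpolate_sondes(np.array([0, 21600, 43200, 64800]),
                                   np.array([290.0, 288.5, 293.0, 291.2]),
                                   grid)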

  14. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  15. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).

  16. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with the airborne laser bathymetry data in order to populate data points in all areas, but particularly in the data holes. The second step generates an elevation surface by spatial interpolation of the merged data points. We conduct a case study of the Dry Tortugas National Park in Florida and produce an enhanced digital elevation model of the ocean floor with this method. Four interpolation techniques, including kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, this enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
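    Of the four techniques compared, inverse distance weighting is the simplest to sketch; the snippet below fills hole locations from the merged lidar and sounding points, with the power parameter set to a conventional default rather than the study's setting (the study itself favored natural neighbor).

        import numpy as np

        def idw(known_xy, known_z, query_xy, power=2.0):
            # known_xy: (N, 2) merged data points; known_z: (N,) depths;
            # query_xy: (M, 2) hole locations to fill.
            d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)  # guard against exact coincidences
            w = 1.0 / d**power
            return (w @ known_z) / w.sum(axis=1)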

  17. Moho map of South America from receiver functions and surface waves

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; van der Lee, Suzan; FrançA, George Sand; AssumpçãO, Marcelo; Feng, Mei

    2010-11-01

    We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.

  18. Extracting Hydrologic Understanding from the Unique Space-time Sampling of the Surface Water and Ocean Topography (SWOT) Mission

    NASA Astrophysics Data System (ADS)

    Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.

    2017-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements, and new applications that are not currently possible or perhaps even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches), and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics to equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.

  19. Optimum interpolation analysis of basin-scale ¹³⁷Cs transport in surface seawater in the North Pacific Ocean.

    PubMed

    Inomata, Y; Aoyama, M; Tsumune, D; Motoi, T; Nakano, H

    2012-12-01

    ¹³⁷Cs is one of the conservative tracers applied to the study of oceanic circulation processes on decadal time scales. To investigate the spatial distribution and the temporal variation of ¹³⁷Cs concentrations in surface seawater in the North Pacific Ocean after 1957, a technique of optimum interpolation (OI) was applied. The analysis revealed the basin-scale circulation of ¹³⁷Cs in surface seawater in the North Pacific Ocean: ¹³⁷Cs deposited in the western North Pacific Ocean from global fallout (late 1950s and early 1960s) and from local fallout (transported from the Bikini and Enewetak Atolls during the late 1950s) was further transported eastward with the Kuroshio and North Pacific Currents within several years of deposition and accumulated in the eastern North Pacific Ocean until 1967. Subsequently, ¹³⁷Cs concentrations in the eastern North Pacific Ocean decreased due to southward transport. Less radioactively contaminated seawater was also transported northward, upstream of the North Equatorial Current in the western North Pacific Ocean, in the 1970s, indicating seawater re-circulation in the North Pacific Gyre.
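
    Optimum interpolation blends a background field with observations, weighted by background and observation error covariances. Below is a minimal 1-D sketch of the standard update x_a = x_b + K(y - Hx_b), with a Gaussian background covariance; the grid, error scales, and values are illustrative, not the paper's.

    ```python
    import numpy as np

    # 1-D optimum interpolation: K = B H^T (H B H^T + R)^(-1).
    grid = np.linspace(0.0, 1000.0, 101)          # analysis grid (km)
    xb = np.full(grid.size, 2.0)                  # background 137Cs field

    obs_loc = np.array([150.0, 420.0, 700.0])     # observation positions
    yo = np.array([3.5, 2.8, 1.9])                # observed concentrations

    L, sig_b, sig_o = 200.0, 0.8, 0.3             # corr. length, error std devs

    def gauss_cov(a, b):
        d = a[:, None] - b[None, :]
        return sig_b**2 * np.exp(-0.5 * (d / L) ** 2)

    B_Ht = gauss_cov(grid, obs_loc)               # B H^T (grid x obs)
    H_B_Ht = gauss_cov(obs_loc, obs_loc)          # H B H^T (obs x obs)
    R = sig_o**2 * np.eye(obs_loc.size)           # observation error covariance

    hxb = np.full(obs_loc.size, 2.0)              # background at obs points

    K = B_Ht @ np.linalg.inv(H_B_Ht + R)          # gain matrix
    xa = xb + K @ (yo - hxb)                      # analysis field
    ```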

  20. Calculating the surface tension of binary solutions of simple fluids of comparable size

    NASA Astrophysics Data System (ADS)

    Zaitseva, E. S.; Tovbin, Yu. K.

    2017-11-01

    A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. The LGM is applied as an interpolation tool: the parameters of the model are corrected using experimental surface tension data. It is found that the average error in describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.

  1. Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2003-01-01

    A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in the excessive use of computational resources and diminishes robustness. A new approach, Anisotropic Lagrange Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster and more robust than source term hybridization, and to produce higher quality grids.
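
    ALBTFI itself is not spelled out in this abstract, but the transfinite interpolation it builds on is standard. Below is a minimal 2-D TFI (Coons patch) sketch of the kind used to initialize structured grids, with synthetic boundary curves:

    ```python
    import numpy as np

    def tfi_2d(bottom, top, left, right):
        """Standard 2-D transfinite interpolation (Coons patch).

        bottom/top: arrays (ni, 2); left/right: arrays (nj, 2); the four
        boundary curves must share corner points.
        """
        ni, nj = bottom.shape[0], left.shape[0]
        u = np.linspace(0.0, 1.0, ni)[:, None, None]
        v = np.linspace(0.0, 1.0, nj)[None, :, None]

        return ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
                + (1 - u) * left[None, :, :] + u * right[None, :, :]
                - ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                   + (1 - u) * v * top[0] + u * v * top[-1]))

    # Example: unit square with a sinusoidally bumped bottom wall.
    s = np.linspace(0.0, 1.0, 21)
    bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
    top = np.stack([s, np.ones_like(s)], axis=1)
    left = np.stack([np.zeros_like(s), s], axis=1)
    right = np.stack([np.ones_like(s), s], axis=1)
    xy = tfi_2d(bottom, top, left, right)   # grid of shape (21, 21, 2)
    ```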

  2. BODYFIT-1FE: a computer code for three-dimensional steady-state/transient single-phase rod-bundle thermal-hydraulic analysis. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, B.C.J.; Sha, W.T.; Doria, M.L.

    1980-11-01

    The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved by using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.

  3. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamic responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In comparisons with referenced works, including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with second-order spatial convergence, and its results are fairly independent of mesh size ratio over a wide range.

  4. Computational theory of line drawing interpretation

    NASA Technical Reports Server (NTRS)

    Witkin, A. P.

    1981-01-01

    The recovery of the three-dimensional structure of visible surfaces depicted in an image was studied, emphasizing the role of geometric cues present in line drawings. Three key components are line classification, line interpretation, and surface interpolation. A model for three-dimensional line interpretation and surface orientation was refined, and a theory for the recovery of surface shape from surface-marking geometry was developed. A new approach to the classification of edges was developed and implemented: signatures were deduced for each of several edge types, expressed in terms of correlational properties of the image intensities in the vicinity of the edge. A computer program was developed that evaluates image edges by comparison with these prototype signatures.

  5. The Role of Amodal Surface Completion in Stereoscopic Transparency

    PubMed Central

    Anderson, Barton L.; Schmid, Alexandra C.

    2012-01-01

    Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments that demonstrate that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color, than when images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829

  6. Effects of deterministic surface distortions on reflector antenna performance

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    Systematic distortions of reflector antenna surfaces can cause antenna radiation patterns to be undesirably different from those of perfectly smooth reflector surfaces. In this paper, a simulation model for systematic distortions is described which permits an efficient computation of the effects of distortions in the reflector pattern. The model uses a vector diffraction physical optics analysis for the determination of both the co-polar and cross-polar fields. An interpolation scheme is also presented for the description of reflector surfaces which are prescribed by discrete points. Representative numerical results are presented for reflectors with sinusoidally and thermally distorted surfaces. Finally, comparisons are made between the measured and calculated patterns of a slowly-varying distorted offset parabolic reflector.

  7. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are among the fastest and most accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle, and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud, or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated for generating LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
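
    A minimal sketch of the classify-first, rasterize-second idea: each raster cell takes the class of its nearest classified point, which simultaneously fills the footprint gaps. The point labels below are random placeholders standing in for the paper's range/intensity classifier.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Toy classified LiDAR points: (x, y) plus an integer class label per point
    # (e.g., 0 = ground, 1 = vegetation, 2 = building).
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 50, size=(2000, 2))
    labels = rng.integers(0, 3, size=2000)

    # Rasterize: assign each grid cell the class of its nearest classified
    # point, which also fills gaps between LiDAR footprints.
    cell = 1.0
    gx, gy = np.meshgrid(np.arange(0, 50, cell) + cell / 2,
                         np.arange(0, 50, cell) + cell / 2)
    centres = np.column_stack([gx.ravel(), gy.ravel()])

    tree = cKDTree(xy)
    _, idx = tree.query(centres)
    class_raster = labels[idx].reshape(gx.shape)
    ```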

  8. HEATING 7. 1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

    HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
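
    For illustration, a Crank-Nicolson step for 1-D transient conduction, one of the implicit schemes listed above, can be written in a few lines (uniform grid, constant properties, and fixed boundary temperatures assumed here; HEATING itself supports far more general models):

    ```python
    import numpy as np

    # Crank-Nicolson march for 1-D transient conduction (toy parameters).
    nx, dx, dt, alpha = 51, 0.01, 0.1, 1e-5       # grid, step sizes, diffusivity
    r = alpha * dt / dx**2

    T = np.full(nx, 300.0)                         # initial temperature (K)
    T[0], T[-1] = 400.0, 300.0                     # fixed boundary temperatures

    # Tridiagonal system (I - r/2 L) T_new = (I + r/2 L) T_old.
    A = np.eye(nx) * (1 + r)
    B = np.eye(nx) * (1 - r)
    for i in range(1, nx - 1):
        A[i, i - 1] = A[i, i + 1] = -r / 2
        B[i, i - 1] = B[i, i + 1] = r / 2
    A[0, :] = A[-1, :] = 0.0                       # Dirichlet rows pass T through
    B[0, :] = B[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = B[0, 0] = B[-1, -1] = 1.0

    for _ in range(100):                           # time march
        T = np.linalg.solve(A, B @ T)
    ```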

  9. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving the image quality in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. The algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation is applied to the low-frequency sub-images while keeping the high-frequency content undistorted. The target interpolated image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.
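
    A rough sketch of the pipeline under stated assumptions: PyWavelets provides the 2-D DWT, and a plain average of the low-frequency bands stands in for the paper's Sobel-guided 3D matching step.

    ```python
    import numpy as np
    import pywt                      # assumes the PyWavelets package
    from scipy import ndimage

    # Two adjacent CT slices (synthetic); the goal is an intermediate slice.
    slice_a = np.random.default_rng(2).random((128, 128))
    slice_b = np.roll(slice_a, 3, axis=1)

    cA_a, det_a = pywt.dwt2(slice_a, "db2")   # cA, (cH, cV, cD)
    cA_b, det_b = pywt.dwt2(slice_b, "db2")

    # Sobel edge map of a low-frequency band; in the paper this guides the
    # 3D matching step before interpolation.
    edges_a = np.hypot(ndimage.sobel(cA_a, 0), ndimage.sobel(cA_a, 1))
    n_match = int((edges_a > edges_a.mean()).sum())
    print(f"{n_match} pixels would enter the matching step")

    # A plain average stands in for matching-based interpolation here.
    cA_mid = 0.5 * (cA_a + cA_b)                      # low-frequency band
    det_mid = tuple(0.5 * (da + db) for da, db in zip(det_a, det_b))

    slice_mid = pywt.idwt2((cA_mid, det_mid), "db2")  # wavelet reconstruction
    ```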

  10. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation published between 1982 and 2017 and included in the Web of Science core database is used as the data source, and a visualization analysis is carried out based on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development, and rapid development. Eleven clustering groups interact with one another; the main themes are the convergence of spatial interpolation theory research, the practical application and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and research system framework; it is strongly interdisciplinary and is widely used in various fields.

  11. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown input(s) or when the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequencies, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the mode shapes, rather than to all the operational shapes in the significant frequency range. A comparison is reported between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the mode shapes).
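
    The interpolation-error damage feature can be sketched compactly: at each sensor, the mode shape is re-interpolated from the remaining sensors and the reconstruction error is compared between reference and damaged states. Positions, mode shape, and damage signature below are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 1.0, 21)                 # sensor positions
    mode = np.sin(np.pi * x)                      # healthy first mode shape
    mode_dam = mode.copy()
    mode_dam[10] += 0.02                          # local loss-of-smoothness signature

    def interp_error(shape):
        # Leave-one-out spline reconstruction error at each interior sensor.
        err = np.zeros_like(shape)
        for i in range(1, len(x) - 1):
            mask = np.arange(len(x)) != i
            err[i] = abs(CubicSpline(x[mask], shape[mask])(x[i]) - shape[i])
        return err

    feature = interp_error(mode_dam) - interp_error(mode)   # damage feature
    print(np.argmax(feature))                               # -> sensor 10
    ```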

  12. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
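
    The computation being accelerated is ordinary Gaussian RBF interpolation: solve a kernel system for weights, then evaluate weighted kernels at the query points. A CPU-side NumPy sketch on synthetic data (the CUDA version parallelizes exactly these matrix operations):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    centres = rng.uniform(0, 1, size=(200, 2))        # sample locations
    values = np.sin(4 * centres[:, 0]) + centres[:, 1]

    eps = 0.1                                          # kernel width

    def phi(a, b):
        # Gaussian kernel matrix between two point sets.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * eps**2))

    # Solve (Phi + lambda I) w = f; a small ridge keeps the system well-posed.
    A = phi(centres, centres) + 1e-8 * np.eye(len(centres))
    w = np.linalg.solve(A, values)

    # Evaluate the interpolant on a grid of query points.
    qx, qy = np.mgrid[0:1:64j, 0:1:64j]
    queries = np.column_stack([qx.ravel(), qy.ravel()])
    interp = (phi(queries, centres) @ w).reshape(qx.shape)
    ```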

  13. Geostatistical interpolation of individual average monthly temperature supported by MODIS MOD11C3 product

    NASA Astrophysics Data System (ADS)

    Perčec Tadić, M.

    2010-09-01

    The increased availability of satellite products of high spatial and temporal resolution, together with developing user support, encourages climatologists to use these data in research and practice. Since climatologists are mainly interested in monthly or even annual averages or aggregates, this high temporal resolution, and hence large amount of data, can be challenging for less experienced users. Even if an attempt is made to aggregate, e.g., the 15' (temporal) MODIS LST (land surface temperature) to a daily temperature average, the development of the algorithm is not straightforward and should be done by experts. The recent development of many temporally aggregated products on daily, multi-day or even monthly scales substantially decreases the amount of satellite data that needs to be processed and raises the possibility of developing various climatological applications. Here an attempt is presented at incorporating the MODIS satellite MOD11C3 product (Wan, 2009), that is, monthly CMG (climate modelling 0.05 degree latitude/longitude grids) LST, as a predictor in the geostatistical interpolation of climatological data in Croatia. While in previous applications, e.g. in the Climate Atlas of Croatia (Zaninović et al. 2008), static predictors such as the digital elevation model, distance to the sea, latitude and longitude were used for the interpolation of monthly, seasonal and annual 30-year averages (reference climatology), here the monthly MOD11C3 is used to support the interpolation of the individual monthly average in the regression kriging framework. We believe that this can be a valuable showcase of incorporating remotely sensed data for climatological applications, especially in areas that are under-sampled by conventional observations. Zaninović K, Gajić-Čapka M, Perčec Tadić M et al (2008) Klimatski atlas Hrvatske / Climate atlas of Croatia 1961-1990, 1971-2000. Meteorological and Hydrological Service of Croatia, Zagreb, pp 200. Wan Z, 2009: Collection-5 MODIS Land Surface Temperature Products Users' Guide, ICESS, University of California, Santa Barbara, pp 30.

  14. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique that calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. Disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image, and 2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques that automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all sites but Big Cypress and Gainesville. Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.

  15. Quasiclassical trajectory study of the Cl+CH4 reaction dynamics on a quadratic configuration interaction with single and double excitation interpolated potential energy surface.

    PubMed

    Castillo, J F; Aoiz, F J; Bañares, L

    2006-09-28

    An ab initio interpolated potential energy surface (PES) for the Cl+CH(4) reactive system has been constructed using the interpolation method of Collins and co-workers [J. Chem. Phys. 102, 5647 (1995); 108, 8302 (1998); 111, 816 (1999); Theor. Chem. Acc. 108, 313 (2002)]. The ab initio calculations have been performed using quadratic configuration interaction with single and double excitation theory to build the PES. A simple scaling-all-correlation technique has been used to obtain a PES which yields a barrier height and reaction energy in good agreement with high-level ab initio calculations and experimental measurements. Using these interpolated PESs, a detailed quasiclassical trajectory study of integral and differential cross sections, product rovibrational populations, and internal energy distributions has been carried out for the Cl+CH(4) and Cl+CD(4) reactions, and the theoretical results have been compared with the available experimental data. It has been shown that the calculated total reaction cross sections versus collision energy for the Cl+CH(4) and Cl+CD(4) reactions are very sensitive to the barrier height. Besides, due to the zero-point energy (ZPE) leakage of the CH(4) molecule to the reaction coordinate in the quasiclassical trajectory (QCT) calculations, the reaction threshold falls below the barrier height of the PES. The ZPE leakage leads to CH(3) and HCl coproducts with internal energy below their corresponding ZPEs. We have shown that a Gaussian binning (GB) analysis of the trajectories yields excitation functions in somewhat better agreement with the experimental determinations. The HCl(v'=0) and DCl(v'=0) rotational distributions are likewise very sensitive to the ZPE problem. The GB correction narrows and shifts the rotational distributions to lower values of the rotational quantum numbers. However, the present QCT rotational distributions are still hotter than the experimental distributions. In both reactions the angular distributions shift from backward peaked to sideways peaked as collision energy increases, as seen in the experiments and other theoretical calculations.

  17. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
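
    The three schemes compared in this report map directly onto spline orders 0, 1, and 3 in scipy.ndimage.zoom; a toy comparison (data and zoom factor are illustrative):

    ```python
    import numpy as np
    from scipy import ndimage

    # Resample a toy "LANDSAT band" with the three schemes compared above.
    band = np.random.default_rng(4).random((60, 60))

    nearest = ndimage.zoom(band, 2.0, order=0)   # nearest neighbor
    bilinear = ndimage.zoom(band, 2.0, order=1)  # bilinear interpolation
    bicubic = ndimage.zoom(band, 2.0, order=3)   # bicubic (cubic spline)

    # Nearest neighbor preserves the original radiometry (useful before
    # classification); bilinear and bicubic smooth the values, which can
    # matter for downstream compression and classification.
    ```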

  19. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
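
    The fitting step described above reduces to ordinary least squares at each observer angle. A sketch with an assumed model form and synthetic data (the coefficient layout is illustrative, not the report's exact parameterization):

    ```python
    import numpy as np

    # Assumed form at one observer angle:
    # attenuation = c0 + c1*X + c2*H + c3*log10(St), with X, H the
    # non-dimensional surface position and St the Strouhal number.
    rng = np.random.default_rng(7)
    X = rng.uniform(0, 10, 300)            # non-dimensional surface length
    H = rng.uniform(0, 3, 300)             # non-dimensional stand-off
    St = rng.uniform(0.02, 10, 300)        # Strouhal frequency
    shield_dB = 1.0 + 0.8 * X - 2.0 * H + 3.0 * np.log10(St) \
                + rng.normal(0, 0.5, 300)

    A = np.column_stack([np.ones_like(X), X, H, np.log10(St)])
    coef, *_ = np.linalg.lstsq(A, shield_dB, rcond=None)
    print(coef)
    # Separate coefficient sets would be fit at each measured observer angle,
    # with linear interpolation between angles for intermediate observers.
    ```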

  20. Forebody and base region real gas flow in severe planetary entry by a factored implicit numerical method. II - Equilibrium reactive gas

    NASA Technical Reports Server (NTRS)

    Davy, W. C.; Green, M. J.; Lombard, C. K.

    1981-01-01

    The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.

  1. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    An image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.

  2. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
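
    Rankings of this kind typically come from leave-one-out cross-validation. A compact sketch for IDW at the three powers compared above, on synthetic samples:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 10, size=(166, 2))          # sample locations
    z = np.exp(-((xy[:, 0] - 5) ** 2 + (xy[:, 1] - 5) ** 2) / 8) \
        + rng.normal(0, 0.05, 166)                  # toy PTE concentrations

    def idw(train_xy, train_z, query, power):
        d = np.linalg.norm(train_xy - query, axis=1)
        w = 1.0 / np.maximum(d, 1e-12) ** power
        return (w @ train_z) / w.sum()

    for power in (1, 2, 3):
        errs = []
        for i in range(len(z)):                      # leave-one-out loop
            mask = np.arange(len(z)) != i
            errs.append(idw(xy[mask], z[mask], xy[i], power) - z[i])
        rmse = np.sqrt(np.mean(np.square(errs)))
        print(f"IDW power {power}: LOO RMSE = {rmse:.4f}")
    ```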

  3. Topsoil pollution forecasting using artificial neural networks on the example of the abnormally distributed heavy metal at Russian subarctic

    NASA Astrophysics Data System (ADS)

    Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.

    2017-06-01

    Forecasting soil pollution is a considerable field of study in light of the general concern over environmental protection issues. Due to the variation of content and spatial heterogeneity of pollutant distributions in urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide acceptable interpolation accuracy. Moreover, predicting the distribution of an element with high variability in concentration across the study site is particularly difficult. This work presents two neural network models for forecasting the spatial content of an abnormally distributed soil pollutant (Cr) at a particular location in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques were built, implemented and tested using ArcGIS and MATLAB. To verify the models' performance, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km2 area and then split into an independent training data set (105 points) and a validation data set (45 points). The training data set was used for the interpolation by ordinary kriging, while the validation data set was used to test the accuracies. The network structures were chosen during a computer simulation based on minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that the MLP could achieve better accuracy than both kriging and even the GRNN for interpolating surfaces.
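
    A GRNN is, in essence, Nadaraya-Watson kernel regression with a Gaussian kernel, which makes a minimal sketch short; the training/validation split mirrors the paper's 105/45 setup, but all values are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    train_xy = rng.uniform(0, 10, size=(105, 2))     # 105 training points
    train_cr = rng.lognormal(3.0, 0.8, 105)          # skewed Cr concentrations
    valid_xy = rng.uniform(0, 10, size=(45, 2))      # 45 validation points

    def grnn_predict(query_xy, sigma=0.7):
        # Gaussian-kernel weighted average of the training targets.
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma**2))
        return (w @ train_cr) / w.sum(axis=1)

    pred = grnn_predict(valid_xy)
    print(pred.mean())
    # sigma (the spread) is the single GRNN hyperparameter; in the study the
    # network structure was selected by minimizing the validation RMSE.
    ```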

  4. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
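
    The screening step (correlate candidate covariates with the target nutrient and keep the strongest) is easy to sketch; variable names, threshold, and data below are synthetic placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 670
    organic_matter = rng.normal(2.0, 0.5, n)         # target nutrient
    candidates = {
        "elevation": organic_matter * -0.4 + rng.normal(0, 0.3, n),
        "slope": rng.normal(5.0, 1.0, n),            # weakly related
        "total_N": organic_matter * 0.9 + rng.normal(0, 0.1, n),
    }

    corr = {name: abs(np.corrcoef(vals, organic_matter)[0, 1])
            for name, vals in candidates.items()}
    bavs = [name for name, c in sorted(corr.items(), key=lambda kv: -kv[1])
            if c > 0.5]
    print(corr, bavs)   # highly correlated variables become BAVs for cokriging
    ```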

  5. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.

  6. Coastal bathymetry data collected in 2011 from the Chandeleur Islands, Louisiana

    USGS Publications Warehouse

    DeWitt, Nancy T.; Pfeiffer, William R.; Bernier, Julie C.; Buster, Noreen A.; Miselis, Jennifer L.; Flocks, James G.; Reynolds, Billy J.; Wiese, Dana S.; Kelso, Kyle W.

    2014-01-01

    This report serves as an archive of processed interferometric swath and single-beam bathymetry data. Geographic Information System data products include a 50-meter cell-size interpolated bathymetry grid surface, trackline maps, and point data files. Additional files include error analysis maps, Field Activity Collection System logs, and formal Federal Geographic Data Committee metadata.

  7. Measurements of Wave Power in Wave Energy Converter Effectiveness Evaluation

    NASA Astrophysics Data System (ADS)

    Berins, J.; Berins, J.; Kalnacs, A.

    2017-08-01

    The article is devoted to a low-budget technical solution for measuring water surface gravity wave oscillations and to the theoretical justification of the calculated oscillation power. The solution combines technologies such as lasers, digital processing of WEB-camera images, interpolation of a function defined at irregular intervals, and the discrete Fourier transform for calculating the oscillation spectrum.

  8. Spatially continuous interpolation of water stage and water depths using the Everglades depth estimation network (EDEN)

    USGS Publications Warehouse

    Pearlstine, Leonard; Higer, Aaron; Palaseanu, Monica; Fujisaki, Ikuko; Mazzotti, Frank

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a 400-square-meter grid spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. P. Jensen; Toto, T.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
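
    The core of the VAP is plain linear interpolation in time, applied independently at each height level; a minimal sketch with two synthetic sonde profiles:

    ```python
    import numpy as np

    # Linear time interpolation of one atmospheric state variable between two
    # sonde launches, per height level (times, levels, and temperatures are
    # illustrative, not ARM data).
    t_sondes = np.array([0.0, 720.0])              # launch times (minutes)
    levels = np.linspace(0.0, 40000.0, 332)        # 332 fixed height levels (m)
    temp = np.stack([                              # temperature profiles (K)
        288.0 - 0.0065 * levels,
        290.0 - 0.0065 * levels,
    ])

    t_out = np.arange(0.0, 721.0, 1.0)             # 1-minute output grid
    interp = np.empty((t_out.size, levels.size))
    for k in range(levels.size):                   # interpolate level by level
        interp[:, k] = np.interp(t_out, t_sondes, temp[:, k])
    ```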

  10. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level. © 2011 IEEE

  11. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
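
    The simplest method LIP provides, piecewise bilinear interpolation on a rectangular mesh, can be sketched directly (an illustrative stand-in, not LIP's C implementation):

    ```python
    import numpy as np

    def bilinear(xg, yg, table, xq, yq):
        """Piecewise-bilinear interpolation on a rectangular (xg, yg) mesh."""
        i = np.clip(np.searchsorted(xg, xq) - 1, 0, xg.size - 2)
        j = np.clip(np.searchsorted(yg, yq) - 1, 0, yg.size - 2)
        tx = (xq - xg[i]) / (xg[i + 1] - xg[i])
        ty = (yq - yg[j]) / (yg[j + 1] - yg[j])
        return ((1 - tx) * (1 - ty) * table[i, j]
                + tx * (1 - ty) * table[i + 1, j]
                + (1 - tx) * ty * table[i, j + 1]
                + tx * ty * table[i + 1, j + 1])

    # Example: an EOS-like pressure table P(rho, T) on a small mesh.
    rho = np.linspace(0.1, 10.0, 25)
    T = np.linspace(300.0, 3000.0, 30)
    P = rho[:, None] * T[None, :] * 8.314          # toy ideal-gas table
    print(bilinear(rho, T, P, np.array([2.5]), np.array([1500.0])))
    ```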

  13. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/σys ≤ 1,000, and hardening exponent: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
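
    Mechanically, the methodology amounts to interpolation over a 4-D table of precomputed solutions; SciPy's RegularGridInterpolator shows the idea on the stated parameter ranges with synthetic J values (not the report's database):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Toy 4-D solution table over the stated parameter ranges.
    a_c = np.linspace(0.2, 1.0, 5)        # crack shape a/c
    a_B = np.linspace(0.2, 0.8, 4)        # crack depth a/B
    E_ys = np.linspace(100.0, 1000.0, 4)  # modulus-to-yield ratio
    n = np.linspace(3.0, 20.0, 4)         # hardening exponent

    grid = np.meshgrid(a_c, a_B, E_ys, n, indexing="ij")
    J_table = grid[0] * grid[1] * np.log(grid[2]) / grid[3]   # synthetic J

    J_interp = RegularGridInterpolator((a_c, a_B, E_ys, n), J_table)
    print(J_interp([[0.5, 0.4, 300.0, 10.0]]))   # J for an arbitrary crack
    ```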

  14. Retrieval of surface temperature by remote sensing. [of earth surface using brightness temperature of air pollutants]

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1976-01-01

    A simple procedure and computer program were developed for retrieving the surface temperature from the measurement of upwelling infrared radiance in a single spectral region in the atmosphere. The program evaluates the total upwelling radiance at any altitude in the region of the CO fundamental band (2070-2220 1/cm) for several values of surface temperature. Actual surface temperature is inferred by interpolation of the measured upwelling radiance between the computed values of radiance for the same altitude. Sensitivity calculations were made to determine the effect of uncertainty in various surface, atmospheric and experimental parameters on the inferred value of surface temperature. It is found that the uncertainties in water vapor concentration and surface emittance are the most important factors affecting the accuracy of the inferred value of surface temperature.
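
    The retrieval's interpolation step is one-dimensional and monotonic, so it can be sketched directly; the radiance model below is a toy stand-in for the CO-band radiative transfer calculation:

    ```python
    import numpy as np

    # Invert surface temperature from measured upwelling radiance by
    # interpolating between radiances computed for trial temperatures.
    T_trial = np.array([280.0, 290.0, 300.0, 310.0, 320.0])   # K
    L_computed = 5.0e-9 * T_trial**4       # toy L(T), monotonic in T

    L_measured = 40.0                      # "measured" radiance (toy units)
    # np.interp needs increasing abscissae; L(T) is monotonic here.
    T_surface = np.interp(L_measured, L_computed, T_trial)
    print(T_surface)
    ```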

  15. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, despite the fact that mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi, fi): i = 1, 2, 3, ..., n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.

  16. Hamiltonian approaches to spatial and temporal discretization of fully compressible equations

    NASA Astrophysics Data System (ADS)

    Dubos, Thomas; Dubey, Sarvesh

    2017-04-01

    The fully compressible Euler (FCE) equations are the most accurate for representing atmospheric motion, compared to approximate systems like the hydrostatic, anelastic or pseudo-incompressible systems. The price to pay for this accuracy is the presence of additional degrees of freedom and high-frequency acoustic waves that must be treated implicitly. In this work we explore a Hamiltonian approach to the issue of stable spatial and temporal discretization of the FCE using a non-Eulerian vertical coordinate. For scalability, a horizontally-explicit, vertically-implicit (HEVI) time discretization is adopted. The Hamiltonian structure of the equations is used to obtain the spatial finite-difference discretization and also in order to identify those terms of the equations of motion that need to be treated implicitly. A novel treatment of the lower boundary condition in the presence of orography is introduced: rather than enforcing a no-normal-flow boundary condition, which couples the horizontal and vertical velocity components and interferes with the HEVI structure, the ground is treated as a flexible surface with arbitrarily large stiffness, resulting in a decoupling of the horizontal and vertical dynamics and yielding a simple implicit problem which can be solved efficiently. Standard test cases performed in a vertical slice configuration suggest that an effective horizontal acoustic Courant number close to 1 can be achieved.
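
    A toy sketch of the vertically implicit half of a HEVI step, for 1-D vertical acoustics (dw/dt = -dp/dz, dp/dt = -c^2 dw/dz) advanced with backward Euler; the implicit step reduces to a tridiagonal Helmholtz solve for pressure. This illustrates only the general idea, not the paper's Hamiltonian discretization; all parameters are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

N, dz, dt, c = 50, 100.0, 5.0, 340.0       # cells, spacing (m), step (s), sound speed (m/s)
alpha = (dt * c / dz) ** 2                 # implicit coupling coefficient

p = np.exp(-((np.arange(N) - N / 2) ** 2) / 20.0)  # initial pressure perturbation
w = np.zeros(N + 1)                                # velocity at interfaces, w=0 at ends

# Right-hand side: p - dt*c^2/dz * (divergence of the current w).
rhs = p - dt * c**2 / dz * (w[1:] - w[:-1])

# Tridiagonal matrix (I - alpha*Laplacian), with one-sided stencils at the
# boundaries because w vanishes there.
ab = np.zeros((3, N))
ab[0, 1:] = -alpha                  # superdiagonal
ab[2, :-1] = -alpha                 # subdiagonal
ab[1, :] = 1.0 + 2.0 * alpha
ab[1, 0] = ab[1, -1] = 1.0 + alpha  # boundary rows

p_new = solve_banded((1, 1), ab, rhs)
w[1:-1] = w[1:-1] - dt / dz * (p_new[1:] - p_new[:-1])  # back-substitute velocity
p = p_new
```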

  17. Solar energy microclimate as determined from satellite observations

    NASA Technical Reports Server (NTRS)

    Vonder Haar, T. H.; Ellis, J. S.

    1975-01-01

    A method is presented for determining solar insolation at the earth's surface using satellite broadband visible radiance and cloud imagery data, along with conventional in situ measurements. Conventional measurements are used both to tune the satellite measurements and to develop empirical relationships between satellite observations and surface solar insolation. Cloudiness is the primary modulator of sunshine. The satellite measurements as applied in this method account for cloudiness both explicitly and implicitly in determining surface solar insolation at space scales smaller than those of the conventional pyranometer network.

  18. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
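
    The sketch below only illustrates the ordering the patent describes (transform first, then a simpler interpolation): Fourier-transform the azimuth dimension, then resample each transformed line onto a uniform grid with 1-D interpolation. The data, sample positions, and dimensions are stand-ins, and the real SAR processing chain is far more involved.

```python
import numpy as np

num_range, num_az = 128, 100
phase_history = np.random.randn(num_range, num_az)   # stand-in for SAR phase-history data

# Step 1: azimuth transform.
az_spectrum = np.fft.fft(phase_history, axis=1)

# Step 2: simple interpolation onto a rectangular (uniform) azimuth grid.
nonuniform = np.sort(np.random.uniform(0, num_az - 1, num_az))  # hypothetical sample positions
uniform = np.arange(num_az, dtype=float)
regridded = np.empty_like(az_spectrum)
for i in range(num_range):
    # Interpolate real and imaginary parts separately.
    regridded[i] = (np.interp(uniform, nonuniform, az_spectrum[i].real)
                    + 1j * np.interp(uniform, nonuniform, az_spectrum[i].imag))
```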

  19. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates for a given edge which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).

  20. Pixel-based absolute surface metrology by three flat test with shifted and rotated maps

    NASA Astrophysics Data System (ADS)

    Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang

    2018-03-01

    The traditional three flat test provides the absolute profile along only one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation within the three flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed previously, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It achieves pixel-level spatial resolution without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests were implemented and show the high-accuracy recovery capability of the proposed method.

  1. Control surface in aerial triangulation

    NASA Astrophysics Data System (ADS)

    Jaw, Jen-Jer

    With the increased availability of surface-related sensors, the collection of surface information has become easier and more straightforward than ever before. In this study, the author proposes a model in which surface information is integrated into the aerial triangulation workflow by hypothesizing plane observations in the object space; the object points estimated from photo measurements (or matching), together with the adjusted surface points, provide a better point group for describing the surface. The algorithms require no special structure of surface points and involve no interpolation process. The suggested measuring strategy (pairwise measurements) results in a fluent and favorable working environment when taking measurements. Furthermore, the extension of the model employing the surface plane proves useful in tying photo models. The proposed model has been validated by simulation and by experiments carried out in the photogrammetric laboratory.

  2. Heating 7.2 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1993-02-01

    HEATING is a general-purpose conduction heat transfer program written in Fortran 77. HEATING can solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may also be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-environment or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General gray-body radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING uses a runtime memory allocation scheme to avoid having to recompile to match memory requirements for each specific problem. HEATING utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution, and conjugate gradient. Transient problems may be solved using any one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method. The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.

  3. Heating 7.2 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1993-02-01

    HEATING is a general-purpose conduction heat transfer program written in Fortran 77. HEATING can solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may also be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-environment or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General gray-body radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING uses a runtime memory allocation scheme to avoid having to recompile to match memory requirements for each specific problem. HEATING utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution, and conjugate gradient. Transient problems may be solved using any one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method. The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
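
    As a minimal sketch of one of the transient schemes HEATING offers, the following advances 1-D conduction with Crank-Nicolson and fixed boundary temperatures; grid, step, diffusivity, and boundary values are illustrative, and this is not HEATING's code.

```python
import numpy as np
from scipy.linalg import solve_banded

N, dx, dt, kappa = 51, 0.01, 0.1, 1e-5      # nodes, spacing (m), step (s), diffusivity (m^2/s)
r = kappa * dt / (2 * dx**2)

T = np.full(N, 20.0)                        # initial temperature (deg C)
T[0], T[-1] = 100.0, 20.0                   # prescribed boundary temperatures

# Crank-Nicolson: (I - r*L) T_new = (I + r*L) T_old on the interior nodes.
ab = np.zeros((3, N - 2))
ab[0, 1:] = -r                              # superdiagonal
ab[2, :-1] = -r                             # subdiagonal
ab[1, :] = 1 + 2 * r

for _ in range(1000):
    rhs = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    rhs[0] += r * T[0]                      # boundary contributions (implicit side)
    rhs[-1] += r * T[-1]
    T[1:-1] = solve_banded((1, 1), ab, rhs)
```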

  4. Micromorphological characterization of zinc/silver particle composite coatings.

    PubMed

    Méndez, Alia; Reyes, Yolanda; Trejo, Gabriel; StĘpień, Krzysztof; Ţălu, Ştefan

    2015-12-01

    The aim of this study was to evaluate the three-dimensional (3D) surface micromorphology of zinc/silver particle (Zn/AgPs) composite coatings with antibacterial activity prepared using an electrodeposition technique. These 3D nanostructures were investigated over square areas of 5 μm × 5 μm by atomic force microscopy (AFM), fractal, and wavelet analysis. The fractal analysis of 3D surface roughness revealed that the (Zn/AgPs) composite coatings have fractal geometry. A triangulation method based on linear interpolation was applied to the AFM data in order to characterise the surfaces topographically (in amplitude, spatial distribution, and pattern of surface characteristics). The surface fractal dimension Df and the distribution of height values were determined for the 3D nanostructure surfaces. © 2015 The Authors published by Wiley Periodicals, Inc.

  5. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
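
    A short sketch of the linear-algebra view described above: the interpolating polynomial's coefficients solve V a = y, where the columns of V come from the chosen basis; with the monomial basis, V is the Vandermonde matrix. The data points are illustrative.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x, increasing=True)     # monomial basis 1, x, x^2, x^3
a = np.linalg.solve(V, y)             # coefficients of the interpolating cubic

xq = 1.5
print(np.polyval(a[::-1], xq))        # polyval wants highest degree first
# Choosing the Lagrange basis instead makes V the identity matrix, so the
# "coefficients" are just the data values y: a different basis, same subspace.
```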

  6. Semidiscrete Galerkin modelling of compressible viscous flow past a circular cone at incidence. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Meade, Andrew James, Jr.

    1989-01-01

    A numerical study of the laminar, compressible boundary layer about a circular cone in a supersonic free stream is presented. It is thought that if accurate and efficient numerical schemes can be produced to solve the boundary layer equations, they can be joined to numerical codes that solve the inviscid outer flow. The combination of these numerical codes is competitive with the accurate, but computationally expensive, Navier-Stokes schemes. The primary goal is to develop a finite element method for the calculation of the 3-D compressible laminar boundary layer about a yawed cone. The proposed method can, in principle, be extended to apply to the 3-D boundary layer of pointed bodies of arbitrary cross section. The 3-D boundary layer equations governing supersonic free stream flow about a cone are examined. The 3-D partial differential equations are reduced to 2-D integral equations by applying the Howarth, Mangler, and Crocco transformations, a linear relation between viscosity and temperature, and a Blasius-type similarity variable. This is equivalent to a Dorodnitsyn-type formulation. The reduced equations are independent of density and curvature effects, and resemble the weak form of the 2-D incompressible boundary layer equations in Cartesian coordinates. In addition, the coordinate normal to the wall has been stretched, which reduces the gradients across the layer and provides high resolution near the surface. Utilizing the parabolic nature of the boundary layer equations, a finite element method is applied to the Dorodnitsyn formulation. The formulation is presented in a Petrov-Galerkin finite element form and discretized across the layer using linear interpolation functions. The finite element discretization yields a system of ordinary differential equations in the circumferential direction. The circumferential derivatives are solved by an implicit and noniterative finite difference marching scheme. Solutions are presented for a 15 deg half-angle cone at angles of attack of 5 and 10 deg. The numerical solutions assume a laminar boundary layer with a free stream Mach number of 7. Results include the circumferential distribution of skin friction and surface heat transfer, and crossflow velocity distributions across the layer.

  7. Designing a suite of measurements to understand the critical zone

    NASA Astrophysics Data System (ADS)

    Brantley, S. L.; DiBiase, R.; Russo, T.; Shi, Y.; Lin, H.; Davis, K. J.; Kaye, M.; Hill, L.; Kaye, J.; Neal, A. L.; Eissenstat, D.; Hoagland, B.; Dere, A. L.

    2015-09-01

    Many scientists have begun to refer to the earth surface environment from the upper canopy to the depths of bedrock as the critical zone (CZ). Identification of the CZ as a worthy object of study implicitly posits that the study of the whole earth surface will provide benefits that do not arise when studying the individual parts. To study the CZ, however, requires prioritizing among the measurements that can be made - and we do not generally agree on the priorities. Currently, the Susquehanna Shale Hills Critical Zone Observatory (SSHCZO) is expanding from a small original study area (0.08 km2, Shale Hills catchment), to a much larger watershed (164 km2, Shavers Creek watershed) and is grappling with the necessity of prioritization. This effort is an expansion from a monolithologic first-order forested catchment to a watershed that encompasses several lithologies (shale, sandstone, limestone) and land use types (forest, agriculture). The goal of the project remains the same: to understand water, energy, gas, solute and sediment (WEGSS) fluxes that are occurring today in the context of the record of those fluxes over geologic time as recorded in soil profiles, the sedimentary record, and landscape morphology. Given the small size of the original Shale Hills catchment, the original measurement design resulted in measurement of as many parameters as possible at high temporal and spatial density. In the larger Shavers Creek watershed, however, we must focus the measurements. We describe a strategy of data collection and modelling based on a geomorphological framework that builds on the hillslope as the basic unit. Interpolation and extrapolation beyond specific sites relies on geophysical surveying, remote sensing, geomorphic analysis, the study of natural integrators such as streams, ground waters or air, and application of a suite of CZ models. In essence, we are hypothesizing that pinpointed measurements of a few important variables at strategic locations will allow development of predictive models of CZ behavior. In turn, the measurements and models will reveal how the larger watershed will respond to perturbations both now and into the future.

  8. Designing a suite of measurements to understand the critical zone

    NASA Astrophysics Data System (ADS)

    Brantley, Susan L.; DiBiase, Roman A.; Russo, Tess A.; Shi, Yuning; Lin, Henry; Davis, Kenneth J.; Kaye, Margot; Hill, Lillian; Kaye, Jason; Eissenstat, David M.; Hoagland, Beth; Dere, Ashlee L.; Neal, Andrew L.; Brubaker, Kristen M.; Arthur, Dan K.

    2016-03-01

    Many scientists have begun to refer to the earth surface environment from the upper canopy to the depths of bedrock as the critical zone (CZ). Identification of the CZ as an integral object worthy of study implicitly posits that the study of the whole earth surface will provide benefits that do not arise when studying the individual parts. To study the CZ, however, requires prioritizing among the measurements that can be made - and we do not generally agree on the priorities. Currently, the Susquehanna Shale Hills Critical Zone Observatory (SSHCZO) is expanding from a small original focus area (0.08 km2, Shale Hills catchment), to a larger watershed (164 km2, Shavers Creek watershed) and is grappling with the prioritization. This effort is an expansion from a monolithologic first-order forested catchment to a watershed that encompasses several lithologies (shale, sandstone, limestone) and land use types (forest, agriculture). The goal of the project remains the same: to understand water, energy, gas, solute, and sediment (WEGSS) fluxes that are occurring today in the context of the record of those fluxes over geologic time as recorded in soil profiles, the sedimentary record, and landscape morphology. Given the small size of the Shale Hills catchment, the original design incorporated measurement of as many parameters as possible at high temporal and spatial density. In the larger Shavers Creek watershed, however, we must focus the measurements. We describe a strategy of data collection and modeling based on a geomorphological and land use framework that builds on the hillslope as the basic unit. Interpolation and extrapolation beyond specific sites relies on geophysical surveying, remote sensing, geomorphic analysis, the study of natural integrators such as streams, groundwaters or air, and application of a suite of CZ models. We hypothesize that measurements of a few important variables at strategic locations within a geomorphological framework will allow development of predictive models of CZ behavior. In turn, the measurements and models will reveal how the larger watershed will respond to perturbations both now and into the future.

  9. Comparing Vibrationally Averaged Nuclear Shielding Constants by Quantum Diffusion Monte Carlo and Second-Order Perturbation Theory.

    PubMed

    Ng, Yee-Hong; Bettens, Ryan P A

    2016-03-03

    Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to those of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large-amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH, where large displacements away from equilibrium occur (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding constant σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for costly ab initio σ calculations in the future.

  10. Experiment of Rain Retrieval over Land Using Surface Emissivity Map Derived from TRMM TMI and JRA25

    NASA Astrophysics Data System (ADS)

    Furuzawa, Fumie; Masunaga, Hirohiko; Nakamura, Kenji

    2010-05-01

    We are developing a data set of global land surface emissivity calculated from TRMM TMI brightness temperature (TB) and atmospheric profile data of the Japanese 25-year Reanalysis Project (JRA-25) for the regions identified as no-rain by TRMM PR, assuming zero cloud liquid water beyond the 0 °C level. For the evaluation, some characteristics of the global monthly emissivity maps are checked, for example, the dependency of emissivity on TMI frequency and local time, and its seasonal/annual variation. Moreover, these data are classified based on JRA-25 land type or soil wetness and compared. The histogram of the polarization difference of emissivity is similar to that of TB and mostly reflects the variability of land type or soil wetness, while the histogram of vertical emissivity shows only a small difference. Next, by interpolating this instantaneous data set with Gaussian function weighting, we derive the emissivity over the neighboring rainy region and assess the interpolated emissivity by running a radiative transfer model with the PR rain profile and comparing against observed TB. A preliminary rain retrieval from the emissivities and TBs at several frequencies is evaluated against the PR rain profile and TMI rain rate. Moreover, another method is tested to estimate surface temperature from two emissivities, based on their statistical relation for each land type. We will show the results for the vertical and horizontal emissivities at each frequency.
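
    A trivial sketch of the Gaussian-weighted interpolation step, assuming clear-sky emissivities at a few neighboring pixels; coordinates, values, and the length scale are illustrative, not the study's configuration.

```python
import numpy as np

clear_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # clear-sky pixel coords
emis = np.array([0.92, 0.95, 0.90, 0.97])                              # retrieved emissivities

rainy_xy = np.array([0.6, 0.4])      # rainy pixel where emissivity is needed
sigma = 1.0                          # Gaussian weighting length scale

d2 = np.sum((clear_xy - rainy_xy) ** 2, axis=1)
w = np.exp(-d2 / (2 * sigma**2))
emis_interp = np.sum(w * emis) / np.sum(w)
print(f"interpolated emissivity: {emis_interp:.3f}")
```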

  11. Spatial dynamics of the invasive defoliator amber-marked birch leafminer across the Anchorage landscape

    Treesearch

    J.E. Lundquist; R.M. Reich; M. Tuffly

    2012-01-01

    The amber-marked birch leafminer has caused severe infestations of birch species in Anchorage, AK, since 2002. Its spatial distribution has been monitored since 2006 and summarized using interpolated surfaces based on simple kriging. In this study, we developed methods of assessing and describing spatial distribution of the leafminer as they vary from year to year, and...

  12. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

    Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional numerical weather prediction (NWP). Current SST products lack the spatial and temporal resolution to accurately capture small-scale features that affect heat and moisture fluxes. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an optimal interpolation (OI) technique.
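
    A generic OI analysis update in the standard formulation x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b), not necessarily the configuration used in this work; the grid, covariances, and observations are toy values.

```python
import numpy as np

n, m = 5, 2                          # grid points, observations
x_b = np.full(n, 15.0)               # background SST (deg C)
y = np.array([16.0, 14.5])           # observed SST
H = np.zeros((m, n)); H[0, 1] = H[1, 3] = 1.0   # obs located at grid points 1 and 3

# Background error covariance with Gaussian spatial correlation; diagonal obs error.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.5 * np.exp(-(dist / 2.0) ** 2)
R = 0.1 * np.eye(m)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
x_a = x_b + K @ (y - H @ x_b)                   # analysis
print(x_a)
```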

  13. Simulation of interaction between ground water in an alluvial aquifer and surface water in a large braided river

    USGS Publications Warehouse

    Leake, S.A.; Lilly, M.R.

    1995-01-01

    The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
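
    A minimal sketch of the head-assignment idea: interpolate river stage along a distance-along-river coordinate and assign the result to each river cell. Station locations, stages, and cell distances are made up for illustration.

```python
import numpy as np

station_dist = np.array([0.0, 4200.0, 9800.0, 15000.0])   # distance along river (m)
station_head = np.array([135.2, 133.8, 131.9, 130.4])     # measured river stage (m)

cell_dist = np.array([650.0, 2300.0, 7100.0, 12400.0, 14800.0])  # river distance of model cells
cell_head = np.interp(cell_dist, station_dist, station_head)      # interpolated (clamped at ends)
print(cell_head)
```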

  14. Hydrographic Surveys for Six Water Bodies in Eastern Nebraska, 2005-07

    USGS Publications Warehouse

    Johnson, Michaela R.; Andersen, Michael J.; Sebree, Sonja K.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Nebraska Department of Environmental Quality, completed hydrographic surveys for six water bodies in eastern Nebraska: Maskenthine Wetland, Olive Creek Lake, Standing Bear Lake, Wagon Train Lake and Wetland, Wildwood Lake, and Yankee Hill Lake and sediment basin. The bathymetric data were collected using a boat-mounted survey-grade fathometer that operated at 200 kHz, and a differentially corrected Global Positioning System with antenna mounted directly above the echo-sounder transducer. Shallow-water and terrestrial areas were surveyed using a Real-Time Kinematic Global Positioning System. The bathymetric, shallow-water, and terrestrial data were processed in a geographic information system to generate a triangulated irregular network representation of the bottom of the water body. Bathymetric contours were interpolated from the triangulated irregular network data using a 2-foot contour interval. Bathymetric contours at the conservation pool elevation for Maskenthine Wetland, Yankee Hill Lake, and Yankee Hill sediment pond also were interpolated in addition to the 2-foot contours. The surface area and storage capacity of each lake or wetland were calculated for 1-foot intervals of water surface elevation and are tabulated in the Appendix for all water bodies.

  15. Compilation of Local Fallout Data from Test Detonations 1945-1962 Extracted from DASA 1251. Volume I. Continental U.S. Tests

    DTIC Science & Technology

    1979-05-01

    fallout patterns by "dot-dash" lines. The time lines are intended to give only a rough average arrival time in hours as estimated from the wind reports and ... by interpolation between the H-11 and H+11 hour values. 4. The surface air pressure was 13.10 psi, the temperature -2.0°C, and the relative humidity ... surface air pressure was 13.04 psi, the temperature -2.8°C, and the relative humidity 87%.

  16. Corrosion Thermodynamics of Magnesium and Alloys from First Principles as a Function of Solvation

    NASA Astrophysics Data System (ADS)

    Limmer, Krista; Williams, Kristen; Andzelm, Jan

    Thermodynamics of corrosion processes occurring on magnesium surfaces, such as hydrogen evolution and water dissociation, have been examined with density functional theory (DFT) to evaluate the effect of impurities and dilute alloying additions. The modeling of corrosion thermodynamics requires examination of species in a variety of chemical and electronic states in order to accurately represent the complex electrochemical corrosion process. In this study, DFT calculations for magnesium corrosion thermodynamics were performed with two DFT codes (VASP and DMol3), with multiple exchange-correlation functionals for chemical accuracy, as well as with various levels of implicit and explicit solvation for surfaces and solvated ions. The accuracy of the first principles calculations has been validated against Pourbaix diagrams constructed from solid, gas and solvated charged ion calculations. For aqueous corrosion, it is shown that a well parameterized implicit solvent is capable of accurately representing all but the first coordinating layer of explicit water for charged ions.

  17. Effect of analysis parameters on non-linear implicit finite element analysis of marine corroded steel plate

    NASA Astrophysics Data System (ADS)

    Islam, Muhammad Rabiul; Sakib-Ul-Alam, Md.; Nazat, Kazi Kaarima; Hassan, M. Munir

    2017-12-01

    FEA results depend greatly on analysis parameters. The MSC NASTRAN nonlinear implicit analysis code has been used in large-deformation finite element analysis of a pitted marine SM490A steel rectangular plate. The effect of two types of actual pit shapes on structural integrity parameters has been analyzed. For 3-D modeling, a proposed method for simulating the pitted surface with a probabilistic corrosion model has been used. The results have been verified against an empirical formula derived from finite element analyses of steel surfaces generated from different pit data, carried out with the LS-DYNA 971 code. In both solvers, an elasto-plastic material has been used for which an arbitrary stress versus strain curve can be defined. In the latter, the material model is based on the J2 flow theory with isotropic hardening, where a radial return algorithm is used. The comparison shows good agreement between the two results, which ensures successful simulation with comparatively less energy and time.

  18. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  19. Chemisorption of Hydroxide on 2D Materials from DFT Calculations: Graphene versus Hexagonal Boron Nitride.

    PubMed

    Grosjean, Benoit; Pean, Clarisse; Siria, Alessandro; Bocquet, Lydéric; Vuilleumier, Rodolphe; Bocquet, Marie-Laure

    2016-11-17

    Recent nanofluidic experiments revealed strongly different surface charge measurements for boron-nitride (BN) and graphitic nanotubes when in contact with saline and alkaline water (Nature 2013, 494, 455-458; Phys. Rev. Lett. 2016, 116, 154501). These observations contrast with the similar reactivity of a graphene layer and its BN counterpart, within the density functional theory (DFT) framework, for intact and dissociative adsorption of gaseous water molecules. Here we investigate, by DFT in implicit water, single and multiple adsorption of anionic hydroxide on single layers. A differential adsorption strength is found in vacuum for the first ionic adsorption on the two materials: chemisorbed on BN while physisorbed on graphene. The effect of implicit solvation reduces all adsorption values, resulting in a favorable (nonfavorable) adsorption on BN (graphene). We also calculate a pKa ≃ 6 for BN in water, in good agreement with experiments. Comparatively, the unfavorable results for graphene in water echo the weaker surface charge measurements but point to an alternative scenario.

  20. Chemisorption of Hydroxide on 2D Materials From DFT Calculations: Graphene Versus Hexagonal Boron Nitride

    PubMed Central

    Grosjean, Benoit; Pean, Clarisse; Siria, Alessandro; Bocquet, Lyderic; Vuilleumier, Rodolphe; Bocquet, Marie-Laure

    2017-01-01

    Recent nanofluidic measurements revealed strongly different surface charge measurements for boron-nitride and graphitic nanotubes when in contact with saline and alkaline water.1,2 These observations contrast with the similar reactivity of a graphene layer and its boron nitride counterpart, within the Density Functional Theory (DFT) framework, for intact and dissociative adsorption of gaseous water molecules. Here, we investigate, by DFT in implicit water, single and multiple adsorption of anionic hydroxide on single layers. A differential adsorption strength is found in vacuum for the first ionic adsorption on the two materials – chemisorbed on BN while physisorbed on graphene. The effect of implicit solvation reduces all adsorption values, resulting in a favorable (non-favorable) adsorption on BN (graphene). We also calculate a pKa ≃ 6 for BN in water, in good agreement with experiments. Comparatively, the unfavorable results for graphene in water echo the weaker surface charge measurements, but point to an alternative scenario. PMID:27809540

  1. Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1993-01-01

    Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.

  2. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  3. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be mitigated by applying proper image interpolation methods in the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene, ranging from dim to bright conditions.

  4. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
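
    A minimal sketch of the structure-tensor building block behind such shape-driven directional interpolation: smoothed products of image gradients yield a local orientation estimate that an interpolator can follow. The patch and smoothing scale are illustrative, and this is not the paper's full sinogram-interpolation algorithm.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)                 # stand-in for a sinogram patch

gy, gx = np.gradient(img)                    # image gradients
Jxx = ndimage.gaussian_filter(gx * gx, 2.0)  # tensor components, smoothed
Jxy = ndimage.gaussian_filter(gx * gy, 2.0)
Jyy = ndimage.gaussian_filter(gy * gy, 2.0)

# Dominant local orientation (perpendicular to the gradient direction).
theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
# Coherence ~1 where structure is strongly oriented, ~0 where isotropic.
coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy**2) / (Jxx + Jyy + 1e-12)
```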

  5. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  6. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
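
    A toy gap-filler in the spirit of the hybrid approach described above (not the authors' implementation): temporal linear interpolation along each pixel's time series first, then a spatial neighborhood-mean fallback for remaining gaps. Array sizes, the missing-data mask, and the window size are illustrative.

```python
import numpy as np
from scipy import ndimage

lai = np.random.rand(10, 50, 50)             # (time, y, x) stand-in LAI stack
lai[lai < 0.15] = np.nan                     # inject missing data

t = np.arange(lai.shape[0], dtype=float)
filled = lai.copy()
for iy in range(lai.shape[1]):               # temporal pass
    for ix in range(lai.shape[2]):
        series = filled[:, iy, ix]
        good = ~np.isnan(series)
        if 0 < good.sum() < len(series):
            series[~good] = np.interp(t[~good], t[good], series[good])

for it in range(filled.shape[0]):            # spatial fallback pass
    frame = filled[it]
    if np.isnan(frame).any():
        mean = ndimage.generic_filter(frame, np.nanmean, size=5, mode='nearest')
        frame[np.isnan(frame)] = mean[np.isnan(frame)]
```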

  7. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563

  8. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
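
    For reference, a plain-numpy version of the textbook trilinear interpolation formula evaluated at one query point inside a regular voxel grid; this illustrates the method the study recommends, not the authors' code.

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Interpolate volume (indexed [z, y, x]) at fractional coordinates."""
    x0, y0, z0 = int(x), int(y), int(z)          # corner voxel
    xd, yd, zd = x - x0, y - y0, z - z0          # fractional offsets

    c000 = volume[z0, y0, x0];         c100 = volume[z0, y0, x0 + 1]
    c010 = volume[z0, y0 + 1, x0];     c110 = volume[z0, y0 + 1, x0 + 1]
    c001 = volume[z0 + 1, y0, x0];     c101 = volume[z0 + 1, y0, x0 + 1]
    c011 = volume[z0 + 1, y0 + 1, x0]; c111 = volume[z0 + 1, y0 + 1, x0 + 1]

    # Interpolate along x, then y, then z.
    c00 = c000 * (1 - xd) + c100 * xd
    c10 = c010 * (1 - xd) + c110 * xd
    c01 = c001 * (1 - xd) + c101 * xd
    c11 = c011 * (1 - xd) + c111 * xd
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    return c0 * (1 - zd) + c1 * zd

vol = np.random.rand(8, 8, 8)
print(trilinear(vol, 3.2, 4.7, 1.5))
```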

  9. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

    Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.

  10. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method statistically significantly outperforms linear and shape-based interpolation. The interpolation method presented is able to generate image sequences with the appropriate spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  11. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted, global polynomial, local polynomial, tension spline, ordinary Kriging, simple Kriging, and universal Kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error, and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Because of changes in land use, the groundwater level also shows temporal variation; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth have caused over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial, and river areas is relatively high, while the shrinking of farmland and the development of water-saving irrigation have reduced agricultural water use, so the decline rate of the groundwater level in agricultural areas is not significant.
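
    A sketch of the model-selection procedure only: leave-one-out cross-validation of two simple interpolators (IDW and nearest neighbor) on scattered well data. The study used ArcGIS and seven methods; the locations, levels, and interpolators here are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (30, 2))                # 30 hypothetical well locations
z = 50 - 0.1 * xy[:, 0] + rng.normal(0, 1, 30)   # groundwater levels with a trend

def idw(xy_train, z_train, q, power=2.0):
    d = np.linalg.norm(xy_train - q, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z_train) / np.sum(w)

def nearest(xy_train, z_train, q):
    return z_train[np.argmin(np.linalg.norm(xy_train - q, axis=1))]

for name, f in [("IDW", idw), ("nearest", nearest)]:
    errs = []
    for i in range(len(z)):                      # leave-one-out loop
        mask = np.arange(len(z)) != i
        errs.append(f(xy[mask], z[mask], xy[i]) - z[i])
    print(name, "RMSE:", np.sqrt(np.mean(np.square(errs))))
```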

  12. Comparison of sEMG processing methods during whole-body vibration exercise.

    PubMed

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
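
    A sketch of the band-stop option discussed above: cascade IIR notch filters at the vibration frequency and its harmonics. Sampling rate, vibration frequency, number of harmonics, and Q are illustrative; the spectral-interpolation variant would instead replace the affected FFT bins by interpolating between their neighbors.

```python
import numpy as np
from scipy import signal

fs = 2000.0                            # sampling rate (Hz)
f_vib = 30.0                           # whole-body vibration frequency (Hz)
emg = np.random.randn(10 * int(fs))    # stand-in sEMG signal

filtered = emg.copy()
for k in range(1, 9):                  # fundamental + harmonics up to ~240 Hz
    b, a = signal.iirnotch(k * f_vib, Q=30.0, fs=fs)
    filtered = signal.filtfilt(b, a, filtered)

rms = np.sqrt(np.mean(filtered ** 2))  # sEMG RMS after artifact removal
print(rms)
```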

  13. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted public concerns about the health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements, and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, the k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.

  14. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted public concerns about the health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements, and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, the k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146
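
    A sketch of the accelerated IDW idea: scale time into an extra spatial dimension (the "extension approach"), index all samples in a k-d tree, and interpolate each query from its k nearest space-time neighbors. The time-scaling, sample values, k, and power are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
xyt = rng.uniform(0, 1, (5000, 3))       # (x, y, t) sample coordinates, t already scaled
pm25 = rng.uniform(2, 35, 5000)          # PM2.5 concentrations at the samples

tree = cKDTree(xyt)                      # k-d tree over the space-time points

queries = rng.uniform(0, 1, (10, 3))
d, idx = tree.query(queries, k=12)       # 12 nearest space-time neighbors per query
w = 1.0 / np.maximum(d, 1e-9) ** 2       # inverse distance squared weights
est = np.sum(w * pm25[idx], axis=1) / np.sum(w, axis=1)
print(est)
```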

  15. A patient-specific aortic valve model based on moving resistive immersed implicit surfaces.

    PubMed

    Fedele, Marco; Faggiano, Elena; Dedè, Luca; Quarteroni, Alfio

    2017-10-01

    In this paper, we propose a full computational framework to simulate the hemodynamics in the aorta including the valve. Closed and open valve surfaces, as well as the lumen aorta, are reconstructed directly from medical images using new ad hoc algorithms, allowing a patient-specific simulation. The fluid dynamics problem that accounts for the movement of the valve is solved by a new 3D-0D fluid-structure interaction model in which the valve surface is implicitly represented through level set functions, yielding, in the Navier-Stokes equations, a resistive penalization term forcing the blood to adhere to the valve leaflets. The dynamics of the valve between its closed and open position is modeled using a reduced geometric 0D model. At the discrete level, a finite element formulation is used and the SUPG stabilization is extended to include the resistive term in the Navier-Stokes equations. Then, after time discretization, the 3D fluid and 0D valve models are coupled through a staggered approach. This computational framework, applied to a patient-specific geometry and data, allows us to simulate the movement of the valve, the sharp pressure jump occurring across the leaflets, and the blood flow pattern inside the aorta.
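
    The resistive penalization admits a one-line summary; the sketch below uses common notation from the resistive immersed implicit surface literature (resistance coefficient R, level set φ locating the leaflets, smoothed delta δ_ε, and leaflet velocity u_Γ) and is an illustrative reconstruction rather than a quotation of the paper's equations:

```latex
\rho\big(\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}\big)
  - \nabla\cdot\boldsymbol{\sigma}(\mathbf{u},p)
  + \underbrace{\tfrac{R}{\varepsilon}\,\delta_\varepsilon(\varphi)\,
     (\mathbf{u}-\mathbf{u}_\Gamma)}_{\text{resistive term}} = \mathbf{0},
\qquad \nabla\cdot\mathbf{u} = 0,
```

    with the penalization active only in an ε-neighborhood of the valve surface {φ = 0}, so the fluid velocity is driven toward the leaflet velocity there.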

  16. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

    Representation of fronts detected by altimetry is used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily alongtrack altimetry. Then, modeled sea surface height is interpolated to the locations of the alongtrack observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained from determining the number of "extra" fronts found in the model but not in the altimetry, and from assessing the horizontal and vertical dimensions of the front in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy alongtrack altimetry observations, and to the minimum size of fronts being analyzed, is assessed.
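
    A hedged sketch of such a score: detect fronts as along-track SSH-gradient maxima above a threshold, then count the fraction of observed fronts matched by a model front within a distance tolerance. The threshold and tolerance values below are placeholders, not the operational settings.

```python
import numpy as np

def detect_fronts(ssh, spacing_km, grad_thresh):
    """Indices where the along-track SSH gradient magnitude exceeds a threshold."""
    grad = np.abs(np.gradient(ssh, spacing_km))
    return np.flatnonzero(grad > grad_thresh)

def frontal_score(obs_ssh, model_ssh, spacing_km=7.0,
                  grad_thresh=0.02, match_tol_km=50.0):
    """Percentage (0-100) of altimetry-detected fronts replicated by the model."""
    obs_f = detect_fronts(obs_ssh, spacing_km, grad_thresh)
    mod_f = detect_fronts(model_ssh, spacing_km, grad_thresh)
    if obs_f.size == 0:
        return float("nan")
    tol = match_tol_km / spacing_km            # tolerance in sample counts
    hits = sum(mod_f.size > 0 and np.min(np.abs(mod_f - i)) <= tol
               for i in obs_f)
    return 100.0 * hits / obs_f.size
```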

  17. New, simplified, interpolation method for estimation of microscopic nuclear masses based on the P-factor, P = N_p N_n / (N_p + N_n)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.; Brenner, D.S.; Casten, R.F.

    1987-12-10

    A new semi-empirical method, based on the use of the P-factor (P = N_p N_n / (N_p + N_n)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
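
    The P-factor itself is a one-line computation once the valence nucleon numbers are chosen; the sketch below uses a simplified nearest-closed-shell counting that is our illustrative assumption, not the authors' prescription:

```python
def p_factor(Z, N, Z_magic=(50, 82, 126), N_magic=(82, 126, 184)):
    """P = Np*Nn/(Np+Nn), with Np and Nn taken as distances of Z and N
    from the nearest closed shells (a simplified valence counting)."""
    Np = min(abs(Z - m) for m in Z_magic)
    Nn = min(abs(N - m) for m in N_magic)
    return 0.0 if Np + Nn == 0 else Np * Nn / (Np + Nn)

print(p_factor(92, 146))   # e.g. 238U: Np = 10, Nn = 20, P = 6.67
```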

  18. A climatically-derived global soil moisture data set for use in the GLAS atmospheric circulation model seasonal cycle experiment

    NASA Technical Reports Server (NTRS)

    Willmott, C. J.; Field, R. T.

    1984-01-01

    Algorithms for point interpolation and contouring on the surface of the sphere and in Cartesian two-space are developed from Shepard's (1968) well-known local search method. These mapping procedures then are used to investigate the errors which appear on small-scale climate maps as a result of the all-too-common practice of interpolating from irregularly spaced data points to the nodes of a regular lattice, and contouring, in Cartesian two-space. Using mean annual air temperatures, the temperature field over the western half of the northern hemisphere is estimated both on the sphere, assumed to be correct, and in Cartesian two-space. When the spherically- and Cartesian-approximated air temperature fields are mapped and compared, the magnitudes (as large as 5 C to 10 C) and distribution of the errors associated with the latter approach become apparent.
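
    A minimal sketch of Shepard-type inverse-distance interpolation evaluated on the sphere, where great-circle rather than Euclidean distance enters the weights (the power parameter and station data are illustrative):

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Haversine great-circle distance in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a))

def shepard_sphere(lats, lons, vals, qlat, qlon, power=2.0):
    """Shepard interpolation at (qlat, qlon) from scattered stations."""
    d = great_circle_km(lats, lons, qlat, qlon)
    if np.any(d < 1e-9):                  # query coincides with a station
        return float(vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * vals) / np.sum(w))

lats, lons = np.array([40.0, 45.0, 50.0]), np.array([-100.0, -90.0, -95.0])
print(shepard_sphere(lats, lons, np.array([10.0, 5.0, 0.0]), 46.0, -95.0))
```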

  19. Assessing the Suitability and Limitations of Satellite-based Measurements for Estimating CO, CO2, NO2 and O3 Concentrations over the Niger Delta

    NASA Astrophysics Data System (ADS)

    Fagbeja, M. A.; Hill, J. L.; Chatterton, T. J.; Longhurst, J. W.; Akinyede, J. O.

    2011-12-01

    Space-based satellite sensor technology may provide important tools in the study and assessment of national, regional and local air pollution. However, the application of optical satellite sensor observation of atmospheric trace gases, including those considered to be 'air pollutants', within the lower latitudes is limited due to prevailing climatic conditions. The lack of appropriate air pollution ground monitoring stations within the tropical belt reduces the ability to verify and calibrate space-based measurements. This paper considers the suitability of satellite remotely sensed data in estimating concentrations of atmospheric trace gases in view of the prevailing climate over the Niger Delta region. The methodological approach involved identifying suitable satellite data products and using the ArcGIS Geostatistical Analyst kriging interpolation technique to generate surface concentrations from satellite column measurements. The observed results are considered in the context of the climate of the study area. Using data from January 2001 to December 2005, an assessment of the suitability of satellite sensor data to interpolate column concentrations of trace gases over the Niger Delta has been undertaken and indicates varying degrees of reliability. The level of reliability of the interpolated surfaces is predicated on the number and spatial distributions of column measurements. Accounting for the two climatic seasons in the region, the interpolation of total column concentrations of CO and CO2 from SCIAMACHY produced both reliable and unreliable results over inland parts of the region during the dry season, while mainly unreliable results are observed over the coastal parts especially during the rainy season due to inadequate column measurements. The interpolation of tropospheric measurements of NO2 and O3 from GOME and OMI respectively produced reliable results all year. This is thought to be due to the spatial distribution of available column measurements, which were more regularly distributed over the region than the total column measurements of CO and CO2. Observations also indicated higher concentrations during the dry season than during the wet season. The observed trend in the concentration of tropospheric O3 was as expected, considering the observed concentrations of the precursor gases CO and NO2. Whilst satellites currently play a significant role in the assessment of global air pollution and the long-range transport of air pollutants, the technology is faced with limitations in assessing ground level concentrations of pollutants. These limitations restrict the extent to which both pollution emissions and impacts on receptors can be accurately assessed. Further research is required to improve the capability of satellite sensors to observe atmospheric pollutants within the lower troposphere, where pollution has the most direct impacts on humans and ecosystems.

  20. Hermite-Birkhoff interpolation in the nth roots of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.

    1980-06-01

    Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.

  1. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that interpolation using a fourth-order polynomial provides the best fit to option prices, having the lowest pricing error.
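
    The LOOCV criterion is simple to implement: drop one option, refit the smile interpolant to the rest, and record the error at the held-out strike. A sketch with a polynomial fit; for brevity the error is measured on implied volatilities rather than option prices, and the data are synthetic:

```python
import numpy as np

def loocv_error(strikes, ivs, degree=4):
    """Leave-one-out RMSE of a polynomial fit to the implied-volatility smile."""
    errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i
        coef = np.polyfit(strikes[mask], ivs[mask], degree)
        errs.append(np.polyval(coef, strikes[i]) - ivs[i])
    return float(np.sqrt(np.mean(np.square(errs))))

strikes = np.linspace(80, 120, 15)
ivs = 0.20 + 0.0004 * (strikes - 100)**2          # synthetic smile
for deg in (2, 4):
    print(deg, loocv_error(strikes, ivs, degree=deg))
```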

  2. Assessing Learning Quality: Reconciling Institutional, Staff and Educational Demands.

    ERIC Educational Resources Information Center

    Biggs, John

    1996-01-01

    Two frameworks for educational assessment are distinguished: one quantitative, adequate for construing some kinds of learning, and one qualitative, more appropriate for most objectives in higher education. The paper argues that institutions implicitly encourage quantitative assessment, thus encouraging a surface approach to learning although…

  3. Comparison of two fractal interpolation methods

    NASA Astrophysics Data System (ADS)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, and is therefore suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has strong periodicity and is suitable for simulating periodic surfaces.
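
    A compact sketch of the one-dimensional midpoint displacement method compared above; the Hurst exponent H controls roughness, with profile fractal dimension D = 2 − H:

```python
import numpy as np

def midpoint_displacement(n_levels, H=0.5, seed=0):
    """1-D fractal profile via recursive midpoint displacement;
    the displacement scale shrinks by 2**(-H) per level (D = 2 - H)."""
    rng = np.random.default_rng(seed)
    pts = np.array([0.0, 0.0])                 # profile endpoints
    scale = 1.0
    for _ in range(n_levels):
        mids = (pts[:-1] + pts[1:]) / 2 + rng.normal(0, scale, pts.size - 1)
        out = np.empty(pts.size * 2 - 1)
        out[0::2], out[1::2] = pts, mids
        pts, scale = out, scale * 2**(-H)
    return pts

profile = midpoint_displacement(10, H=0.4)     # D ≈ 1.6: a rougher profile
print(profile.size, round(float(profile.std()), 3))
```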

  4. Remote sensing, geographical information systems, and spatial modeling for analyzing public transit services

    NASA Astrophysics Data System (ADS)

    Wu, Changshan

    Public transit service is a promising transportation mode because of its potential to address urban sustainability. Current ridership of public transit, however, is very low in most urban regions, particularly those in the United States. This woeful transit ridership can be attributed to many factors, among which poor service quality is key. Given this, there is a need for transit planning and analysis to improve service quality. Traditionally, spatially aggregate data are utilized in transit analysis and planning. Examples include data associated with the census, zip codes, states, etc. Few studies, however, address the influences of spatially aggregate data on transit planning results. In this research, previous studies in transit planning that use spatially aggregate data are reviewed. Next, problems associated with the utilization of aggregate data, the so-called modifiable areal unit problem (MAUP), are detailed and the need for fine resolution data to support public transit planning is argued. Fine resolution data is generated using intelligent interpolation techniques with the help of remote sensing imagery. In particular, impervious surface fraction, an important socio-economic indicator, is estimated through a fully constrained linear spectral mixture model using Landsat Enhanced Thematic Mapper Plus (ETM+) data within the metropolitan area of Columbus, Ohio in the United States. Four endmembers, low albedo, high albedo, vegetation, and soil are selected to model heterogeneous urban land cover. Impervious surface fraction is estimated by analyzing low and high albedo endmembers. With the derived impervious surface fraction, three spatial interpolation methods, spatial regression, dasymetric mapping, and cokriging, are developed to interpolate detailed population density. Results suggest that cokriging applied to impervious surface is a better alternative for estimating fine resolution population density. With the derived fine resolution data, a multiple route maximal covering/shortest path (MRMCSP) model is proposed to address the tradeoff between public transit service quality and access coverage in an established bus-based transit system. Results show that it is possible to improve current transit service quality by eliminating redundant or underutilized service stops. This research illustrates that fine resolution data can be efficiently generated to support urban planning, management and analysis. Further, this detailed data may necessitate the development of new spatial optimization models for use in analysis.

  5. Determination of Arctic sea ice thickness in the winter of 2007

    NASA Astrophysics Data System (ADS)

    Calvao, J.; Wadhams, P.; Rodrigues, J.

    2009-04-01

    The L3H phase of operation of ICESat's laser in the winter of 2007 coincided for about two weeks with the cruise of the British submarine Tireless, on which upward-looking and multibeam sonar systems were mounted, thus providing the first opportunity for a simultaneous determination of the sea ice freeboard and draft in the Arctic Ocean. The ICESat satellite carries a laser altimeter dedicated to the observation of polar regions, generating accurate profiles of surface topography along the tracks (footprint diameter 70 m), which can be inverted to determine sea-ice freeboard heights using a "lowest level" filtering scheme. The procedure applied to obtain the ice freeboard F=h-N-MDT (where h is the ICESat ellipsoidal height estimate, N is the geoid undulation and MDT is the ocean mean dynamic topography) for the whole Arctic basin (with the exception of points beyond 86N) consisted of a high-pass filtering of the satellite data to remove low frequency effects due to the geoid and ocean dynamics (the geoid model ArcGP, with sufficient accuracy to allow the computation of the freeboard, was very recently made available). The original tide model was replaced by the tide model AOTIM5 and the tide loading model TPXO6.2. The inverse barometer correction was computed. As there are no MDT models with enough accuracy, it is necessary to identify leads of open water or thin ice to allow the interpolation of the ocean surface, using surface reflectivity and waveform shape. Several solutions were tested to define the ocean surface, and the computed freeboard values were interpolated on a 5x5 minute grid, onto which the submarine track was also interpolated. At the same time, along-track single beam upward-looking sonar data were recorded using an Admiralty pattern 780 echo sounder carried by the Tireless, from which we have generated an ice draft profile of about 8,000 km between Fram Strait and the north coast of Alaska and back. The merging of the two data sets provides a new insight into the present Arctic sea ice thickness distribution, while a comparison with results obtained by previous submarine cruises and previous phases of operation of ICESat allows a fresh evaluation of the rate of sea ice thinning.

  6. Spherical Demons: Fast Surface Registration

    PubMed Central

    Yeo, B.T. Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2009-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast – registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces. PMID:18979813

  7. Spherical demons: fast surface registration.

    PubMed

    Yeo, B T Thomas; Sabuncu, Mert; Vercauteren, Tom; Ayache, Nicholas; Fischl, Bruce; Golland, Polina

    2008-01-01

    We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast - registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.

  8. Interpolations of groundwater table elevation in dissected uplands.

    PubMed

    Chung, Jae-won; Rogers, J David

    2012-01-01

    The variable elevation of the groundwater table in the St. Louis area was estimated using multiple linear regression (MLR), ordinary kriging, and cokriging as part of a regional program seeking to assess liquefaction potential. Surface water features were used to determine the minimum water table for MLR and to supplement the principal variables for ordinary kriging and cokriging. By evaluating the known depth to the water and the minimum water table elevation, the MLR analysis approximates the groundwater elevation for a contiguous hydrologic system. Ordinary kriging and cokriging estimate values in unsampled areas by calculating the spatial relationships between the unsampled and sampled locations. In this study, ordinary kriging did not incorporate topographic variations as an independent variable, while cokriging included topography as a supporting covariable. Cross validation suggests that cokriging provides a more reliable estimate at known data points with less uncertainty than the other methods. Profiles extending through the dissected uplands terrain suggest that: (1) the groundwater table generated by MLR mimics the ground surface and elicits an exaggerated interpolation of groundwater elevation; (2) the groundwater table estimated by ordinary kriging tends to ignore local topography and exhibits oversmoothing of the actual undulations in the water table; and (3) cokriging appears to give a realistic water surface, which rises and falls in proportion to the overlying topography. The authors concluded that cokriging provided the most realistic estimate of the groundwater surface, which is the key variable in assessing soil liquefaction potential in unconsolidated sediments. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
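
    Cokriging with a topographic covariable is usually run in dedicated geostatistics software; as a self-contained stand-in, the sketch below implements regression kriging (a linear trend on ground elevation plus Gaussian-process interpolation of the residuals), which captures the same idea of a water table that rises and falls with topography. Data and kernel settings are invented for illustration; this is not the authors' workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def regression_kriging(xy, elev, gw_elev, xy_new, elev_new):
    """Topographic trend plus GP-interpolated residuals (kriging analogue)."""
    trend = LinearRegression().fit(elev.reshape(-1, 1), gw_elev)
    resid = gw_elev - trend.predict(elev.reshape(-1, 1))
    gp = GaussianProcessRegressor(kernel=RBF(500.0) + WhiteKernel(0.1),
                                  normalize_y=True).fit(xy, resid)
    return trend.predict(elev_new.reshape(-1, 1)) + gp.predict(xy_new)

rng = np.random.default_rng(1)
xy = rng.uniform(0, 5000, (60, 2))                   # well locations (m)
elev = 100 + 0.01 * xy[:, 0] + rng.normal(0, 2, 60)  # ground surface (m)
gw = elev - 8 + rng.normal(0, 1, 60)                 # water table ~8 m down
print(regression_kriging(xy, elev, gw, xy[:3], elev[:3]))
```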

  9. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.

  10. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
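
    A hedged sketch of the up-sampling step compared in that study: densify the sparsely sampled first-dimension profile with a cubic spline (or a piecewise cubic Hermite interpolating polynomial), then estimate the retention-time shift between injections by cross-correlation. Peak shapes and the up-sampling factor are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

def upsample(t, y, factor=10, method="spline"):
    """Densify a sparsely sampled first-dimension chromatogram."""
    t_fine = np.linspace(t[0], t[-1], len(t) * factor)
    f = CubicSpline(t, y) if method == "spline" else PchipInterpolator(t, y)
    return t_fine, f(t_fine)

def xcorr_lag(ref, sig):
    """Lag (in samples) at which sig best matches ref."""
    lags = np.arange(-len(ref) + 1, len(ref))
    c = np.correlate(ref - ref.mean(), sig - sig.mean(), mode="full")
    return lags[np.argmax(c)]

t = np.linspace(0, 30, 31)                          # sparse sampling (min)
peak = lambda c: np.exp(-0.5 * ((t - c) / 2.0)**2)
tf, ref = upsample(t, peak(14.0))
_, sig = upsample(t, peak(15.0))
print(xcorr_lag(ref, sig) * (tf[1] - tf[0]), "min")  # ~ -1.0 min shift
```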

  11. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed; it can save 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
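
    For orientation, HEVC interpolates luma half-sample positions with a fixed 8-tap filter whose coefficients are given in the standard; the framing code below is an illustrative software direct-form sketch, not the paper's hardware design:

```python
import numpy as np

HEVC_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])  # filter gain 64

def half_pel_row(row):
    """Luma half-sample values between neighboring pixels of one row."""
    padded = np.pad(row.astype(np.int64), (3, 4), mode="edge")
    out = np.array([(padded[i:i + 8] * HEVC_HALF).sum()
                    for i in range(len(row))])
    return np.clip((out + 32) >> 6, 0, 255)             # round, divide by 64

row = np.array([10, 12, 15, 20, 40, 90, 150, 180, 190, 195], dtype=np.uint8)
print(half_pel_row(row))
```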

  12. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  13. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media

    DTIC Science & Technology

    2015-09-24

    TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.

  14. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed the diffusion tensors, with the direction of the tensors represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve the tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  15. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  16. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
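
    The kernel-interpolator view is easiest to see in one dimension: with triangular membership functions centered on a grid, the Takagi-Sugeno weighted average of rule consequents reproduces piecewise-linear interpolation. The grid and samples below are invented for illustration:

```python
import numpy as np

def triangular_memberships(grid, x):
    """Membership degree of x in each triangular fuzzy set on the grid."""
    mu = np.zeros(len(grid))
    for j, c in enumerate(grid):
        left = grid[j - 1] if j > 0 else c
        right = grid[j + 1] if j < len(grid) - 1 else c
        if left < x <= c:
            mu[j] = (x - left) / (c - left)
        elif c <= x < right:
            mu[j] = (right - x) / (right - c)
        elif x == c:                       # degenerate single-point case
            mu[j] = 1.0
    return mu

def flhi_1d(grid, values, x):
    """Takagi-Sugeno weighted average acting as an interpolation kernel."""
    mu = triangular_memberships(grid, float(np.clip(x, grid[0], grid[-1])))
    return float(mu @ values / mu.sum())

grid = np.array([0.0, 1.0, 2.0, 4.0])
vals = np.array([0.0, 1.0, 4.0, 16.0])    # samples of x**2
print(flhi_1d(grid, vals, 1.5))           # piecewise-linear result: 2.5
```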

  17. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method by utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method could improve the interpolation accuracy in comparison with a static interpolation method.

  18. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  19. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of such interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  20. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion.

    PubMed

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matt A; Burken, Joel G

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman's correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. The rapid and non-invasive nature of tree sampling are notable advantages: even with less than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered comparable delineation to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.

  1. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    USGS Publications Warehouse

    Wilson, Jordan L.; Samaranayake, V.A.; Limmer, Matthew A.; Burken, Joel G.

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman’s correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. The rapid and non-invasive nature of tree sampling are notable advantages: even with less than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered comparable delineation to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.

  2. Surface modeling of soil antibiotics.

    PubMed

    Shi, Wen-jiao; Yue, Tian-xiang; Du, Zheng-ping; Wang, Zong; Li, Xue-wen

    2016-02-01

    Large amounts of livestock and poultry feces are continuously applied to soils in intensive vegetable cultivation areas, so some veterinary antibiotics persist in soils and pose health risks. Given the spatial heterogeneity of antibiotic residues, developing a suitable technique to interpolate soil antibiotic residues is still a challenge. In this study, we developed an effective interpolator, high accuracy surface modeling (HASM) combined with vegetable types, to predict the spatial patterns of soil antibiotics, using 100 surface soil samples collected from an intensive vegetable cultivation area located in eastern China; the fluoroquinolones (FQs), including ciprofloxacin (CFX), enrofloxacin (EFX) and norfloxacin (NFX), were analyzed as the target antibiotics. The results show that vegetable type is an effective factor to incorporate for improving interpolator performance. HASM achieves smaller mean absolute errors (MAEs) and root mean square errors (RMSEs) for total FQs (NFX+CFX+EFX), NFX, CFX and EFX than kriging with external drift (KED), stratified kriging (StK), ordinary kriging (OK) and inverse distance weighting (IDW). The MAE of HASM for FQs is 55.1 μg/kg, while the MAEs of KED, StK, OK and IDW are 99.0 μg/kg, 102.8 μg/kg, 106.3 μg/kg and 108.7 μg/kg, respectively. Further, the RMSEs of HASM for FQs (CFX, EFX and NFX) are 106.2 μg/kg (88.6 μg/kg, 20.4 μg/kg and 39.2 μg/kg), which are 30% (27%, 22% and 36%), 33% (27%, 27% and 43%), 38% (34%, 23% and 41%) and 42% (32%, 35% and 51%) less than those of KED, StK, OK and IDW, respectively. HASM also provides better maps, with more details and more consistent maximum and minimum values of soil antibiotics compared with the measured data. The better performance can be attributed to the fact that HASM takes the vegetable type information as global approximate information, and takes local sampling data as its optimum control constraints. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    PubMed Central

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman’s correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. The rapid and non-invasive nature of tree sampling are notable advantages: even with less than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered comparable delineation to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years. PMID:29451904

  4. Airborne laser scanning for forest health status assessment and radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena

    2013-04-01

    Structural parameters of forest stands/ecosystems are an important complementary source of information to spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is in focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3D information. The main objective of the study was to compare the results from several approaches to interpolation of a digital elevation model (DEM) and a digital surface model (DSM). We worked with airborne LiDAR data of different densities (TopEye Mk II 1,064 nm instrument, 1-5 points/m2) acquired over the Norway spruce forests situated in the Beskydy Mountains, the Czech Republic. Three interpolation algorithms of increasing complexity were tested: i/ the nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); ii/ the averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); iii/ the active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions for the resulting coupled raster DEM and DSM outputs, 0.4 m and 1 m, calculated by each algorithm. The grids correspond to the spatial resolutions of the hyperspectral imagery data for which the DEMs were used in a/ geometrical correction and b/ building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing the results from the different interpolations/raster resolutions: 1/ comparison of the calculated DEMs or DSMs between themselves; 2/ comparison with field data: the DEM with measurements from a referential GPS, the DSM with field tree allometric measurements, where tree height was calculated as DSM-DEM. The results of the analyses show that: 1/ averaging techniques tend to underestimate the tree height, and the generated surface does not follow the first LiDAR echoes, both for 1 m and 0.4 m pixel size; 2/ we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased with finer resolution (0.4 m); 3/ the accuracy of the DEMs calculated by the tested algorithms is similar.

  5. Estimates of Flow Duration, Mean Flow, and Peak-Discharge Frequency Values for Kansas Stream Locations

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.

  6. Students Using Multimodal Literacies to Surface Micronarratives of United States Immigration

    ERIC Educational Resources Information Center

    Ghiso, Maria Paula; Low, David E.

    2013-01-01

    This article explores how immigrant students in the United States utilise multimodal literacy practices to complicate dominant narratives of American national identity--narratives of facile assimilation, meritocracy and linear trajectories. Such ideologies can be explicitly evident in curricular materials or can be woven more implicitly into…

  7. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.

  8. On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians

    NASA Astrophysics Data System (ADS)

    Valverde, Clodoaldo; Baseia, Basílio

    2018-01-01

    We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.

  9. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
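
    The lookup-and-interpolate step is compact: locate the containing triangle, compute barycentric coordinates, and average the nodal values. The self-contained sketch below substitutes scipy's Delaunay triangulation for the package's stored grid:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_interpolate(points, values, query):
    """Linear interpolation of nodal values over an unstructured triangular grid."""
    tri = Delaunay(points)
    s = int(tri.find_simplex(query))          # containing triangle, -1 if outside
    if s < 0:
        raise ValueError("query point outside the tabulated domain")
    T = tri.transform[s]                      # affine map to barycentric coordinates
    b = T[:2] @ (query - T[2])
    bary = np.append(b, 1.0 - b.sum())        # the three barycentric weights
    return float(bary @ values[tri.simplices[s]])

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0, 3.0])         # e.g. a tabulated state variable
print(triangle_interpolate(pts, vals, np.array([0.25, 0.25])))
```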

  10. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
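
    A sketch of the two stabilizing ingredients named above: take Chebyshev points, reorder them as a Leja sequence (each new node maximizes the product of distances to the nodes already chosen), and build the divided-difference coefficients of the Newton form. Interval scaling is left out for brevity:

```python
import numpy as np

def leja_order(nodes):
    """Greedy Leja ordering: each node maximizes the distance product
    to the nodes already chosen."""
    rest = list(nodes)
    ordered = [max(rest, key=abs)]
    rest.remove(ordered[0])
    while rest:
        nxt = max(rest, key=lambda p: np.prod([abs(p - q) for q in ordered]))
        ordered.append(nxt)
        rest.remove(nxt)
    return np.array(ordered)

def newton_coeffs(x, y):
    """Divided-difference coefficients for the Newton interpolation form."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
    return c

n = 16
cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # Chebyshev points
x = leja_order(cheb)
print(newton_coeffs(x, np.exp(x))[:4])
```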

  11. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  12. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and other fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated by the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm by 1 mm) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps with the same resolution, the cubic spline interpolation and the bicubic interpolation are almost equally the best interpolation methods, while the nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, the cubic spline, the bicubic, and the nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, the cubic spline, the bicubic, and the nearest neighbor interpolation, respectively. Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution should be used as an IMRT QA tool and that the measured fluence maps should be interpolated using the cubic spline interpolation or the bicubic interpolation for head and neck IMRT delivery. This work was supported by Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
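
    The γ comparison itself reduces to a small formula: for each reference point, γ is the minimum over evaluation points of the combined distance-to-agreement and dose-difference measure. A 1-D global-gamma sketch under the same 3%/3 mm criteria (exhaustive search, no sub-sample refinement), with invented profiles:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Global 1-D gamma: dose criterion dd (fraction of max dose), DTA in mm."""
    d_norm = dd * d_ref.max()
    g = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        gam_sq = ((x_eval - xr) / dta)**2 + ((d_eval - dr) / d_norm)**2
        g[i] = np.sqrt(gam_sq.min())
    return g

x = np.arange(0.0, 100.0, 1.0)                 # positions (mm)
ref = np.exp(-0.5 * ((x - 50) / 12)**2)        # reference profile
ev = 1.01 * np.exp(-0.5 * ((x - 51) / 12)**2)  # shifted, rescaled profile
g = gamma_1d(x, ref, x, ev)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```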

  13. Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

    DTIC Science & Technology

    2008-02-01

    Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms...congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates

  14. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, like hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and kriging with external drift cannot be employed to incorporate secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  15. Multidimensional directional flux weighted upwind scheme for multiphase flow modeling in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Jin, G.

    2012-12-01

    Multiphase flow modeling is an important numerical tool for better understanding transport processes in fields including, but not limited to, petroleum reservoir engineering, remediation of groundwater contamination, and risk evaluation of greenhouse gases such as CO2 injected into deep saline reservoirs. However, accurate numerical modeling of multiphase flow still poses many challenges, arising from the inherent tight coupling and strongly non-linear nature of the governing equations and from the highly heterogeneous media. Counter-current flow, caused by adverse relative mobility contrast and by gravitational and capillary forces, introduces additional numerical instability. Recently, multipoint flux approximation (MPFA) has become a subject of extensive research and has been demonstrated to greatly reduce grid orientation effects compared to the conventional single point upstream (SPU) weighting scheme, especially in higher dimensions. However, the presently available MPFA schemes are mathematically targeted at certain types of grids in two dimensions; a more general form of MPFA scheme is needed for both 2-D and 3-D problems. In this work a new upstream weighting scheme based on multipoint directional incoming fluxes is proposed, which incorporates the full permeability tensor to account for the heterogeneity of the porous media. First, the multiphase governing equations are decoupled into an elliptic pressure equation and a hyperbolic or parabolic saturation equation, depending on whether gravitational and capillary pressures are present. Next, a dual secondary grid (called the finite volume grid) is formulated from a primary grid (called the finite element grid) to create interaction regions for each grid cell over the entire simulation domain. Such a discretization must ensure the conservation of mass and maintain the continuity of the Darcy velocity across the boundaries between neighboring interaction regions. The pressure field is then implicitly calculated from the pressure equation, which in turn yields the velocity field for directional flux calculation at each grid node. The directional flux at the center of each interaction surface is also calculated by interpolation from the element nodal fluxes using shape functions. The MPFA scheme is performed by a specific linear combination of all incoming fluxes into the upstream cell, represented by either nodal fluxes or interpolated surface boundary fluxes, to produce an upwind directional-flux-weighted relative mobility at the center of the interaction region boundary. This upwind weighted relative mobility is then used to calculate the saturations of each fluid phase explicitly. The proposed upwind weighting scheme has been implemented in a mixed finite element-finite volume (FE-FV) method, which allows for handling complex reservoir geometry with second-order accuracy in approximating primary variables. The numerical solver has been tested on several benchmark problems. The application of the proposed scheme to migration path analysis of CO2 injected into deep saline reservoirs in 3-D has demonstrated its ability and robustness in handling multiphase flow with adverse mobility contrast in highly heterogeneous porous media.
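
    For contrast with the multipoint construction described above, the conventional single point upstream (SPU) weighting it improves on is easy to sketch in 1-D; the quadratic fractional-flow model and all parameters below are invented for illustration:

    ```python
    import numpy as np

    # Minimal 1-D sketch of single point upstream (SPU) mobility weighting
    # for an explicit saturation update; hypothetical fractional-flow model.
    def frac_flow(s, mobility_ratio=2.0):
        return s**2 / (s**2 + mobility_ratio * (1.0 - s)**2)

    nx, dx, dt, u = 100, 1.0, 0.2, 1.0  # cells, spacing, time step, total flux
    s = np.zeros(nx)
    s[0] = 1.0                          # injected phase enters at the left

    for _ in range(300):
        f = frac_flow(s)
        # For positive flux u the upstream cell is the left neighbor, so the
        # interface flux F_{i-1/2} takes the upwind value u * f[i-1].
        flux_in = u * np.concatenate(([f[0]], f[:-1]))
        flux_out = u * f
        s[1:] = np.clip(s[1:] + dt / dx * (flux_in[1:] - flux_out[1:]),
                        0.0, 1.0)

    print(np.count_nonzero(s > 0.5), "cells flooded above 0.5 saturation")
    ```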

  16. [Monitoring the thermal plume from coastal nuclear power plant using satellite remote sensing data: modeling, and validation].

    PubMed

    Zhu, Li; Zhao, Li-Min; Wang, Qiao; Zhang, Ai-Ling; Wu, Chuan-Qing; Li, Jia-Guo; Shi, Ji-Xiang

    2014-11-01

    The thermal plume from a coastal nuclear power plant is a small-scale human activity, monitoring of which requires remote sensing data of high frequency and high spatial resolution. The infrared scanner (IRS) on board HJ-1B has an infrared channel, IRS4, with 300 m spatial and 4-day temporal resolution. Remote sensing data acquired with IRS4 are an available source for monitoring thermal plumes. A retrieval scheme for coastal sea surface temperature (SST) was built to monitor the thermal plume from the nuclear power plant. The research area is located near the Guangdong Daya Bay Nuclear Power Station (GNPS), where synchronized validations were also implemented. National Centers for Environmental Prediction (NCEP) data were interpolated spatially and temporally. The interpolated data, as well as surface weather conditions, were subsequently fed into a radiative transfer model for the atmospheric correction of the IRS4 thermal image. A look-up table (LUT) was built for the inversion between IRS4 channel radiance and radiometric temperature, and a fitted function was also built from the LUT data for the same purpose. The SST was finally retrieved based on the preprocessing procedures mentioned above. The bulk temperature (BT) of 84 samples distributed near GNPS was collected shipboard and synchronously using conductivity-temperature-depth (CTD) instruments. The discrete sample data were surface interpolated and compared with the satellite-retrieved SST. Results show that the average BT over the study area is 0.47 degrees C higher than the retrieved skin temperature (ST). For areas far away from the outfall, the ST is higher than the BT, with differences less than 1.0 degrees C; the main driving force for temperature variations in these regions is solar radiation. For areas near the outfall, on the contrary, the retrieved ST is lower than the BT, and greater differences between the two (> 1.0 degrees C) occur closer to the outfall; here, the convective heat transfer resulting from the thermal plume is the primary cause of the temperature variations. Temperature rise (TR) distributions obtained from remote sensing data and in-situ measurements are consistent, except that the interpolated BT shows more level details (> 5 levels) than the ST (up to 4 levels). The areas with higher TR levels (> 2) are larger on BT maps, while for lower TR levels (≤ 2) the two methods show no obvious differences. Minimal errors for satellite-derived SST occur regularly around 10 a.m. local time, which makes the remote sensing results suitable substitutes for in-situ measurements. Therefore, for operational applications of HJ-1B IRS4, remote sensing can be a practical approach to monitoring nuclear plant thermal pollution around this time period.
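
    The LUT inversion step (channel radiance to temperature) reduces to monotone 1-D interpolation. A minimal sketch follows, with the standard Planck radiation constants and an assumed effective wavelength standing in for the band-integrated IRS4 response; the measured radiance values are invented:

    ```python
    import numpy as np

    # Tabulate channel radiance as a function of blackbody temperature,
    # then invert measured radiances by monotone 1-D interpolation.
    C1, C2 = 1.19104e8, 1.43878e4   # W um^4 m^-2 sr^-1, um K (Planck constants)
    lam = 11.0                      # assumed effective wavelength (um)

    temps = np.linspace(270.0, 320.0, 501)                 # LUT grid (K)
    radiance = C1 / (lam**5 * (np.exp(C2 / (lam * temps)) - 1.0))

    measured = np.array([8.5, 9.0, 9.6])                   # example radiances
    # Radiance grows monotonically with temperature, so np.interp applies.
    retrieved = np.interp(measured, radiance, temps)
    print(retrieved)
    ```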

  17. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  18. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
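
    The core of an MLS response surface is a locally weighted least-squares fit re-solved at every query point. A minimal sketch with a linear basis and Gaussian weights follows; the basis, bandwidth, and test response are arbitrary choices for this example, not the paper's formulation:

    ```python
    import numpy as np

    def mls_predict(xq, X, y, h=0.3):
        # Moving least squares with basis [1, x] and Gaussian weights
        # centred on the query point xq. X: (n, d) sites, y: (n,) responses.
        w = np.exp(-np.sum((X - xq)**2, axis=1) / h**2)   # local weights
        B = np.hstack([np.ones((len(X), 1)), X])          # linear basis
        A = B.T @ (w[:, None] * B)                        # weighted normal eqs
        c = np.linalg.solve(A, B.T @ (w * y))
        return c[0] + c[1:] @ xq

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(50, 2))
    y = np.sin(3 * X[:, 0]) + X[:, 1]**2                  # test response
    print(mls_predict(np.array([0.5, 0.5]), X, y))        # ~ sin(1.5) + 0.25
    ```

    Because the weighted system is re-solved per query, the resulting surface is smooth wherever the weight function is smooth, which is what gives MLS its derivative continuity.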

  19. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
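
    The interpolation problem studied here has a compact computational form: build the symmetric matrix A with entries phi(|x_i - x_j|) and solve A c = f for the coefficients. A minimal multiquadric sketch, with shape parameter and data chosen arbitrarily for illustration:

    ```python
    import numpy as np

    def multiquadric(r, c=1.0):
        # Multiquadric radial basis function; c is the shape parameter.
        return np.sqrt(r**2 + c**2)

    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, size=(30, 2))        # distinct centres
    f = np.cos(np.pi * x[:, 0]) * x[:, 1]           # data to interpolate

    # Interpolation matrix A_ij = phi(|x_i - x_j|); nonsingular for
    # distinct points (Micchelli's theorem).
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    coeff = np.linalg.solve(multiquadric(r), f)

    def interpolate(xq):
        rq = np.linalg.norm(x - xq, axis=1)
        return multiquadric(rq) @ coeff

    print(interpolate(np.array([0.2, -0.4])))       # ~ cos(0.2*pi) * (-0.4)
    ```

    The dissertation's contribution concerns exactly this linear system: bounding the spectrum of A on regular grids and preconditioning its conjugate gradient solution.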

  20. Surface Modification Concepts for Enhancement of the High-Temperature Corrosion Resistance of Gas Turbine Superalloys,

    DTIC Science & Technology

    1980-12-01

    now developed to the point where they could be considered as true engineering materials. Nickel-based alloys are used for turbine blading and...Introduction Implicit in the design of modern gas turbine engines is the premise that their aerofoil components, made of nickel- and cobalt-based...the deposit. Hot corrosion is a principal process of degradation of aerofoil surface integrity in gas turbine engines. 2.2 Mechanisms of Hot Corrosion

  1. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities are presented, acquired using four different high-precision angle encoder/interpolator/rotary table combinations. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  2. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphical user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
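
    The fill order described above (most indexed neighbours first, then progressively relaxed) can be sketched directly. This hypothetical re-implementation averages a scalar property for brevity, whereas the real program interpolates crystallographic orientations; the function and argument names are inventions of this sketch:

    ```python
    import numpy as np
    from scipy import ndimage

    def constrained_fill(values, indexed, bc, bc_threshold=0.3):
        # values: float array; indexed: boolean mask of indexed points;
        # bc: band contrast in [0, 1], masking out dark (low-BC) regions.
        values, indexed = values.copy(), indexed.copy()
        kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
        for need in range(8, 0, -1):            # 8 neighbours down to 1
            while True:
                n_idx = ndimage.convolve(indexed.astype(float), kernel,
                                         mode="constant")
                fill = (~indexed) & (bc >= bc_threshold) & (n_idx >= need)
                if not fill.any():
                    break
                # Mean of indexed neighbours; the real code uses the modal
                # crystallographic orientation instead of a plain average.
                nbr_sum = ndimage.convolve(np.where(indexed, values, 0.0),
                                           kernel, mode="constant")
                values[fill] = nbr_sum[fill] / n_idx[fill]
                indexed |= fill
        return values, indexed
    ```

    Lowering bc_threshold and re-running reproduces the "further iterations" step of the published algorithm.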

  3. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.

  4. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

    This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than by distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images of INEQR is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm in the form of circuits of the NNV interpolation for INEQR is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, experiments involving different classical (i.e., conventional, non-quantum) images and scaling ratios are simulated on a classical computer using MATLAB 2014b, demonstrating that the proposed interpolation method achieves higher resolution performance than the nearest neighbor and bilinear interpolation.

  5. Surface morphology of a modified ballistic deposition model.

    PubMed

    Banerjee, Kasturi; Shamanna, J; Ray, Subhankar

    2014-08-01

    The surface and bulk properties of a modified ballistic deposition model are investigated. The deposition rule interpolates between nearest- and next-nearest-neighbor ballistic deposition and the random deposition models. The stickiness of the depositing particle is controlled by a parameter and the type of interparticle force. Two such forces are considered: Coulomb and van der Waals type. The interface width shows three distinct growth regions before eventual saturation. The rate of growth depends more strongly on the stickiness parameter than on the type of interparticle force. However, the porosity of the deposits is strongly influenced by the interparticle force.
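
    A minimal sketch of a deposition rule that interpolates between ballistic and random deposition follows: with probability p (a stand-in "stickiness" parameter) a particle sticks at the nearest-neighbour ballistic height, otherwise it lands as in random deposition. The interparticle-force dependence and next-nearest-neighbour rule of the actual model are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, n_particles, p = 256, 50_000, 0.7   # lattice size, drops, stickiness
    h = np.zeros(L, dtype=int)             # column heights

    for _ in range(n_particles):
        i = rng.integers(L)
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        if rng.random() < p:
            # Ballistic: the particle sticks at the highest contact point.
            h[i] = max(h[i] + 1, left, right)
        else:
            # Random deposition: the particle simply lands on column i.
            h[i] += 1

    width = np.sqrt(np.mean((h - h.mean())**2))    # interface width
    print(f"interface width: {width:.2f}")
    ```

    Tracking the width as a function of deposited particles would expose the distinct growth regimes the abstract reports before saturation.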

  6. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Murman, S. M.; Berger, M. J.

    2003-01-01

    This paper presents a variety of novel uses of space-filling-curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
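
    The reordering that the single-pass algorithms rely on can be illustrated with a Morton (Z-order) key, one common SFC choice; the paper's particular curve is not assumed here, and bit-interleaving is shown purely as a sketch:

    ```python
    def morton_index(i, j, bits=16):
        # Interleave the bits of cell coordinates (i, j) to get the cell's
        # position along a Z-order space-filling curve. Sorting cells by
        # this key is the O(N log N) reordering step; partitioning then
        # reduces to slicing the sorted list into contiguous chunks.
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (2 * b + 1)
            key |= ((j >> b) & 1) << (2 * b)
        return key

    cells = [(3, 5), (0, 0), (7, 2), (4, 4)]
    ordered = sorted(cells, key=lambda c: morton_index(*c))
    print(ordered)
    ```

    Because cells adjacent along the curve tend to be adjacent in space, equal-length slices of the sorted list give compact partitions without any graph partitioner.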

  7. An Inverse Interpolation Method Utilizing In-Flight Strain Measurements for Determining Loads and Structural Response of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Shkarayev, S.; Krashantisa, R.; Tessler, A.

    2004-01-01

    An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.

  8. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five different interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps; (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediment in the central Vienna Basin.

  9. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation, and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ in VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation, and interpolation.

  10. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by an interpolation method first and then update the whole path constantly during the optimization. However, such an interpolation method can be risky in the high-dimensional space of large molecules: on the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high energy states during growth and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either two-dimensional or twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
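
    A minimal sketch of the growing step on a toy 2-D free energy surface (a double well; the real application uses high-dimensional PMFs sampled by simulation). Each step blends the downhill gradient direction with the unit vector toward the product; the blend weight and step size are assumptions of this example:

    ```python
    import numpy as np

    def grad_F(x):
        # Gradient of the toy surface F(x) = (x0^2 - 1)^2 + 5 * x1^2.
        return np.array([4.0 * x[0] * (x[0]**2 - 1.0), 10.0 * x[1]])

    reactant, product = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
    x, step, mix = reactant.copy(), 0.05, 0.7
    path = [x.copy()]
    for _ in range(1000):
        if np.linalg.norm(product - x) < step:
            break
        to_product = (product - x) / np.linalg.norm(product - x)
        g = grad_F(x)
        downhill = -g / (np.linalg.norm(g) + 1e-12)
        d = mix * to_product + (1.0 - mix) * downhill  # blended direction
        x = x + step * d / np.linalg.norm(d)
        path.append(x.copy())
    path.append(product.copy())
    print(len(path), "snapshots grown from reactant to product")
    ```

    With mix > 0.5 the product-pointing term guarantees net progress, while the gradient term steers the path around high-energy regions such as the barrier's shoulders.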

  11. Observed Trend in Surface Wind Speed Over the Conterminous USA and CMIP5 Simulations

    NASA Technical Reports Server (NTRS)

    Hashimoto, Hirofumi; Nemani, Ramakrishna R.

    2016-01-01

    No spatially continuous surface wind map has been available, even over the conterminous USA, due to the difficulty of spatially interpolating the wind field. As a result, reanalysis data were often used to analyze the statistics of the spatial pattern in surface wind speed. Unfortunately, no consistent trend in the wind field was found among the available reanalysis products, and that obstructed further analysis or projection of the spatial pattern of wind speed. In this study, we developed a methodology to interpolate the wind speed observed at weather stations using a random forest algorithm, and produced 1-km daily climate variables over the conterminous USA from 1979 to 2015. Validation against Ameriflux daily data gave an R2 of 0.59. Existing studies have found a negative trend over the eastern US, and our study shows the same result. However, our new dataset also reveals a significant increasing trend over the southwestern US, especially from April to June. The trend in the southwestern US represents a change or seasonal shift in the North American Monsoon. Global analysis of CMIP5 data projects a decreasing trend in mid-latitudes and an increasing trend in tropical land regions. Most likely because of the low resolution of the GCMs, the CMIP5 data failed to simulate the increasing trend in the southwestern US, even though a poleward shift of the anticyclone aiding the North American Monsoon was qualitatively predicted.
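
    A hedged sketch of the interpolation step: fit a random forest that maps station location and covariates to observed wind speed, then evaluate it at unobserved grid cells. The predictors, the elevation covariate, and the synthetic "observations" below are stand-ins; the original work's covariate set is not reproduced:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_stations = 300
    lon = rng.uniform(-125.0, -67.0, n_stations)
    lat = rng.uniform(25.0, 49.0, n_stations)
    elev = rng.uniform(0.0, 3000.0, n_stations)        # assumed covariate
    wind = (3.0 + 0.002 * elev + 0.05 * (lat - 37.0)
            + rng.normal(0.0, 0.5, n_stations))        # synthetic target

    X = np.column_stack([lon, lat, elev])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, wind)

    # Predict at one grid point; in practice this runs for every 1-km cell.
    print(model.predict([[-105.0, 40.0, 1600.0]]))
    ```

    Unlike kriging, the forest imposes no stationarity assumption on the wind field, which is presumably why it copes better with terrain-driven variability.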

  12. Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks

    NASA Astrophysics Data System (ADS)

    Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas

    2017-03-01

    PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and inexpensiveness. The transverse processes as skeletal landmarks are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes, which we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. We used ground truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization depicting the 3D deformation of the patient's spine when compared to ground truth CT.

  13. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  14. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
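
    The role of the recursive prefilter is easy to demonstrate with off-the-shelf tools: scipy's spline_filter computes the B-spline coefficients once, after which repeated sub-voxel lookups with map_coordinates (prefilter disabled) genuinely interpolate rather than smooth the volume. This sketch is not the paper's optimized filter, only the standard cubic B-spline machinery it builds on; the test volume and shift are arbitrary:

    ```python
    import numpy as np
    from scipy import ndimage

    vol = np.random.default_rng(0).normal(size=(32, 32, 32))
    # Recursive prefilter: computed once, reused for every sub-voxel lookup.
    coeffs = ndimage.spline_filter(vol, order=3)

    # Sample the volume on a grid shifted by 0.25 voxel along z.
    z, y, x = np.mgrid[0:32, 0:32, 0:32].astype(float)
    shifted = ndimage.map_coordinates(coeffs, [z + 0.25, y, x],
                                      order=3, prefilter=False,
                                      mode="nearest")
    print(shifted.shape)
    ```

    Skipping the prefilter (order-3 sampling on the raw voxels) turns the lookup into a smoothing operation, which is precisely the kind of bias the abstract sets out to remove.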

  15. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data are a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution for generating wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are down-sampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. A penalty point system based on the coefficient of variation of the root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under a Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
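
    The evaluation loop (down-sample, re-interpolate, compare against the withheld hours) is simple to reproduce. A minimal sketch with a synthetic diurnal series and scipy's PCHIP implementation, the technique the study found best overall; the series and noise level are invented:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    t = np.arange(0, 24 * 30)                    # 30 days of hourly steps
    wind = (5.0 + 2.0 * np.sin(2 * np.pi * t / 24.0)
            + np.random.default_rng(0).normal(0.0, 0.4, t.size))

    t6 = t[::6]                                  # keep 0th, 6th, 12th, 18th hours
    rebuilt = PchipInterpolator(t6, wind[::6])(t)

    rmse = np.sqrt(np.mean((rebuilt - wind)**2))
    print(f"RMSE of temporal downscaling: {rmse:.2f} m/s")
    ```

    Swapping PchipInterpolator for, say, CubicSpline or plain np.interp reproduces the technique-to-technique spread that the penalty point system ranks.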

  16. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyperspectral Data

    DTIC Science & Technology

    2012-09-01

    Feasibility (MT Modeling) a. Continuum of mixture distributions interpolated b. Mixture infeasibilities calculated for each pixel c. Valid detections...Visible/Infrared Imaging Spectrometer BRDF Bidirectional Reflectance Distribution Function CASI Compact Airborne Spectrographic Imager CCD...filtering (MTMF), and was designed by Healey and Slater (1999) to use "a physical model to generate the set of sensor spectra for a target that will be

  17. Data Descriptor: TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    Treesearch

    John T. Abatzoglou; Solomon Z. Dobrowski; Sean A. Parks; Katherine C. Hegewisch

    2018-01-01

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958–2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from...

  18. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  19. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of geometric image transformation, has been widely studied and applied in classical image processing; a quantum version, however, has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using the multiple Control-NOT operation, the special adding-one operation, the reverse parallel adder, parallel subtractor, multiplier, and division operations. Finally, the complexity analysis of the quantum network circuit based on the basic quantum gates is deduced. Simulation results show that the scaled-up image using bilinear interpolation is clearer and less distorted than with nearest-neighbor interpolation.

  20. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on flexible representation of quantum images (FRQI) and novel enhanced quantum representation (NEQR). Firstly, the feasibility of the classical image nearest-neighbor interpolation for quantum images of FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of the nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, quantum circuit of the nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in their low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of a halving operation. Simulation results show the effectiveness of the proposed circuits.

  1. Pycortex: an interactive surface visualizer for fMRI

    PubMed Central

    Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.

    2015-01-01

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate and most common visualization techniques rely on unnecessary interpolation which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using tools available currently. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666

  2. Micromorphological characterization of zinc/silver particle composite coatings

    PubMed Central

    Méndez, Alia; Reyes, Yolanda; Trejo, Gabriel; Stępień, Krzysztof

    2015-01-01

    ABSTRACT The aim of this study was to evaluate the three-dimensional (3D) surface micromorphology of zinc/silver particle (Zn/AgPs) composite coatings with antibacterial activity prepared using an electrodeposition technique. These 3D nanostructures were investigated over square areas of 5 μm × 5 μm by atomic force microscopy (AFM), fractal, and wavelet analysis. The fractal analysis of 3D surface roughness revealed that the (Zn/AgPs) composite coatings have fractal geometry. A triangulation method based on linear interpolation was applied to the AFM data in order to characterise the surfaces topographically (in amplitude, spatial distribution, and pattern of surface characteristics). The surface fractal dimension Df, as well as the height value distribution, was determined for the 3D nanostructure surfaces. Microsc. Res. Tech. 78:1082–1089, 2015. © 2015 The Authors, published by Wiley Periodicals, Inc. PMID:26500164

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing; the interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both the nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general differ from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
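
    How the kernel enters a MIP can be sketched with standard tools: resample the volume along a rotated ray direction with different spline orders, then take per-ray maxima. The synthetic tube "vessel" and the width proxy below are invented for illustration and do not reproduce the study's ray tracer:

    ```python
    import numpy as np
    from scipy import ndimage

    # Bright tube along z, standing in for a vessel phantom.
    z, y, x = np.mgrid[0:64, 0:64, 0:64].astype(float)
    vol = np.exp(-((x - 32)**2 + (y - 32)**2) / 18.0)

    angle = 30.0   # rotate the volume, then project along an axis
    for order in (0, 1, 3):     # 0: nearest, 1: linear, 3: cubic spline
        rot = ndimage.rotate(vol, angle, axes=(1, 2), order=order,
                             reshape=False)
        mip = rot.max(axis=2)                    # per-ray maxima
        row = mip[32]
        width = np.sum(row > 0.5 * row.max())    # crude apparent width
        print(f"order {order}: apparent width ≈ {width} px")
    ```

    Running this shows the kernel-dependent width transfer the abstract discusses: nearest-neighbour sampling tends to fatten the apparent vessel relative to the smoother kernels.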

  4. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion struggles when there are large arrival-time differences between adjacent sites, which is the case we are most concerned with; we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.

  5. Scale issues in soil hydrology related to measurement and simulation: A case study in Colorado

    USDA-ARS?s Scientific Manuscript database

    State variables, such as soil water content (SWC), are typically measured or inferred at very small scales while being simulated at larger scales relevant to spatial management or hillslope areas. Thus there is an implicit spatial disparity that is often ignored. Surface runoff, on the other hand, ...

  6. Behind a High School Literacy Policy: The Surfacing of a Hidden Curriculum.

    ERIC Educational Resources Information Center

    Simon, Roger I.; Willinsky, John

    1980-01-01

    Argues that the articulation of school language policies deserves careful attention as they implicitly formulate a hidden curriculum with reference to the relation between education and society. Discusses issues in context of policy developed in an urban high school in Ontario. Considers cultural values, social convention, and social control.…

  7. Scaffolded Inquiry-Based Instruction with Technology: A Signature Pedagogy for STEM Education

    ERIC Educational Resources Information Center

    Crippen, Kent J.; Archambault, Leanna

    2012-01-01

    Inquiry-based instruction has become a hallmark of science education and increasingly of integrated content areas, including science, technology, engineering, and mathematics (STEM) education. Because inquiry-based instruction very clearly contains surface, deep, and implicit structures as well as engages students to think and act like scientists,…

  8. The P600 in Implicit Artificial Grammar Learning.

    PubMed

    Silva, Susana; Folia, Vasiliki; Hagoort, Peter; Petersson, Karl Magnus

    2017-01-01

    The suitability of the artificial grammar learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. After exposure, a typical, centroparietal P600 effect was elicited by grammatical violations and not by unfamiliar subsequences, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere-exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test. Copyright © 2016 Cognitive Science Society, Inc.

  9. A three-dimensional application with the numerical grid generation code: EAGLE (utilizing an externally generated surface)

    NASA Technical Reports Server (NTRS)

    Houston, Johnny L.

    1990-01-01

    Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) is a multiblock grid generation and steady-state flow solver system. This system combines a boundary-conforming surface generation scheme, a composite block structure grid generation scheme, and a multiblock implicit Euler flow solver algorithm. The three codes are intended to be used sequentially, from the definition of the configuration under study to the flow solution about the configuration. EAGLE was specifically designed to aid in the analysis of both freestream and interference flow field configurations. These configurations can be comprised of single or multiple bodies ranging from simple axisymmetric airframes to complex aircraft shapes with external weapons. Each body can be arbitrarily shaped, with or without multiple lifting surfaces. Program EAGLE is written to compile and execute efficiently on any CRAY machine, with or without Solid State Disk (SSD) devices. Also, the code uses namelist inputs, which are supported by all CRAY machines using the FORTRAN compiler CFT77. The use of namelist inputs makes it easier for the user to understand the inputs and to operate Program EAGLE. Recently, the code was modified to operate on other computers, especially the Sun SPARC4 workstation. Several two-dimensional grid configurations were completely and successfully developed using EAGLE. Currently, EAGLE is being used for three-dimensional grid applications.

  10. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.

  11. Longitudinal curvature and displacement speed effects on incompressible laminar boundary layers.

    NASA Technical Reports Server (NTRS)

    Werle, M. J.; Wornom, S. F.

    1972-01-01

    The title problem is considered for the case of flow past a circular cylinder placed normal to a uniform mainstream with Reynolds numbers from 40 to 200. Implicit finite difference numerical solutions are obtained for a set of boundary-layer equations that account for the second order effects associated with surface curvature and displacement speed. It was found that both of these contributors have a significant influence on the internal structure of the viscous region and that an accurate estimate of the surface pressure distribution is essential for estimating the surface shear stress.

  12. Calculation of Protein Heat Capacity from Replica-Exchange Molecular Dynamics Simulations with Different Implicit Solvent Models

    DTIC Science & Technology

    2008-10-30

    rigorous Poisson-based methods generally apply a Lee-Richards molecular surface. This surface is considered the de facto description for continuum...definition and calculation of the Born radii. To evaluate the Born radii, two approximations are invoked. The first is the Coulomb field approximation (CFA)...energy term, and depending on the particular GB formulation, higher-order non-Coulomb correction terms may be added to the Born radii to account for the

  13. Improving the Efficiency of Non-equilibrium Sampling in the Aqueous Environment via Implicit-Solvent Simulations.

    PubMed

    Liu, Hui; Chen, Fu; Sun, Huiyong; Li, Dan; Hou, Tingjun

    2017-04-11

    By means of estimators based on non-equilibrium work, equilibrium free energy differences or potentials of mean force (PMFs) of a system of interest can be computed from biased molecular dynamics (MD) simulations. The approach, however, is often plagued by slow conformational sampling and poor convergence, especially when the solvent effects are taken into account. Here, as a possible way to alleviate the problem, several widely used implicit-solvent models, which are derived from the analytic generalized Born (GB) equation and implemented in the AMBER suite of programs, were employed in free energy calculations based on non-equilibrium work and evaluated for their abilities to emulate explicit water. As a test case, pulling MD simulations were carried out on an alanine polypeptide with different solvent models and protocols, followed by comparisons of the reconstructed PMF profiles along the unfolding coordinate. The results show that when employing the non-equilibrium work method, sampling with an implicit-solvent model is several times faster and, more importantly, converges more rapidly than that with explicit water due to reduction of dissipation. Among the assessed GB models, the Neck variants outperform the OBC and HCT variants in terms of accuracy, whereas their computational costs are comparable. In addition, for the best-performing models, the impact of the solvent-accessible surface area (SASA) dependent nonpolar solvation term was also examined. The present study highlights the advantages of implicit-solvent models for non-equilibrium sampling.
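
    A minimal sketch of the kind of non-equilibrium work estimator discussed above, assuming the Jarzynski equality as the estimator; the synthetic work values, units and parameters below are illustrative stand-ins, not the paper's data. A log-sum-exp formulation keeps the exponential average numerically stable, and the comment notes why dissipation (the spread of W) controls convergence.

      import numpy as np

      def jarzynski_free_energy(work, kT):
          """Free-energy estimate from non-equilibrium work samples via the
          Jarzynski equality  exp(-dF/kT) = <exp(-W/kT)>, computed stably."""
          w = np.asarray(work, dtype=float) / kT
          log_mean = -w.min() + np.log(np.mean(np.exp(-(w - w.min()))))
          return -kT * log_mean

      # Hypothetical work values (kJ/mol) from pulling trajectories
      rng = np.random.default_rng(0)
      kT = 2.494                                   # kJ/mol at ~300 K
      W = rng.normal(loc=25.0, scale=4.0, size=200)
      # Convergence is slow when the spread (dissipation) is large, which is
      # exactly what implicit-solvent sampling reduces in the study above.
      print(f"Jarzynski estimate: {jarzynski_free_energy(W, kT):.2f} kJ/mol")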

  14. Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.

    PubMed

    Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf

    2013-05-28

    Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations with an explicit solvent of the protein-ligand complex to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different both in absolute and relative terms, showing that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol(-1) after 2 ps simulations. Minimisations show even slower convergence and there are strong indications that the energies obtained from minimised structures are different from those obtained by MD.

  15. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline and linear interpolation, on the other hand, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat. PMID:27382478
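
    A minimal numpy sketch of the underlying idea, interpolating a baseline through non-uniformly spaced isoelectric knots and subtracting it; the knot spacing and the synthetic 0.1 Hz drift below are stand-ins for the detector output described in the authors' previous Letter, and this plain-linear version omits their piecewise segmentation of the interpolation interval.

      import numpy as np

      def remove_baseline(sig, knot_idx):
          """Piecewise-linear baseline estimate through isoelectric knots.

          sig      : 1-D signal array (ECG)
          knot_idx : increasing indices of detected isoelectric points
          Returns (corrected signal, estimated baseline)."""
          t = np.arange(len(sig))
          baseline = np.interp(t, knot_idx, sig[knot_idx])
          return sig - baseline, baseline

      # Synthetic drift test: 1 mVp-p sinusoid at 0.1 Hz sampled at 360 Hz
      fs = 360.0
      t = np.arange(0.0, 10.0, 1.0 / fs)
      drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)        # mV
      knots = np.arange(0, len(t), int(0.27 * fs))     # ~3 knots per beat at ~72 bpm
      corrected, baseline = remove_baseline(drift, knots)
      print(f"residual RMS: {np.sqrt(np.mean(corrected**2)) * 1e3:.2f} uV")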

  16. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
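
    The integral-conserving idea can be sketched as follows, with scipy's monotone Hermite interpolant (PCHIP) standing in for the paper's parametrized Hermitian curve, so the user-controlled overshoot parameter is not reproduced: interpolate the cumulative integral and difference it on the new bin edges.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      def conservative_rebin(edges_old, values, edges_new):
          """Re-bin histogrammed data while conserving the integral.

          The cumulative integral is interpolated with a monotone Hermite
          curve (PCHIP), which suppresses the overshoot/undershoot of plain
          cubic interpolation, and is then differenced on the new edges."""
          cum = np.concatenate([[0.0], np.cumsum(values * np.diff(edges_old))])
          F = PchipInterpolator(edges_old, cum)
          return np.diff(F(edges_new)) / np.diff(edges_new)

      # Coarse histogram re-binned to a finer grid; total integral is preserved
      old_edges = np.linspace(0.0, 10.0, 6)
      vals = np.array([1.0, 4.0, 2.0, 5.0, 3.0])
      new_edges = np.linspace(0.0, 10.0, 21)
      new_vals = conservative_rebin(old_edges, vals, new_edges)
      print(np.sum(vals * np.diff(old_edges)), np.sum(new_vals * np.diff(new_edges)))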

  17. The 3D geological model of the 1963 Vajont rockslide, reconstructed with implicit surface methods

    NASA Astrophysics Data System (ADS)

    Bistacchi, Andrea; Massironi, Matteo; Francese, Roberto; Giorgi, Massimo; Taller, Claudio

    2015-04-01

    The Vajont rockslide has been the object of several studies because of its catastrophic consequences and of its particular evolution. Several qualitative or quantitative models have been presented in the last 50 years, but a complete explanation of all the relevant geological and mechanical processes remains elusive. In order to better understand the mechanics and dynamics of the 1963 event, we have reconstructed the first 3D geological model of the rockslide, which allowed us to accurately investigate the rockslide structure and kinematics. The input data for the model consisted of: pre- and post-rockslide geological maps, pre- and post-rockslide orthophotos, pre- and post-rockslide digital elevation models, structural data, boreholes, and geophysical data (2D and 3D seismics and resistivity). All these data have been integrated in a 3D geological model implemented in Gocad®, using the implicit surface modelling method. Results of the 3D geological model include the depth and geometry of the sliding surface, the volume of the two lobes of the rockslide accumulation, the kinematics of the rockslide in terms of the vector field of finite displacement, and high quality meshes useful for mechanical and hydrogeological simulations. The latter can include information about the stratigraphy and internal structure of the rock masses and allow tracing the displacement of different material points in the rockslide from the pre-1963-failure to the post-rockslide state. As a general geological conclusion, we may say that the 3D model allowed us to recognize very effectively a sliding surface whose non-planar geometry is affected by the interference pattern of two regional-scale fold systems. The rockslide is partitioned into two distinct and internally continuous rock masses with distinct kinematics, which were characterised by very limited internal deformation during the slide. The continuity of these two large blocks points to a very localized deformation, occurring along a thin, continuous and weak cataclastic horizon. Finally, the chosen modelling strategy, based on both traditional "explicit" and implicit techniques, was found to be very effective for reconstructing complex folded and faulted geological structures, and could also be applied to other geological environments.

  18. Analysis of the ability of large-scale reanalysis data to define Siberian fire danger in preparation for future fire prediction

    NASA Astrophysics Data System (ADS)

    Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data for defining fire weather danger and fire regimes, so that large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be confidently used to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. The FWI requires local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-to-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.

  19. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...

  1. Surface temperature dataset for North America obtained by application of optimal interpolation algorithm merging tree-ring chronologies and climate model output

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2017-02-01

    A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid cell, so the merging could proceed with a time-varying gain matrix. This merging method (MM) was tested in two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than for the tree-ring reconstructions and about 0.5 °C lower than for the model simulation. Because of internal variability and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability became more reliable (e.g., the 1940s-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromise spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transitions of the Pacific decadal oscillation and the Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
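
    A toy sketch of the optimal-interpolation update used in such merging schemes, assuming the standard gain-matrix form xa = xb + K (y - H xb) with K = B H^T (H B H^T + R)^-1; the covariances and values below are illustrative, not those estimated from the CCSM4 windowed samples.

      import numpy as np

      def oi_update(xb, B, y, H, R):
          """One optimal-interpolation update xa = xb + K (y - H xb).

          xb : background field (model simulation), shape (n,)
          B  : background error covariance, shape (n, n)
          y  : observations (tree-ring reconstruction), shape (m,)
          H  : observation operator, shape (m, n)
          R  : observation error covariance, shape (m, m)"""
          S = H @ B @ H.T + R                   # innovation covariance
          w = np.linalg.solve(S, y - H @ xb)    # S^-1 (y - H xb)
          return xb + B @ H.T @ w

      # Toy example: five grid cells, observations at cells 1 and 3
      n = 5
      xb = np.full(n, 14.0)                     # background temperature, degC
      B = 0.25 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
      H = np.zeros((2, n))
      H[0, 1] = H[1, 3] = 1.0
      R = 0.16 * np.eye(2)                      # observation error variance
      y = np.array([14.8, 13.5])
      print(oi_update(xb, B, y, H, R))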

  2. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    NASA Astrophysics Data System (ADS)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined, with four, on average, giving values within 4 kJ/mol (chemical accuracy) for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates, and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost relative to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff, and classical simulations of kinetic energy release should therefore be viewed with caution.

  3. The OceanFlux Greenhouse Gases methodology for deriving a sea surface climatology of CO2 fugacity in support of air-sea gas flux studies

    NASA Astrophysics Data System (ADS)

    Goddijn-Murphy, L. M.; Woolf, D. K.; Land, P. E.; Shutler, J. D.; Donlon, C.

    2015-07-01

    Climatologies, or long-term averages, of essential climate variables are useful for evaluating models and providing a baseline for studying anomalies. The Surface Ocean CO2 Atlas (SOCAT) has made millions of global underway sea surface measurements of CO2 publicly available, all in a uniform format and presented as fugacity, fCO2. As fCO2 is highly sensitive to temperature, the measurements are only valid for the instantaneous sea surface temperature (SST) that is measured concurrently with the in-water CO2 measurement. To create a climatology of fCO2 data suitable for calculating air-sea CO2 fluxes, it is therefore desirable to calculate fCO2 valid for a more consistent and averaged SST. This paper presents the OceanFlux Greenhouse Gases methodology for creating such a climatology. We recomputed SOCAT's fCO2 values for their respective measurement month and year using monthly composite SST data on a 1° × 1° grid from satellite Earth observation and then extrapolated the resulting fCO2 values to reference year 2010. The data were then spatially interpolated onto a 1° × 1° grid of the global oceans to produce 12 monthly fCO2 distributions for 2010, including the prediction errors of fCO2 produced by the spatial interpolation technique. The partial pressure of CO2 (pCO2) is also provided for those who prefer to use pCO2. The CO2 concentration difference between ocean and atmosphere is the thermodynamic driving force of the air-sea CO2 flux, and hence the presented fCO2 distributions can be used in air-sea gas flux calculations together with climatologies of other climate variables.
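
    The recomputation of fCO2 to a common SST can be sketched with the widely used empirical temperature sensitivity of surface-water CO2 of about 4.23% per °C (Takahashi et al., 1993); whether this exact coefficient is the one applied in the OceanFlux processing is an assumption here, and the function name is hypothetical.

      import numpy as np

      def fco2_at_temperature(fco2_insitu, sst_insitu, sst_target):
          """Adjust fCO2 measured at the in situ SST to a target SST,
          using the empirical isochemical sensitivity
          d(ln fCO2)/dT ~ 0.0423 per degC (Takahashi et al., 1993).
          Temperatures are in degrees Celsius."""
          return fco2_insitu * np.exp(0.0423 * (sst_target - sst_insitu))

      # A measurement at 18.2 degC re-expressed at a climatological 17.5 degC
      print(fco2_at_temperature(365.0, 18.2, 17.5))   # ~354 uatm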

  4. Probabilistic reconstruction of GPS vertical ground motion and comparison with GIA models

    NASA Astrophysics Data System (ADS)

    Husson, Laurent; Bodin, Thomas; Choblet, Gael; Kreemer, Corné

    2017-04-01

    The vertical position time-series of GPS stations have become long enough for many parts of the world to infer modern rates of vertical ground motion. We use the worldwide compilation of GPS trend velocities of the Nevada Geodetic Laboratory. Those rates are inferred by applying the MIDAS algorithm (Blewitt et al., 2016) to time-series obtained from publicly available data from permanent stations. Because MIDAS filters out seasonality and discontinuities, regardless of their causes, it gives robust long-term rates of vertical ground motion (except where there is significant postseismic deformation). As the stations are unevenly distributed, and because data errors are also highly variable, sometimes to an unknown degree, we use a Bayesian inference method to reconstruct 2D maps of vertical ground motion. Our models are based on a Voronoi tessellation and self-adapt to the spatially variable level of information provided by the data. Instead of providing a unique interpolated surface, each point of the reconstructed surface is defined through a probability density function. We apply our method to a series of vast regions covering entire continents. Not surprisingly, the reconstructed surface at a long wavelength is dominated by the GIA. This result can be exploited to evaluate whether forward models of GIA reproduce geodetic rates within the uncertainties derived from our interpolation, not only at high latitudes where postglacial rebound is fast, but also in more temperate latitudes where, for instance, such rates may compete with modern sea level rise. At shorter wavelengths, the reconstructed surface of vertical ground motion features a variety of identifiable patterns, whose geometries and rates can be mapped. Examples are transient dynamic topography over the convecting mantle, actively deforming domains (mountain belts and active margins), volcanic areas, or anthropogenic contributions.

  5. Evaporation variability of Nam Co Lake in the Tibetan Plateau and its role in recent rapid lake expansion

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Szilagyi, Jozsef; Niu, Guo-Yue; Zhang, Yinsheng; Zhang, Teng; Wang, Binbin; Wu, Yanhong

    2016-06-01

    Previous studies have shown that the majority of the lakes in the Tibetan Plateau (TP) have expanded rapidly since the late 1990s. However, the causes are still not well known. For Nam Co, a closed lake with no outflow, evaporation (EL) over the lake surface is the only way water may leave the lake. Therefore, quantifying EL is key to investigating the mechanism of lake expansion in the TP. EL can be quantified by Penman- and/or bulk-transfer-type models, requiring only net radiation, temperature, humidity and wind speed as inputs. However, interpolation of wind speed data may carry great uncertainty owing to extremely sparse ground meteorological observations, the highly heterogeneous landscape and lake-land breeze effects. Here, evaporation of Nam Co Lake was investigated over the 1979-2012 period at a monthly time-scale using the complementary relationship lake evaporation (CRLE) model, which does not require wind speed data. Validation against in-situ observations of E601B pan evaporation rates at the shore of Nam Co Lake, as well as EL measured over an adjacent small lake using the eddy covariance technique, suggests that CRLE is capable of simulating EL well, since it implicitly considers wind effects on evaporation via its vapor transfer coefficient. The multi-year average of annual evaporation of Nam Co Lake is 635 mm. From 1979 to 2012, annual evaporation of Nam Co Lake showed a very slight decreasing trend. However, a more significant decrease in EL occurred during 1998-2008, at a rate of -12 mm yr-1. Based on water-level readings, this significant decrease in lake evaporation was found to be responsible for approximately 4% of the reported rapid water level increase and areal expansion of Nam Co Lake during the same period.

  6. Segmentation of real-time three-dimensional ultrasound for quantification of ventricular function: a clinical study on right and left ventricles.

    PubMed

    Angelini, Elsa D; Homma, Shunichi; Pearson, Gregory; Holmes, Jeffrey W; Laine, Andrew F

    2005-09-01

    Among screening modalities, echocardiography is the fastest, least expensive and least invasive method for imaging the heart. A new generation of three-dimensional (3-D) ultrasound (US) technology has been developed with real-time 3-D (RT3-D) matrix phased-array transducers. These transducers allow interactive 3-D visualization of cardiac anatomy and fast ventricular volume estimation without the tomographic interpolation required with earlier 3-D US acquisition systems. However, real-time acquisition speed comes at the cost of decreased spatial resolution, leading to echocardiographic data with poor definition of anatomical structures and high levels of speckle noise. The poor quality of the US signal has limited the acceptance of RT3-D US technology in clinical practice, despite the wealth of information acquired by this system, far greater than with any other existing echocardiography screening modality. We present, in this work, a clinical study for segmentation of right and left ventricular volumes using RT3-D US. A preprocessing of the volumetric data sets was performed using spatiotemporal brushlet denoising, as presented in previous articles. Two deformable-model segmentation methods were implemented: in 2-D using a parametric formulation, and in 3-D using an implicit formulation with a level set implementation, for extraction of endocardial surfaces on denoised RT3-D US data. A complete and rigorous validation of the segmentation methods was carried out for quantification of left and right ventricular volumes and ejection fraction, including comparison of measurements with cardiac magnetic resonance imaging as the reference. Results for volume and ejection fraction measurements show good performance of quantification of cardiac function on RT3-D data compared with magnetic resonance imaging, with semiautomatic segmentation methods performing better than manual tracing on the US data.

  7. A Non-hydrostatic Atmospheric Model for Global High-resolution Simulation

    NASA Astrophysics Data System (ADS)

    Peng, X.; Li, X.

    2017-12-01

    A three-dimensional non-hydrostatic atmosphere model, GRAPES_YY, is developed on the spherical Yin-Yang grid system to enable global high-resolution weather simulation and forecasting at the CAMS/CMA. The quasi-uniform grid makes the computation highly efficient and free of the pole problem. A full representation of the three-dimensional Coriolis force is included in the governing equations. Under the constraint of third-order boundary interpolation, the model is integrated with the semi-implicit semi-Lagrangian method using the same code on both zones. A static halo region ensures the computation of cross-boundary transport and the updating of Dirichlet-type boundary conditions when the elliptic equations are solved with the Schwarz method. A series of dynamical test cases, including solid-body advection, balanced geostrophic flow, zonal flow over an isolated mountain, and the development of the Rossby-Haurwitz wave and a baroclinic wave, was carried out, and excellent computational stability and accuracy of the dynamical core were confirmed. After implementation of the physical processes (long- and short-wave radiation, cumulus convection, microphysical transformation of water substances, and turbulent processes in the planetary boundary layer, including a surface-layer vertical flux parameterization), a long-term run of the model was performed under an idealized aqua-planet configuration to test the model physics and the model's ability in both short-term and long-term integrations. In the aqua-planet experiment, the model shows an Earth-like circulation structure. The time-zonal mean temperature, wind components and humidity illustrate a reasonable subtropical westerly jet, meridional three-cell circulation, tropical convection and thermodynamic structures. The prescribed SST and solar insolation, being symmetric about the equator, enhance the ITCZ and concentrate precipitation in the tropics. Additional analysis and tuning of the model are ongoing, and preliminary results have demonstrated the potential of applying the model at high resolution to global weather prediction and even seasonal climate projection.

  8. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
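
    A minimal scipy sketch of generating intermediary slices with natural cubic splines: each boundary point is tracked along the axial direction, assuming the contours have already been resampled so that point k corresponds across slices (a correspondence step the paper handles as part of its shape-based pipeline); the toy vessel below is illustrative.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def intermediary_contours(contours, z_slices, z_new):
          """Synthesize intermediary 2-D contours between measured slices.

          contours : array (n_slices, n_points, 2) with matched points
          z_slices : axial positions of the measured slices
          z_new    : axial positions at which to synthesize contours"""
          spline = CubicSpline(z_slices, contours, axis=0, bc_type="natural")
          return spline(z_new)

      # Toy vessel: circular contours whose radius varies along the axis
      z = np.array([0.0, 1.0, 2.0, 3.0])
      theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
      radii = np.array([2.0, 2.4, 1.8, 2.1])
      contours = np.stack([np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
                           for r in radii])
      mid = intermediary_contours(contours, z, np.array([0.5, 1.5, 2.5]))
      print(mid.shape)   # (3, 64, 2)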

  9. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images.

  10. Analysis of the numerical differentiation formulas of functions with large gradients

    NASA Astrophysics Data System (ADS)

    Tikhovskaya, S. V.

    2017-10-01

    The solution of a singularly perturbed problem corresponds to a function with large gradients, so the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, like the piecewise-uniform Shishkin mesh, or construct, on a uniform mesh, an interpolation formula that is exact on the boundary layer components. In this paper, numerical differentiation formulas for functions with large gradients, based on the interpolation formulas on a uniform mesh proposed by A.I. Zadorin, are investigated. The formulas for the first and second derivatives of a function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.

  11. Understanding the Role of Reservoir Size on Probable Maximum Precipitation

    NASA Astrophysics Data System (ADS)

    Woldemichael, A. T.; Hossain, F.

    2011-12-01

    This study addresses the question 'Does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was the notion that the stationarity assumption implicit in the PMP for dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for the use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event was selected as the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were selected accordingly. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity in the model was raised to 100% from the ground up to the 500 mb level. The resulting model-based maximum 72-hr precipitation values were named extreme precipitation (EP), as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity implicit in the traditional estimation of PMP can be rendered invalid in large part by the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also indicate the formation of mesoscale convective systems (MCS) in the vicinity of dams/reservoirs that may have been triggered explicitly by their presence. The significance of this finding is that water resources managers need to consider the post-dam impacts on the water cycle and local climate triggered by the reservoir itself and the associated land use change if efficient water resources management is desired. Future work will include incorporation of the anthropogenic changes that occur as a result of the presence of dams/reservoirs, in the form of irrigation, urbanization and downstream wetland reduction. Similar hypothesis testing procedures will be applied to understand the combined effects of reservoir size variation and anthropogenic changes on extreme precipitation patterns.

  12. Construction of a 3-arcsecond digital elevation model for the Gulf of Maine

    USGS Publications Warehouse

    Twomey, Erin R.; Signell, Richard P.

    2013-01-01

    A system-wide description of the seafloor topography is a basic requirement for most coastal oceanographic studies. The necessary detail of the topography obviously varies with application, but for many uses, a nominal resolution of roughly 100 m is sufficient. Creating a digital bathymetric grid with this level of resolution can be a complex procedure due to a multiplicity of data sources, data coverages, datums and interpolation procedures. This report documents the procedures used to construct a 3-arcsecond (approximately 90-meter grid cell size) digital elevation model for the Gulf of Maine (71°30' to 63° W, 39°30' to 46° N). We obtained elevation and bathymetric data from a variety of American and Canadian sources, converted all data to the North American Datum of 1983 for horizontal coordinates and the North American Vertical Datum of 1988 for vertical coordinates, used a combination of automatic and manual techniques for quality control, and interpolated gaps using a surface-fitting routine.

  13. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.

  14. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  15. A comparison of spatial interpolation methods for soil temperature over a complex topographical region

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin

    2016-08-01

    Soil temperature variability data provide valuable information on understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period of 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that larger seasonal changes of soil temperature were from autumn to winter over the region. The northern and eastern areas with hilly and low-middle mountains experienced larger seasonal changes.
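
    For reference, the three accuracy indicators used above can be computed as follows, assuming the standard definitions (ME is the Nash-Sutcliffe-style modelling efficiency); the sample values are illustrative.

      import numpy as np

      def accuracy_metrics(obs, pred):
          """RMSE, modelling efficiency (ME) and coefficient of residual
          mass (CRM), as commonly defined for interpolation validation:
            ME  = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)
            CRM = (sum(obs) - sum(pred)) / sum(obs)"""
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          rmse = np.sqrt(np.mean((obs - pred) ** 2))
          me = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
          crm = (obs.sum() - pred.sum()) / obs.sum()
          return rmse, me, crm

      obs = np.array([12.1, 14.3, 15.0, 13.2, 11.8])    # measured, degC
      pred = np.array([12.4, 14.0, 14.6, 13.5, 12.1])   # interpolated, degC
      print(accuracy_metrics(obs, pred))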

  16. Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Christiansen, Ove

    2018-06-01

    We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian process regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results on a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low level method and the target method, which in this study is a variant of explicitly correlated coupled cluster singles and doubles with perturbative triples correction, CCSD(F12*)(T). Overall convergence is achieved when both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
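
    As an illustration of the delta-learning idea described above (take the cheap level exactly and learn only the low-to-target correction), here is a minimal scikit-learn sketch on a one-dimensional toy "geometry"; the two analytic surfaces and all parameter values are hypothetical stand-ins, not the methods or data of the paper.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Hypothetical stand-ins for two levels of theory on a 1-D "geometry" x
      low = lambda x: 0.5 * (x - 1.0) ** 2                 # cheap level
      high = lambda x: 0.5 * (x - 1.2) ** 2 + 0.05 * x     # target level

      # Train the GPR on the *difference* between levels at a few geometries
      X = np.linspace(-1.0, 3.0, 6).reshape(-1, 1)
      dE = high(X).ravel() - low(X).ravel()
      gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                     normalize_y=True).fit(X, dE)

      # Corrected surface: cheap level plus the interpolated correction
      x = np.linspace(-1.0, 3.0, 401).reshape(-1, 1)
      E = low(x).ravel() + gpr.predict(x)
      print("predicted minimum near x =", x[np.argmin(E), 0])  # target min at x = 1.15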

  17. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  18. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    NASA Astrophysics Data System (ADS)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    For the inspection of thin slab objects, we propose a digital tomosynthesis reconstruction that uses a reduced number of measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through an object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in the interpolated projection are the weighted sum of pixel values of the measured projections, with weights determined by their projection angles. The simulation experiments show that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
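
    A minimal numpy sketch of the virtual-projection step, assuming simple angle-weighted averaging of the two neighbouring measured projections; the array sizes and angular range below are illustrative, not the paper's acquisition geometry.

      import numpy as np

      def virtual_projections(projs, angles, new_angles):
          """Synthesize virtual projections at new_angles by angle-weighted
          averaging of the two neighbouring measured projections.

          projs  : array (n_views, H, W) of measured projections
          angles : increasing projection angles (degrees), length n_views"""
          out = []
          for a in new_angles:
              j = np.searchsorted(angles, a)            # right neighbour
              i = j - 1                                 # left neighbour
              w = (a - angles[i]) / (angles[j] - angles[i])
              out.append((1 - w) * projs[i] + w * projs[j])
          return np.stack(out)

      # Nine measured views spanning -20..20 deg, virtual views in between
      angles = np.linspace(-20, 20, 9)
      projs = np.random.rand(9, 64, 64).astype(np.float32)
      virt = virtual_projections(projs, angles, np.linspace(-17.5, 17.5, 8))
      print(virt.shape)   # (8, 64, 64)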

  19. The Derivation Of A CO2 Fugacity Climatology From SOCAT's Global In SITU Data

    NASA Astrophysics Data System (ADS)

    Goddijn-Murphy, L. M.; Woolf, D. K.; Land, P. E.; Shutler, J. D.

    2013-12-01

    The Surface Ocean CO2 Atlas (SOCAT) has made millions of global underway sea surface measurements of CO2 publicly available, all in a uniform format and presented as fugacity, fCO2. However, these fCO2 values are valid strictly only for the instantaneous temperature at measurement and are not ideal for climatology. We recomputed these fCO2 values for the measurement month to be applicable to climatological sea surface temperatures, extrapolated to reference year 2010. The data were then spatially interpolated on a 1°×1° grid of the global oceans to produce 12 monthly fCO2 distributions. Our climatology data will be shared with the science community.

  20. Application of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.

    1990-01-01

    A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
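
    A minimal numpy sketch of transfinite interpolation from four boundary curves, using linear (first-order Lagrangian) blending functions; the paper's higher-order Lagrangian blending and monotonic rational quadratic spline distributions are not reproduced, and the example channel geometry is hypothetical.

      import numpy as np

      def tfi_grid(bottom, top, left, right):
          """2-D transfinite interpolation from four boundary curves.

          bottom, top : arrays (ni, 2); left, right : arrays (nj, 2).
          Corner points must be consistent. Linear blending functions
          are used; higher-order blending follows the same form."""
          ni, nj = len(bottom), len(left)
          xi = np.linspace(0, 1, ni)[:, None, None]    # (ni, 1, 1)
          eta = np.linspace(0, 1, nj)[None, :, None]   # (1, nj, 1)
          return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
                  + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
                  - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
                  - (1 - xi) * eta * top[0] - xi * eta * top[-1])

      # Example: a gently curved channel section
      ni, nj = 21, 11
      s = np.linspace(0, 1, ni)
      bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
      top = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)
      t = np.linspace(0, 1, nj)
      left = np.stack([np.zeros(nj), t], axis=1)
      right = np.stack([np.ones(nj), t], axis=1)
      print(tfi_grid(bottom, top, left, right).shape)   # (21, 11, 2)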

  1. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  2. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    NASA Astrophysics Data System (ADS)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers so as to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method for treating outliers via the linear interpolation method. Treating an outlier as a missing value in the dataset allows the interpolation method to fill it in, enabling the comparison of forecast accuracy before and after outlier treatment. To this end, the monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to build the interpolated series. The results indicated that the linear interpolation method, which produced an improved time series, gave better forecasts than the original time series under both Box-Jenkins and neural network approaches.
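
    A minimal pandas sketch of the treat-outliers-as-missing idea; the robust z-score detection rule and the synthetic series below are assumptions for illustration, not the paper's procedure.

      import numpy as np
      import pandas as pd

      # Synthetic monthly series with two injected outliers
      idx = pd.date_range("1998-01", periods=24, freq="MS")
      y = pd.Series(100 + 5 * np.sin(np.arange(24) * 2 * np.pi / 12), index=idx)
      y.iloc[5], y.iloc[17] = 180.0, 20.0       # artificial anomalies

      # Flag outliers (here: > 3 robust z-scores from the median) ...
      dev = (y - y.median()).abs()
      z = dev / (1.4826 * dev.median())
      cleaned = y.mask(z > 3)                   # outliers become NaN

      # ... and treat them as missing values filled by linear interpolation
      cleaned = cleaned.interpolate(method="time")
      print(cleaned.iloc[[5, 17]].round(2))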

  3. The US Navy Coupled Ocean-Wave Prediction System

    DTIC Science & Technology

    2014-09-01

    Stokes drift to be the dominant wave effect and that it increased surface drift speeds by 35% and veered the current in the direction of the wind...ocean model has been modified to incorporate the effect of the Stokes drift current, wave radiation stresses due to horizontal gradients of the momentum...for fourth-order differences for horizontal baroclinic pressure gradients and for interpolation of Coriolis terms. There is an option to use the

  4. Interpolation of the Radial Velocity Data from Coastal HF Radars

    DTIC Science & Technology

    2013-01-01

    practical applications and may help to solve many environmental problems caused by human activity. References: [1] Alvera-Azcarate A., A. Barth, M. Rixen...surface temperature, Ocean Modelling, 9, 325-346. [2] Alvera-Azcarate, A., A. Barth, J.-M. Beckers, and R. H. Weisberg, 2007: Multivariate...predictions from the global Navy Coastal Ocean Model (NCOM) during 1998-2001, J. Atmos. Oceanic Technol., 21(12), 1876-1894. [4] Barth, A., Alvera

  5. Wave Breaking Induced Surface Wakes and Jets Observed during a Bora Event

    DTIC Science & Technology

    2005-01-01

    terrain contours (interval = 200 m) superposed. The approximate NCAR Electra and NOAA P-3 flight tracks are indicated by bold and dotted straight lines ...Hz data. The red curves correspond to the COAMPS simulated fields obtained by interpolating the 1-km grid data to the straight line through the...Alpine Experiment (ALPEX) in 1982 [Smith, 1987]. These studies suggested that the bora flow shares some common characteristics with downslope windstorms

  6. 3-D Characterization of Seismic Properties at the Smart Weapons Test Range, YPG

    DTIC Science & Technology

    2001-10-01

    confidence limits around each interpolated value. Ground truth was accomplished through cross-hole seismic measurements and borehole logs. Surface wave... seismic method, as well as estimating the optimal orientation and spacing of the seismic array. A variety of sources and receivers was evaluated...location within the array is partially related to at least two seismic lines. Either through good fortune or foresight by the designers of the SWTR site

  7. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces...Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012 *Hardin E, Mitas L, Mitasova H., Simulation of Wind-Blown Sand for...Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012 *Russ, E. Mitasova, H., Time series and space-time cube analyses on

  8. Graphics and Flow Visualization of Computer Generated Flow Fields

    NASA Technical Reports Server (NTRS)

    Kathong, M.; Tiwari, S. N.

    1987-01-01

    Flow field variables are visualized using color representations displayed on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two- and three-dimensional flow field solutions are addressed. The transformations, and the use of an interactive graphics program for CFD flow field solutions called PLOT3D, which runs on the color graphics IRIS workstation, are described. An overview of the IRIS workstation is also given.

  9. High-resolution daily gridded datasets of air temperature and wind speed for Europe

    NASA Astrophysics Data System (ADS)

    Brinckmann, S.; Krähenmann, S.; Bissolli, P.

    2015-08-01

    New high-resolution datasets for near surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are hourly SYNOP observations, partly supplemented by station data from the ECA&D dataset (http://www.ecad.eu). These data are quality tested to eliminate erroneous data and various kinds of inhomogeneities. Grids in a resolution of 0.044° (5 km) are derived by spatial interpolation of these station data into the CORDEX area. For temperature interpolation a modified version of a regression kriging method developed by Krähenmann et al. (2011) is used. At first, predictor fields of altitude, continentality and zonal mean temperature are chosen for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross validation is performed. Explained variance ranges from 70 to 90 % for monthly temperature and from 50 to 60 % for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 °C and 1-1.5 m s-1 (depending on season and parameter) for daily temperature parameters and daily mean wind speed, respectively. The datasets presented in this article are published at http://dx.doi.org/10.5676/DWD_CDC/DECREG0110v1.

  10. Application of wavefield compressive sensing in surface wave tomography

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Li, Qingyang; Huang, Jianping

    2018-06-01

    Dense arrays allow sampling of seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. Then we reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in wavefield.
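
    A minimal scikit-learn sketch of the wavefield compressive-sensing idea: seek a sparse plane-wave representation of irregularly sampled station data via an L1-penalized fit (Lasso), then evaluate the recovered wavefield on a regular grid. The single-frequency real-valued formulation and all parameter values below are simplifying assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)

      # Irregularly spaced stations in a 1 x 1 (km) box
      n_sta = 60
      xy = rng.uniform(0.0, 1.0, (n_sta, 2))

      # Plane-wave dictionary over 36 azimuths at a fixed wavenumber (real part)
      k0 = 2 * np.pi / 0.25                                # wavelength 0.25 km
      az = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
      K = k0 * np.stack([np.cos(az), np.sin(az)])          # (2, 36)
      A = np.cos(xy @ K)                                   # (n_sta, 36)

      # Synthetic wavefield: two interfering plane waves plus noise
      truth = np.zeros(36)
      truth[5], truth[20] = 1.0, 0.6
      d = A @ truth + 0.02 * rng.standard_normal(n_sta)

      # Sparse recovery, then reconstruction on a regular 50 x 50 grid
      coef = Lasso(alpha=5e-3, max_iter=50000).fit(A, d).coef_
      gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
      grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
      field = (np.cos(grid @ K) @ coef).reshape(50, 50)
      print(np.flatnonzero(np.abs(coef) > 0.1))            # expect indices near 5 and 20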

  11. Measuring implicit attitudes: A positive framing bias flaw in the Implicit Relational Assessment Procedure (IRAP).

    PubMed

    O'Shea, Brian; Watson, Derrick G; Brown, Gordon D A

    2016-02-01

    How can implicit attitudes best be measured? The Implicit Relational Assessment Procedure (IRAP), unlike the Implicit Association Test (IAT), claims to measure absolute, not just relative, implicit attitudes. In the IRAP, participants make congruent (Fat Person-Active: false; Fat Person-Unhealthy: true) or incongruent (Fat Person-Active: true; Fat Person-Unhealthy: false) responses in different blocks of trials. IRAP experiments have reported positive or neutral implicit attitudes (e.g., neutral attitudes toward fat people) in cases in which negative attitudes are normally found on explicit or other implicit measures. It was hypothesized that these results might reflect a positive framing bias (PFB) that occurs when participants complete the IRAP. Implicit attitudes toward categories with varying prior associations (nonwords, social systems, flowers and insects, thin and fat people) were measured. Three conditions (standard, positive framing, and negative framing) were used to measure whether framing influenced estimates of implicit attitudes. It was found that IRAP scores were influenced by how the task was framed to the participants, that the framing effect was modulated by the strength of prior stimulus associations, and that a default PFB led to an overestimation of positive implicit attitudes when measured by the IRAP. Overall, the findings question the validity of the IRAP as a tool for the measurement of absolute implicit attitudes. A new tool (Simple Implicit Procedure: SIP) for measuring absolute, not just relative, implicit attitudes is proposed.

  12. Applications of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.

    1990-01-01

    A simple procedure has been developed and applied for grid generation around an airplane geometry. The approach is based on transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.
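
    A minimal sketch of 2-D transfinite interpolation with linear Lagrange blending functions (a bilinearly blended Coons patch); the annular-sector boundary curves are hypothetical stand-ins for an airplane-geometry boundary.

    ```python
    import numpy as np

    def tfi_grid(bottom, top, left, right, n_xi=21, n_eta=11):
        # Bilinearly blended transfinite interpolation: linear Lagrange
        # blending of the four boundary curves minus the corner correction.
        xi = np.linspace(0.0, 1.0, n_xi)
        eta = np.linspace(0.0, 1.0, n_eta)
        B = np.array([bottom(u) for u in xi])   # eta = 0 boundary
        T = np.array([top(u) for u in xi])      # eta = 1 boundary
        L = np.array([left(v) for v in eta])    # xi = 0 boundary
        R = np.array([right(v) for v in eta])   # xi = 1 boundary
        grid = np.empty((n_xi, n_eta, 2))
        for i, u in enumerate(xi):
            for j, v in enumerate(eta):
                edges = (1 - v) * B[i] + v * T[i] + (1 - u) * L[j] + u * R[j]
                corners = ((1 - u) * (1 - v) * B[0] + u * (1 - v) * B[-1]
                           + (1 - u) * v * T[0] + u * v * T[-1])
                grid[i, j] = edges - corners
        return grid

    # Hypothetical boundaries: a quarter annulus between radii 1 and 2.
    q = np.pi / 2
    bottom = lambda u: np.array([np.cos(u * q), np.sin(u * q)])
    top    = lambda u: 2.0 * np.array([np.cos(u * q), np.sin(u * q)])
    left   = lambda v: np.array([1.0 + v, 0.0])
    right  = lambda v: np.array([0.0, 1.0 + v])
    grid = tfi_grid(bottom, top, left, right)   # (21, 11, 2) grid points
    ```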

  13. Molecular dynamics simulations of β2-microglobulin interaction with hydrophobic surfaces.

    PubMed

    Dongmo Foumthuim, Cedrix J; Corazza, Alessandra; Esposito, Gennaro; Fogolari, Federico

    2017-11-21

    Hydrophobic surfaces are known to adsorb and unfold proteins, a process that has been studied only for a few proteins. Here we address the interaction of β2-microglobulin, a paradigmatic protein for the study of amyloidogenesis, with hydrophobic surfaces. A system with 27 copies of the protein surrounded by a model cubic hydrophobic box is studied by implicit solvent molecular dynamics simulations. Most proteins adsorb on the walls of the box without major distortions in local geometry, whereas free molecules maintain proper structures and fluctuations as observed in explicit solvent molecular dynamics simulations. The major conclusions from the simulations are as follows: (i) the adopted implicit solvent model is adequate to describe protein dynamics and thermodynamics; (ii) adsorption occurs readily and is irreversible on the simulated timescale; (iii) the regions most involved in molecular encounters and stable interactions with the walls are the same as those that are important in protein-protein and protein-nanoparticle interactions; (iv) unfolding following adsorption occurs at regions found to be flexible by both experiments and simulations; (v) thermodynamic analysis suggests a very large contribution from van der Waals interactions, whereas unfavorable electrostatic interactions are not found to contribute much to adsorption energy. Surfaces with different degrees of hydrophobicity may occur in vivo. Our simulations show that adsorption is a fast and irreversible process which is accompanied by partial unfolding. The results and the thermodynamic analysis presented here are consistent with and rationalize previous experimental work.

  14. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with a smooth convex partition of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part of the construction is their combined use to derive a new basis which enjoys all of the aforementioned interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases; here we put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we also consider replacing ERBS with simpler generalized ERBS (GERBS) [2], namely their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS is simplified computation, since BFBS are piecewise polynomial while ERBS are not. One disadvantage of using BFBS in place of ERBS in this construction is that the necessary selection of the degree of the BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.
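
    A minimal 1-D sketch of the combination idea, assuming uniformly spaced nodes: local Hermite (first-order Taylor) interpolants are blended with a smooth partition of unity whose members vanish flatly at the neighbouring nodes, so the blend inherits both the interpolation and the partition-of-unity properties. The C-infinity bump weights below merely stand in for ERBS and are illustrative, not ERBS proper.

    ```python
    import numpy as np

    def bump(t):
        # C-infinity bump: positive on (0, 1), identically zero outside,
        # with all derivatives vanishing at t = 0 and t = 1.
        out = np.zeros_like(t, dtype=float)
        inside = (t > 0.0) & (t < 1.0)
        ti = t[inside]
        out[inside] = np.exp(-1.0 / (ti * (1.0 - ti)))
        return out

    def hermite_pu_blend(x, nodes, f, df):
        h = nodes[1] - nodes[0]                      # uniform spacing assumed
        num = np.zeros_like(x, dtype=float)
        den = np.zeros_like(x, dtype=float)
        for k in range(len(nodes)):
            L_k = f[k] + df[k] * (x - nodes[k])      # local Hermite interpolant
            w_k = bump((x - nodes[k] + h) / (2 * h)) # support: adjacent cells
            num += w_k * L_k
            den += w_k
        return num / den                             # weights sum to one

    # The blend reproduces the prescribed values and first derivatives
    # at the nodes, because all other weights vanish flatly there.
    nodes = np.linspace(0.0, 1.0, 5)
    f, df = np.sin(2 * np.pi * nodes), 2 * np.pi * np.cos(2 * np.pi * nodes)
    x = np.linspace(0.0, 1.0, 401)
    y = hermite_pu_blend(x, nodes, f, df)
    ```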

  15. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation, in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order, except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D are described.
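
    A minimal sketch of a stretching function of the kind used to control grid-point distribution in algebraic grid generation: a tanh clustering law (in the spirit of Vinokur-type stretching, not necessarily the law implemented in GRID2D/3D) that concentrates points near one boundary; the clustering parameter beta is illustrative.

    ```python
    import numpy as np

    def tanh_stretch(n, beta=3.0):
        # Map a uniform parameter s in [0, 1] to a clustered coordinate in
        # [0, 1], with the finest spacing at s = 0 (e.g. near a viscous wall).
        s = np.linspace(0.0, 1.0, n)
        return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

    eta = tanh_stretch(21)   # clustered distribution fed to the interpolation
    ```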

  16. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss": the error purely due to modelling the continuous terrain surface from only a discrete number of points, plus the error arising from the interpolation process. The SDE must be calculated beforehand from a suitable number of check points located in open terrain, and it is assumed that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second was an area located in the Gádor mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, the Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gádor mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density, in order to optimize survey cost) in projects related to LiDAR survey in non-open terrain, for instance those dealing with forestry applications.
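
    A minimal sketch of the IDW gap-infilling step with the local support of the five closest neighbours; the power parameter p = 2 and the use of a k-d tree for the neighbour search are illustrative implementation choices, not restated by the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def idw(xy_obs, z_obs, xy_new, k=5, p=2.0, eps=1e-12):
        # Interpolate elevations at xy_new from the k closest observed
        # points, weighting each neighbour by 1 / distance**p.
        dist, idx = cKDTree(xy_obs).query(xy_new, k=k)
        w = 1.0 / (dist + eps) ** p     # eps guards against zero distances
        return np.sum(w * z_obs[idx], axis=1) / np.sum(w, axis=1)
    ```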

  17. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation, in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to second order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to second order, except at interfaces where different single grid systems meet, where they are differentiable only up to first order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.

  18. The impact of conventional surface data upon VAS regression retrievals in the lower troposphere

    NASA Technical Reports Server (NTRS)

    Lee, T. H.; Chesters, D.; Mostek, A.

    1983-01-01

    Surface temperature and dewpoint reports are added to the infrared radiances from the VISSR Atmospheric Sounder (VAS) in order to improve the retrieval of temperature and moisture profiles in the lower troposphere. The conventional (airways) surface data are combined with the twelve VAS channels as additional predictors in a ridge regression retrieval scheme, with the aim of using all available data to make high-resolution space-time interpolations of the radiosonde network. For one day of VAS observations, retrievals using only VAS radiances are compared with retrievals using VAS radiances plus surface data. Temperature retrieval accuracy evaluated at coincident radiosonde sites shows a significant impact within the boundary layer. Dewpoint retrieval accuracy shows a broader improvement within the lowest tropospheric layers. The most dramatic impact of surface data is observed in the improved relative spatial and temporal continuity of low-level fields retrieved over the Midwestern United States.
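
    A minimal sketch of the augmented ridge regression idea: the twelve VAS channel radiances are stacked with the two surface predictors, and the profile levels are regressed on them with a ridge penalty. The data shapes, synthetic inputs and ridge parameter are illustrative assumptions, not the paper's actual configuration.

    ```python
    import numpy as np

    def ridge_fit(X, Y, lam=10.0):
        # X: (n_samples, n_predictors); Y: (n_samples, n_levels).
        # Centre both, then solve the regularized normal equations.
        Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
        Xc, Yc = X - Xm, Y - Ym
        B = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ Yc)
        return B, Xm, Ym

    def ridge_predict(B, Xm, Ym, X_new):
        return (X_new - Xm) @ B + Ym

    # Twelve VAS channels augmented with surface temperature and dewpoint.
    rng = np.random.default_rng(1)
    radiances = rng.normal(size=(500, 12))
    surface = rng.normal(size=(500, 2))
    profiles = rng.normal(size=(500, 40))        # e.g. 40 retrieval levels
    X_aug = np.hstack([radiances, surface])
    B, Xm, Ym = ridge_fit(X_aug, profiles)
    retrieved = ridge_predict(B, Xm, Ym, X_aug)
    ```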

  19. Gifted Students' Implicit Beliefs about Intelligence and Giftedness

    ERIC Educational Resources Information Center

    Makel, Matthew C.; Snyder, Kate E.; Thomas, Chandler; Malone, Patrick S.; Putallaz, Martha

    2015-01-01

    Growing attention is being paid to individuals' implicit beliefs about the nature of intelligence. However, implicit beliefs about giftedness are currently underexamined. In the current study, we examined academically gifted adolescents' implicit beliefs about both intelligence and giftedness. Overall, participants' implicit beliefs about…

  20. Orientational Order on Surfaces: The Coupling of Topology, Geometry, and Dynamics

    NASA Astrophysics Data System (ADS)

    Nestler, M.; Nitschke, I.; Praetorius, S.; Voigt, A.

    2018-02-01

    We consider the numerical investigation of surface-bound orientational order using unit tangential vector fields by means of a gradient flow equation for a weak surface Frank-Oseen energy. The energy is composed of intrinsic and extrinsic contributions, as well as a penalization term to enforce the unit length of the vector field. Four different numerical discretizations, namely a discrete exterior calculus approach, a method based on vector spherical harmonics, a surface finite element method, and an approach utilizing an implicit surface description, the diffuse interface method, are described and compared with each other for surfaces with Euler characteristic 2. We demonstrate the influence of geometric properties on realizations of the Poincaré-Hopf theorem and show examples where the energy is decreased by introducing additional orientational defects.
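
    A minimal flat-domain analogue of the penalized gradient flow: a 2-D vector field is relaxed under the Dirichlet (one-constant Frank-Oseen) energy plus a Ginzburg-Landau-type penalty enforcing |v| = 1. The surface geometry, tangentiality and the four discretizations compared in the paper are all omitted; the grid, timestep and penalty strength are illustrative.

    ```python
    import numpy as np

    def laplacian(f):
        # Five-point Laplacian with replicated-edge (zero-flux-like) borders.
        g = np.pad(f, 1, mode='edge')
        return g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * f

    rng = np.random.default_rng(3)
    vx, vy = rng.normal(size=(2, 64, 64))
    norm = np.sqrt(vx**2 + vy**2)
    vx, vy = vx / norm, vy / norm                # start from a unit field

    dt, eps = 0.05, 0.1
    for _ in range(2000):
        norm2 = vx**2 + vy**2
        # Explicit Euler step of dv/dt = Laplacian(v) - (|v|^2 - 1) v / eps.
        vx = vx + dt * (laplacian(vx) - (norm2 - 1.0) * vx / eps)
        vy = vy + dt * (laplacian(vy) - (norm2 - 1.0) * vy / eps)
    ```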
