Sample records for unique hydrodynamical interpolation

  1. A general method for generating bathymetric data for hydrodynamic computer models

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water-depth data for numerical hydrodynamic models from randomly distributed bathymetric data, raw input data from field surveys, water depths digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at the locations required by hydrodynamic models are interpolated from the bathymetric database using the linear or cubic shape functions of the finite-element method. The analysis covers the organization and preprocessing of the bathymetric database, the search algorithm used to find the bounding points for interpolation, the mathematics of the interpolation formulae, and the automatic generation of water depths at hydrodynamic model grid points. This report documents two computer programs that (1) organize the input bathymetric data and (2) interpolate depths for hydrodynamic models. An example of program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
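
The linear shape-function interpolation this record describes can be sketched in miniature: given a triangle of the finite-element mesh that bounds the query point, the depth is a barycentric blend of the three vertex depths. The function name and setup below are illustrative, not taken from the report's programs.

```python
import numpy as np

def interp_depth_triangle(p, tri, depths):
    """Interpolate a water depth at point p inside a triangle using the
    linear (barycentric) shape functions of the finite-element method.
    tri: (3, 2) array of vertex coordinates; depths: (3,) vertex depths."""
    a, b, c = tri
    # Barycentric coordinates solve p = l0*a + l1*b + l2*c with sum(l) = 1.
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    lam = np.array([1.0 - l1 - l2, l1, l2])
    return float(lam @ depths)
```

Because the shape functions are linear, any depth field that is itself linear over the triangle is reproduced exactly; the cubic variant mentioned in the abstract adds curvature at the cost of more nodes per element.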

  2. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
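
The SPH interpolation scheme this record relies on estimates any particle-carried field as a kernel-weighted sum over neighbors. A minimal 1D sketch with the standard cubic-spline kernel (not Py-SPHViewer's actual implementation, which works in 2D/3D with adaptive smoothing lengths):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel with support 2h
    (1D normalization 2/(3h))."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w * (2.0 / (3.0 * h))

def sph_field(x_eval, x_part, masses, values, h):
    """SPH estimate A(x) = sum_j m_j * A_j / rho_j * W(x - x_j, h),
    with rho_j computed from the density summation itself."""
    rho = (masses[None, :] * cubic_spline_kernel(
        x_part[:, None] - x_part[None, :], h)).sum(axis=1)
    W = cubic_spline_kernel(x_eval[:, None] - x_part[None, :], h)
    return (masses * values / rho * W).sum(axis=1)
```

On a uniform particle distribution the cubic spline reproduces a constant field essentially exactly, which is why smooth density maps emerge from discrete particle data.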

  3. Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.

    2011-10-01

    We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled three-dimensional hydrodynamical tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test, and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test. Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performance of GSPH of changing different aspects of its implementation: the choice of the number of neighbours, the accuracy of the interpolation procedure used to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, the order of the reconstruction for the assignment of variables at the interface, and the choice of the limiter that prevents oscillations of interpolated quantities in the solution of the Riemann problem. The results of our tests demonstrate that GSPH is in fact a highly promising hydrodynamic scheme, also to be coupled to an N-body solver, for astrophysical and cosmological applications.

  4. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
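
The single tunable parameter of the sinc kernel family can be illustrated with a minimal, un-normalized 1D sketch (the paper's kernels are normalized in 3D and admit non-integer exponents; this sketch assumes integer exponents and omits the normalization constant):

```python
import numpy as np

def sinc_kernel(q, n):
    """Un-normalized member of the sinc kernel family,
    S_n(q) = sinc(q/2)**n = (sin(pi*q/2) / (pi*q/2))**n,
    on compact support 0 <= q < 2. A larger integer exponent n gives a
    more sharply peaked kernel, which is what lets the method trade
    interpolation quality against smoothing by tuning n alone."""
    q = np.asarray(q, dtype=float)
    # np.sinc(t) = sin(pi*t)/(pi*t), so np.sinc(q/2) matches S_1 above.
    return np.where((q >= 0) & (q < 2), np.sinc(q / 2.0) ** n, 0.0)
```

Varying n reshapes the kernel without changing its support, which is the mechanism behind the self-adaptive equalization described in the abstract.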

  5. Interactions of Waves and River Plume and their Effects on Sediment Transport at River Mouth (RIVET I)

    DTIC Science & Technology

    2013-09-30

    nearshore modeling system for inlet hydrodynamics, sediment deposition/resuspension, river plume processes and the resulting morphodynamics in a...modeling systems are sufficiently robust to provide the critical link (interpolation) between the remote-sensing data and the ground-truth data. The...modeling systems . For example, it is well-known that in numerical modeling of inlet hydrodynamics, the results are sensitive to parameterization of

  6. Unified description of Bjorken and Landau 1+1 hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Janik, R. A.; Peschanski, R.

    2007-11-01

    We propose a generalization of the Bjorken in-out Ansatz for fluid trajectories, which, when applied to the 1+1 hydrodynamic equations, generates a one-parameter family of analytic solutions interpolating between the boost-invariant Bjorken picture and the non-boost-invariant one by Landau. This parameter characterizes the proper-time scale when the fluid velocities approach the in-out Ansatz. We discuss the resulting rapidity distribution of entropy for various freeze-out conditions and compare it with the original Bjorken and Landau results.

  7. Hermite-Birkhoff interpolation in the nth roots of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.

    1980-06-01

    Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
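
In the pure Lagrange sub-case of this setting (function values only, no derivative data), the unique interpolating polynomial at the nth roots of unity is recovered by a single DFT. The sketch below covers only that sub-case; the Hermite-Birkhoff problem with derivative conditions treated in the paper is more general.

```python
import numpy as np

def interp_roots_of_unity(values):
    """Coefficients of the unique degree < n polynomial interpolating the
    given values at the nth roots of unity w_k = exp(2*pi*i*k/n).
    Since p(w_k) = sum_j c_j * w_k**j is an inverse DFT of the c_j,
    the coefficients are c = FFT(values) / n."""
    values = np.asarray(values, dtype=complex)
    return np.fft.fft(values) / len(values)

def eval_poly(coeffs, z):
    """Evaluate sum_j coeffs[j] * z**j by Horner's rule."""
    result = 0.0 + 0.0j
    for c in reversed(coeffs):
        result = result * z + c
    return result
```

For example, interpolating f(z) = z^2 at the 4th roots of unity recovers the coefficient vector (0, 0, 1, 0) exactly.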

  8. Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.

    2015-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents an effort to improve, through numerical modeling, knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field. The challenge is to couple the prompt processes that take place in the near-source region to those taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming, and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability, which has allowed the testing of a new phenomenological model for stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, and the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full waveform modeling with mathematical accuracy (e.g., Komatitsch, 1998, 2002), thanks to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of the local topography.

  9. The moving-least-squares-particle hydrodynamics method (MLSPH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas, which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua, using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is then equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation, at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, the computational expense is at this point significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
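
The "add up correctly everywhere" property of MLS interpolants comes from their exact reproduction of the polynomial basis. A minimal 1D sketch with a linear basis and Gaussian weights (illustrative only; the MLSPH method embeds these interpolants in a Galerkin scheme, which this sketch does not attempt):

```python
import numpy as np

def mls_interp(x_eval, x_nodes, f_nodes, h):
    """1D moving-least-squares approximation with a linear basis [1, x]
    and a Gaussian weight of bandwidth h: at each evaluation point, fit a
    weighted least-squares line through the nodes and read off its value."""
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    out = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        w = np.exp(-((x_nodes - x) / h) ** 2)
        # Shifted basis [1, x_j - x]: the fit's value at x is just beta[0].
        A = np.vstack([np.ones_like(x_nodes), x_nodes - x]).T
        AtW = A.T * w
        # Weighted normal equations (A^T W A) beta = A^T W f.
        beta = np.linalg.solve(AtW @ A, AtW @ f_nodes)
        out[i] = beta[0]
    return out
```

With a linear basis, any linear nodal field is reproduced exactly at every evaluation point, regardless of node spacing; that consistency is what cures the spurious boundary and strain-rate errors of the raw SPH kernel sum.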

  10. Interpolation of hard and soft dilepton rates

    NASA Astrophysics Data System (ADS)

    Ghisoiu, I.; Laine, M.

    2014-10-01

    Strict next-to-leading order (NLO) results for the dilepton production rate from a QCD plasma at temperatures above a few hundred MeV suffer from a breakdown of the loop expansion in the regime of soft invariant masses M^2 ≪ (πT)^2. In this regime an LPM resummation is needed for obtaining the correct leading-order result. We show how to construct an interpolation between the hard NLO and the leading-order LPM expression, which is theoretically consistent in both regimes and free from double counting. The final numerical results are presented in a tabulated form, suitable for insertion into hydrodynamical codes.

  11. Assessment of groundwater potential of the crystalline basement of Wadi-Fira (Eastern Chad) using a multi-criteria correlation analysis and Remote Sensing data

    NASA Astrophysics Data System (ADS)

    Brahim Mahamat, Hamza; Coz Mathieu, Le; Abderamane, Hamit; Razack, Moumtaz

    2017-04-01

    Access to water in the Wadi-Fira aquifer system is a crucial problem in Eastern Chad because of (i) the complexity of the hydrogeological context (fractured basement); (ii) the large extent of the study area (50,000 km2); and (iii) hard-to-access field data (only 34 water points were available to determine physicochemical and hydrodynamic parameters), often associated with high uncertainty. This groundwater resource is paramount in this arid environment to meet the water needs of a steadily growing population (refugees from Darfur) with a predominant pastoral activity. In order to optimally exploit the available data, correlative analyses are carried out by integrating the spatial dimension of the data with GIS tools. A three-step strategy is thus implemented, based on: (i) point field data with physicochemical and hydrodynamic parameters; (ii) maps interpolated from point data, to increase the number of ''comparable'' parameters for each site; and (iii) interpolated maps coupled to maps from Remote Sensing results describing the area's structural geomorphology (slopes, hydrographic network, faults). The first results show marked correlations between physico-chemical and hydrodynamical parameters. According to the correlation matrix, the static level correlates significantly with the dominant cations (Ca2+; R = 0.52) and anions (HCO3-; R = 0.53). Correlations are lower between electrical conductivity and transmissivity, and between electrical conductivity and the measured static level. A negative correlation is observed between fluorine and transmissivity (r = -0.65), and between fluorine and the altered horizon (r = -0.5). The most significant discharges are obtained in fissured horizons. The correlative analysis allows us to differentiate mapped sectors according to the productivity and chemical quality of the groundwater resource. Keywords: Hydrodynamics, Hydrochemistry, Remote Sensing, SRTM, Basement aquifer, Alteration, Lineaments, Wadi-Fira, Tchad.
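
The correlation-matrix screening described above amounts to computing pairwise Pearson coefficients over the parameters measured at each water point. A sketch on synthetic data (the variable names and values are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical records at 34 water points, mimicking the study's sample
# size; 'calcium' is deliberately constructed to correlate with the
# static level, 'conductivity' is independent noise.
rng = np.random.default_rng(0)
static_level = rng.normal(20.0, 5.0, 34)
calcium = 0.5 * static_level + rng.normal(0.0, 3.0, 34)
conductivity = rng.normal(800.0, 100.0, 34)

def correlation_matrix(*params):
    """Pearson correlation matrix of the given 1D parameter arrays, as
    used to screen pairwise physico-chemical relationships."""
    return np.corrcoef(np.vstack(params))

R = correlation_matrix(static_level, calcium, conductivity)
```

Each off-diagonal entry of R is the R-value quoted in the abstract for the corresponding parameter pair; with only 34 points, individual coefficients carry substantial sampling uncertainty, which is why the study couples them with interpolated maps.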

  12. Spatial interpolation of river channel topography using the shortest temporal distance

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Xian, Cuiling; Chen, Huajin; Grieneisen, Michael L.; Liu, Jiaming; Zhang, Minghua

    2016-11-01

    It is difficult to interpolate river channel topography due to complex anisotropy. As the anisotropy is often caused by river flow, especially the hydrodynamic and transport mechanisms, it is reasonable to incorporate flow velocity into topography interpolator for decreasing the effect of anisotropy. In this study, two new distance metrics defined as the time taken by water flow to travel between two locations are developed, and replace the spatial distance metric or Euclidean distance that is currently used to interpolate topography. One is a shortest temporal distance (STD) metric. The temporal distance (TD) of a path between two nodes is calculated by spatial distance divided by the tangent component of flow velocity along the path, and the STD is searched using the Dijkstra algorithm in all possible paths between two nodes. The other is a modified shortest temporal distance (MSTD) metric in which both the tangent and normal components of flow velocity were combined. They are used to construct the methods for the interpolation of river channel topography. The proposed methods are used to generate the topography of Wuhan Section of Changjiang River and compared with Universal Kriging (UK) and Inverse Distance Weighting (IDW). The results clearly showed that the STD and MSTD based on flow velocity were reliable spatial interpolators. The MSTD, followed by the STD, presents improvement in prediction accuracy relative to both UK and IDW.
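
The STD metric can be sketched as a Dijkstra search in which each edge weight is a travel time rather than a length. The graph encoding below is illustrative (the paper builds the graph from a velocity field over the channel; here each edge simply carries a precomputed length and along-flow velocity component):

```python
import heapq
import math

def shortest_temporal_distance(graph, source, target):
    """Dijkstra search where each edge weight is a travel *time*: the
    spatial length of the edge divided by the (positive) component of
    the flow velocity along it, as in the STD metric. graph maps each
    node to a list of (neighbor, length, v_along) tuples."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    done = set()
    while heap:
        t, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return t
        for v, length, v_along in graph.get(u, []):
            if v_along <= 0:  # water cannot carry us along this edge
                continue
            nt = t + length / v_along
            if nt < dist.get(v, math.inf):
                dist[v] = nt
                heapq.heappush(heap, (nt, v))
    return math.inf
```

A fast downstream path can thus be temporally "closer" than a short cross-channel one, which is exactly how the metric encodes the flow-induced anisotropy of channel topography.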

  13. Hydrodynamics of the Polyakov line in SU(Nc) Yang-Mills

    DOE PAGES

    Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail

    2015-12-08

    We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite Nc for Yang-Mills theory in even and odd space-time dimensions. The hydrostatic solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the deconfined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of Nc, and are consistent with the string model results at Nc = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(Nc) bubble using a piece-wise sound wave is suggested.

  14. Numerical Simulation of Hydrodynamics of a Heavy Liquid Drop Covered by Vapor Film in a Water Pool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, W.M.; Yang, Z.L.; Giri, A.

    2002-07-01

    A numerical study on the hydrodynamics of a droplet covered by a vapor film in a water pool is carried out. Two level set functions are used to implicitly capture the interfaces among three immiscible fluids (melt drop, vapor, and coolant). This approach leaves only one set of conservation equations for the three phases. A high-order Navier-Stokes solver, the Cubic-Interpolated Pseudo-Particle (CIP) algorithm, is employed in combination with the level set approach, which allows large density ratios (up to 1000), surface tension, and jumps in viscosity. With this calculation, the hydrodynamic behavior of a melt droplet falling into a volatile coolant is simulated, which is of great significance for revealing the mechanism of steam explosion during a hypothetical severe reactor accident. (authors)

  15. Hydrodynamics of the Dirac fluid in graphene

    NASA Astrophysics Data System (ADS)

    Lucas, Andrew

    Recent advances in materials physics have allowed us to observe hydrodynamic electron flow in multiple materials. A uniquely interesting possibility is the emergence of a quasi-relativistic plasma of electrons and holes appearing in Dirac semimetals such as graphene. I will briefly review the unique features of the hydrodynamics of the Dirac fluid, and then discuss the theoretical signatures of the Dirac fluid and its observation in experiment.

  16. Resurgence and hydrodynamic attractors in Gauss-Bonnet holography

    NASA Astrophysics Data System (ADS)

    Casalderrey-Solana, Jorge; Gushterov, Nikola I.; Meiring, Ben

    2018-04-01

    We study the convergence of the hydrodynamic series in the gravity dual of Gauss-Bonnet gravity in five dimensions with negative cosmological constant via holography. By imposing boost invariance symmetry, we find a solution to the Gauss-Bonnet equation of motion in inverse powers of the proper time, from which we can extract high order corrections to Bjorken flow for different values of the Gauss-Bonnet parameter λGB. As in all other known examples the gradient expansion is, at most, an asymptotic series which can be understood through applying the techniques of Borel-Padé summation. As expected from the behaviour of the quasi-normal modes in the theory, we observe that the singularities in the Borel plane of this series show qualitative features that interpolate between the infinitely strong coupling limit of N=4 Super Yang Mills theory and the expectation from kinetic theory. We further perform the Borel resummation to constrain the behaviour of hydrodynamic attractors beyond leading order in the hydrodynamic expansion. We find that for all values of λGB considered, the convergence of different initial conditions to the resummation and its hydrodynamization occur at large and comparable values of the pressure anisotropy.

  17. Well-posedness and decay for the dissipative system modeling electro-hydrodynamics in negative Besov spaces

    NASA Astrophysics Data System (ADS)

    Zhao, Jihong; Liu, Qiao

    2017-07-01

    In Guo and Wang (2012) [10], Y. Guo and Y. Wang developed a general new energy method for proving the optimal time decay rates of the solutions to dissipative equations. In this paper, we generalize this method in the framework of homogeneous Besov spaces. Moreover, we apply this method to a model arising from electro-hydrodynamics, which is a strongly coupled system of the Navier-Stokes equations and the Poisson-Nernst-Planck equations through charge transport and external forcing terms. We show that some weighted negative Besov norms of solutions are preserved along time evolution, and obtain the optimal time decay rates of the higher-order spatial derivatives of solutions by the Fourier splitting approach and the interpolation techniques.

  18. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
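
The decomposition idea can be shown in miniature with plain linear interpolation: a 2D resize becomes two independent passes of 1-D interpolation, one along rows and one along columns. DMCGI itself replaces each 1-D pass with a registration-based control grid interpolator; this sketch only illustrates the separable structure.

```python
import numpy as np

def resize_separable(img, new_h, new_w):
    """Resize a 2D image with two independent 1-D linear-interpolation
    passes: first along each row, then along each column of the result."""
    h, w = img.shape
    xs = np.linspace(0, w - 1, new_w)
    tmp = np.stack([np.interp(xs, np.arange(w), row) for row in img])
    ys = np.linspace(0, h - 1, new_h)
    out = np.stack([np.interp(ys, np.arange(h), col) for col in tmp.T]).T
    return out
```

Each pass is cheap and cache-friendly, which is the efficiency argument the abstract makes; the accuracy then hinges entirely on the quality of the 1-D interpolator that is plugged in.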

  19. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in an MHD equation, the approximations to the derivatives of ψ require the neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, in smooth regions a fixed-stencil approximation without computing the non-linear WENO weights is used, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution of the corresponding fine grid.

  20. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces...Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012 *Hardin E, Mitas L, Mitasova H., Simulation of Wind-Blown Sand for...Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012 *Russ, E. Mitasova, H., Time series and space-time cube analyses on

  1. Spatially continuous interpolation of water stage and water depths using the Everglades depth estimation network (EDEN)

    USGS Publications Warehouse

    Pearlstine, Leonard; Higer, Aaron; Palaseanu, Monica; Fujisaki, Ikuko; Mazzotti, Frank

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-meter spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades.
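
Gridding scattered gauge readings onto a regular surface, as EDEN does daily, can be sketched with generic inverse-distance weighting (a stand-in only; EDEN's operational water surfaces are produced with a more elaborate interpolation of the gauge network):

```python
import numpy as np

def idw_grid(xy_obs, stage_obs, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted gridding of scattered stage observations
    onto a regular grid: each cell takes a distance-weighted average of
    all gauges, so nearby gauges dominate."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at gauge locations
    w = d ** -power
    z = (w * stage_obs).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gy.shape)
```

Subtracting a ground-elevation model from the interpolated stage surface then yields the water-depth grids that the biologists and ecologists mentioned above actually consume.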

  2. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C^0 continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.

  3. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

    This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). Interpolation is necessary in image scaling because the number of pixels increases or decreases. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the nearest distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images in INEQR is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits implementing NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, experiments involving different classical (i.e., conventional, non-quantum) images and scaling ratios are simulated with the classical computer's MATLAB 2014b software, demonstrating that the proposed interpolation method achieves higher resolution than nearest neighbor and bilinear interpolation.
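
The classical nearest-neighbor baseline the paper compares against copies, for each output pixel, the closest input pixel. A minimal sketch of that baseline (the NNV scheme itself, which selects by nearest pixel *value* and runs as a quantum circuit, is not reproduced here):

```python
import numpy as np

def nn_resize(img, new_h, new_w):
    """Classical nearest-neighbor image scaling: each output pixel copies
    the nearest input pixel, found by scaling its index back to the
    input grid and truncating."""
    h, w = img.shape
    rows = np.minimum((np.arange(new_h) * h / new_h).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) * w / new_w).astype(int), w - 1)
    return img[rows][:, cols]
```

This baseline is cheap but blocky under magnification, which is the artifact that value-guided and bilinear schemes aim to reduce.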

  4. Microscale hydrodynamics near moving contact lines

    NASA Technical Reports Server (NTRS)

    Garoff, Stephen; Chen, Q.; Rame, Enrique; Willson, K. R.

    1994-01-01

    The hydrodynamics governing the fluid motions on a microscopic scale near moving contact lines are different from those governing motion far from the contact line. We explore these unique hydrodynamics by detailed measurement of the shape of a fluid meniscus very close to a moving contact line. The validity of present models of the hydrodynamics near moving contact lines as well as the dynamic wetting characteristics of a family of polymer liquids are discussed.

  5. Calibration of the 2D Hydrodynamic Model Floodos and Implications of Distributed Friction on Sediment Transport Capacity

    NASA Astrophysics Data System (ADS)

    Croissant, T.; Lague, D.; Davy, P.

    2014-12-01

    Numerical models of floodplain dynamics often use a simplified 1D description of flow hydraulics and sediment transport that cannot fully account for the friction contrast between vegetated banks and the low-friction main channel. Key parameters of such models are the friction coefficient and the description of the channel bathymetry, which strongly influence the predicted water depth and velocity, and therefore the sediment transport capacity. In this study, we use a newly developed 2D hydrodynamic model, Floodos, whose efficiency is a major advantage for exploring channel morphodynamics from a single flood event to millennial time scales. We evaluate the quality of Floodos predictions in the Whataroa River, New Zealand, and assess the effect of a spatially distributed friction coefficient (SDFC) on long-term sediment transport. Predictions from the model are compared to water-depth data from a gauging station located on the Whataroa River in the Southern Alps, New Zealand. The Digital Elevation Model (DEM) of the 2.5 km long study reach is derived from a 2010 LiDAR acquisition with 2 m resolution and an interpolated bathymetry. The several large floods experienced by this river during 2010 give us access to water depths for a wide range of possible river discharges and allow us to retrieve the scaling between these two parameters. A non-negligible part of the bathymetry in the high-resolution DEM is submerged and could not be captured by airborne LiDAR. The bathymetry can be reconstructed by interpolation methods that introduce several uncertainties into the water-depth predictions. We address the uncertainties inherent to the interpolation using a simplified channel with a geometry (slope and width) similar to that of the Whataroa River. We then explore the effect of an SDFC on the velocity pattern, water depth, and sediment transport capacity, and discuss its relevance for long-term predictions of sediment transport and channel morphodynamics.
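
The leverage a friction coefficient has on velocity, and hence on transport capacity, can be illustrated with Manning's equation, a common friction closure in river hydraulics. This is only an illustration of the friction effect; Floodos solves the 2D hydrodynamic equations and is not reduced to this formula.

```python
import numpy as np

def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity from Manning's equation (SI units):
    v = (1/n) * R**(2/3) * S**(1/2). A higher friction coefficient n on
    vegetated banks than in the main channel lowers the velocity there,
    which is the contrast a spatially distributed friction field encodes."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)
```

For example, with typical values n = 0.03 for a clean channel versus n = 0.10 for densely vegetated banks, the same depth and slope yield roughly a threefold velocity contrast, concentrating sediment transport in the main channel.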

  6. Automatic feature-based grouping during multiple object tracking.

    PubMed

    Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J

    2013-12-01

    Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.

  7. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; the lack of obvious symmetries or of a simplified spatial geometry to exploit necessitates 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
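
    The linear-consistency correction at the heart of reproducing-kernel interpolation can be sketched in one dimension. The code below is an illustrative Python sketch of that idea, not the CRKSPH discretization itself; the Gaussian base kernel and the `np.gradient` volume estimate are assumptions made for brevity.

```python
import numpy as np

def rk_interpolate(xp, fp, x_eval, h=0.1):
    """Linearly consistent reproducing-kernel estimate of f at x_eval from
    scattered particles xp carrying values fp (1D sketch of the idea)."""
    V = np.gradient(xp)                 # crude per-particle volumes
    out = []
    for x in np.atleast_1d(x_eval):
        dx = xp - x
        w = np.exp(-(dx / h) ** 2)      # smooth base kernel (an assumption)
        m0 = np.sum(V * w)              # zeroth, first, second kernel moments
        m1 = np.sum(V * dx * w)
        m2 = np.sum(V * dx ** 2 * w)
        # Correction coefficients A, B enforce exact reproduction of
        # constant and linear fields: A*m0 + B*m1 = 1, A*m1 + B*m2 = 0.
        A, B = np.linalg.solve([[m0, m1], [m1, m2]], [1.0, 0.0])
        out.append(np.sum(V * fp * (A + B * dx) * w))
    return np.array(out)
```

    On irregular particle distributions this corrected estimate reproduces linear fields exactly, which the plain SPH kernel summation does not; that is the accuracy gain the abstract attributes to reproducing kernels.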

  8. Considerations Relating to Type 1 and Type 3 Non-uniqueness in SPRT Interpolations of the ITS-90

    NASA Astrophysics Data System (ADS)

    Rusby, R. L.; Pearce, J. V.; Elliott, C. J.

    2017-12-01

    It is well known that different allowed interpolations using a given standard platinum resistance thermometer (SPRT) in overlapping subranges of the ITS-90 do not lead to identical results. This is termed Type 1 non-uniqueness, or subrange inconsistency (SRI), and it arises because of small incompatibilities in the SPRT characteristic W(T_{90}) with respect to the ITS-90 reference function Wr(T_{90}), such that the alternative low-order interpolations, fitted to the deviations W(T_{90}) - Wr(T_{90}) at different sets of fixed points, are not in general identical. To some extent SRI may be `scale-intrinsic,' i.e., caused by incompatibilities between the resistance ratios, Wr(T_{90}), specified at the fixed points of the ITS-90, and hence the same for all SPRTs. However, it has been found that the SRI varies strongly between different SPRTs, and that variability of W(T_{90}) is by far the dominant cause. This raises the question of how SRI is linked to Type 3 non-uniqueness between SPRTs in each separate subrange, which is entirely due to differences in SPRT characteristics. This paper explores the connection between them and concludes that they are of similar magnitude; consequently, being different manifestations of the same effects, it is argued that non-uniqueness should be covered by a single component of uncertainty. Following the stated rationale of the ITS-90, it is further suggested that this uncertainty should be estimated only within each subrange, i.e., that shorter subranges should not be deemed subject to potential effects caused by out-of-range data.
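
    The mechanics of subrange inconsistency can be illustrated numerically. The snippet below fits two ITS-90-style deviation functions of the form a*x + b*x**2, with x = W - 1, to different fixed-point subsets and compares them where both are allowed; all deviation values here are invented for illustration and are not real SPRT or ITS-90 data.

```python
import numpy as np

# Invented deviations D = W - Wr for one hypothetical SPRT, tabulated
# against x = W - 1 at three fixed points (NOT real ITS-90 data).
x_fp = np.array([0.10, 0.60, 1.20])
D_fp = np.array([2.0e-6, 9.0e-6, 2.1e-5])

# "Long" subrange: quadratic deviation a*x + b*x**2 fixed by the two
# outer fixed points (both forms vanish at the triple point, x = 0).
M = np.array([[x_fp[1], x_fp[1] ** 2],
              [x_fp[2], x_fp[2] ** 2]])
a_long, b_long = np.linalg.solve(M, D_fp[1:])

# "Short" subrange: linear deviation a*x fixed by the inner point only.
a_short = D_fp[0] / x_fp[0]

def D_long(x):
    return a_long * x + b_long * x ** 2

def D_short(x):
    return a_short * x

# Type 1 non-uniqueness: both interpolations are allowed below the
# inner fixed point, yet they disagree there.
sri = D_long(x_fp[0]) - D_short(x_fp[0])
```

    Each fit reproduces its own fixed points exactly, yet the two interpolations differ in the overlap; the magnitude of `sri` here is set entirely by the invented data.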

  9. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne, and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required by the original ETA.

  10. Estimation of water surface elevations for the Everglades, Florida

    USGS Publications Warehouse

    Palaseanu, Monica; Pearlstine, Leonard

    2008-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring gages and modeling methods that provides scientists and managers with current (2000–present) online water surface and water depth information for the freshwater domain of the Greater Everglades. This integrated system presents data on a 400-m square grid to assist in (1) large-scale field operations; (2) integration of hydrologic and ecologic responses; (3) supporting biological and ecological assessment of the implementation of the Comprehensive Everglades Restoration Plan (CERP); and (4) assessing trophic-level responses to hydrodynamic changes in the Everglades. This paper investigates the radial basis function multiquadric method of interpolation to obtain a continuous freshwater surface across the entire Everglades using radio-transmitted data from a network of water-level gages managed by the US Geological Survey (USGS), the South Florida Water Management District (SFWMD), and the Everglades National Park (ENP). Since the hydrological connection is interrupted by canals and levees across the study area, boundary conditions were simulated by linearly interpolating along those features and integrating the results together with the data from marsh stations to obtain a continuous water surface through multiquadric interpolation. The absolute cross-validation errors greater than 5 cm correlate well with the local outliers and the minimum distance between the closest stations within a 2000-m radius, but seem to be independent of vegetation or season.
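
    A minimal version of the multiquadric interpolation step can be sketched as follows; this is a generic Hardy multiquadric fit with an arbitrary shape parameter, not the EDEN implementation (which additionally handles canal and levee boundary conditions).

```python
import numpy as np

def multiquadric_interpolate(stations, values, query, c=1.0):
    """Hardy multiquadric RBF interpolation of scattered gage data.

    stations : (n, 2) gage coordinates
    values   : (n,) water levels at the gages
    query    : (m, 2) points where the surface is wanted
    c        : shape parameter (an illustrative choice, not EDEN's)
    """
    # Basis matrix phi_ij = sqrt(|x_i - x_j|^2 + c^2) between stations
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    phi = np.sqrt(d ** 2 + c ** 2)
    wts = np.linalg.solve(phi, values)   # weights so the surface honors the data
    dq = np.linalg.norm(query[:, None, :] - stations[None, :, :], axis=-1)
    return np.sqrt(dq ** 2 + c ** 2) @ wts
```

    By construction the interpolated surface passes exactly through every station value, which is why cross-validation (withholding one gage at a time) is the natural error measure for this method.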

  11. Hydrodynamic predictions for 5.44 TeV Xe+Xe collisions

    NASA Astrophysics Data System (ADS)

    Giacalone, Giuliano; Noronha-Hostler, Jacquelyn; Luzum, Matthew; Ollitrault, Jean-Yves

    2018-03-01

    We argue that relativistic hydrodynamics is able to make robust predictions for soft particle production in Xe+Xe collisions at the CERN Large Hadron Collider (LHC). The change of system size from Pb+Pb to Xe+Xe provides a unique opportunity to test the scaling laws inherent to fluid dynamics. Using event-by-event hydrodynamic simulations, we make quantitative predictions for several observables: mean transverse momentum, anisotropic flow coefficients, and their fluctuations. Results are shown as a function of collision centrality.

  12. Theory of the lattice Boltzmann Method: Dispersion, Dissipation, Isotropy, Galilean Invariance, and Stability

    NASA Technical Reports Server (NTRS)

    Lallemand, Pierre; Luo, Li-Shi

    2000-01-01

    The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically, using a perturbation technique, or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long-wavelength limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.
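
    One hydrodynamic prediction of this kind of analysis is easy to check numerically: for the plain single-relaxation-time (BGK) D2Q9 model, a sinusoidal shear wave of wavenumber k decays as exp(-nu k^2 t) with nu = (tau - 1/2)/3 in lattice units. The sketch below uses that simple BGK model, not the paper's generalized moment-space LBE, to measure the decay.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2
                                     - 1.5 * (ux ** 2 + uy ** 2))

def shear_wave_decay(n=32, tau=0.8, u0=1e-2, steps=200):
    """Ratio of final to initial shear-wave amplitude after `steps` updates."""
    y = np.arange(n)
    ux = np.tile(u0 * np.sin(2 * np.pi * y / n), (n, 1))  # ux varies with y
    rho, uy = np.ones((n, n)), np.zeros((n, n))
    f = equilibrium(rho, ux, uy)
    for _ in range(steps):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau         # BGK collision
        for i in range(9):                                # streaming
            f[i] = np.roll(f[i], (c[i, 0], c[i, 1]), axis=(0, 1))
    ux = (f * c[:, 0, None, None]).sum(axis=0) / f.sum(axis=0)
    # project the remaining ux field onto the initial sine mode
    amp = 2.0 * (ux * np.sin(2 * np.pi * y / n)[None, :]).mean()
    return amp / u0
```

    On a 32^2 grid with tau = 0.8 the measured decay agrees with the hydrodynamic prediction to within a few percent; the residual difference at finite k is exactly the kind of wave-vector dependence the generalized analysis quantifies.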

  13. A general tool for the evaluation of spiral CT interpolation algorithms: revisiting the effect of pitch in multislice CT.

    PubMed

    Bricault, Ivan; Ferretti, Gilbert

    2005-01-01

    While multislice spiral computed tomography (CT) scanners are provided by all major manufacturers, their specific interpolation algorithms have been rarely evaluated. Because the results published so far relate to distinct particular cases and differ significantly, there are contradictory recommendations about the choice of pitch in clinical practice. In this paper, we present a new tool for the evaluation of multislice spiral CT z-interpolation algorithms, and apply it to the four-slice case. Our software is based on the computation of a "Weighted Radiation Profile" (WRP), and compares WRP to an expected ideal profile in terms of widening and heterogeneity. It provides a unique scheme for analyzing a large variety of spiral CT acquisition procedures. Freely chosen parameters include: number of detector rows, detector collimation, nominal slice width, helical pitch, and interpolation algorithm with any filter shape and width. Moreover, it is possible to study any longitudinal and off-isocenter positions. Theoretical and experimental results show that WRP, more than Slice Sensitivity Profile (SSP), provides a comprehensive characterization of interpolation algorithms. WRP analysis demonstrates that commonly "preferred helical pitches" are actually nonoptimal regarding the formerly distinguished z-sampling gap reduction criterion. It is also shown that "narrow filter" interpolation algorithms do not enable a general preferred pitch discussion, since they present poor properties with large longitudinal and off-center variations. In the more stable case of "wide filter" interpolation algorithms, SSP width or WRP widening are shown to be almost constant. Therefore, optimal properties should no longer be sought in terms of these criteria. On the contrary, WRP heterogeneity is related to variable artifact phenomena and can pertinently characterize optimal pitches. In particular, the exemplary interpolation properties of pitch = 1 "wide filter" mode are demonstrated.

  14. SPHYNX: an accurate density-based SPH method for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Cabezón, R. M.; García-Senz, D.; Figueira, J.

    2017-10-01

    Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique along with some new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to (or in some particular cases better than) those obtained with other SPH schemes such as GADGET-2, PSPH, or the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
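
    The sinc kernel family mentioned above is compact to write down. The sketch below defines S_n(q) = [sin(pi q/2)/(pi q/2)]^n on support q in [0, 2] and computes 3D normalization constants numerically; the quadrature choice is ours, not SPHYNX's.

```python
import numpy as np

def sinc_kernel(q, n):
    """Un-normalized sinc kernel S_n(q) with compact support q in [0, 2]."""
    q = np.asarray(q, dtype=float)
    out = np.zeros_like(q)
    inside = (q > 0) & (q < 2)
    x = 0.5 * np.pi * q[inside]
    out[inside] = (np.sin(x) / x) ** n
    out[q == 0.0] = 1.0            # limit of sinc at the origin
    return out

def normalization_3d(n, samples=4001):
    """B_n such that 4*pi*B_n * integral of S_n(q) q^2 dq over [0,2] is 1."""
    q = np.linspace(0.0, 2.0, samples)
    dq = q[1] - q[0]
    integral = np.sum(sinc_kernel(q, n) * q ** 2) * dq   # simple quadrature
    return 1.0 / (4.0 * np.pi * integral)
```

    Raising the exponent n sharpens the central peak of the kernel, which is the knob this family offers against pairing instability.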

  15. Impact of hydrodynamics on effective interactions in suspensions of active and passive matter.

    PubMed

    Krafnick, Ryan C; García, Angel E

    2015-02-01

    Passive particles exhibit unique properties when immersed in an active bath of self-propelling entities. In particular, an effective attraction can appear between particles that repel each other when in a passive solution. Here we numerically study the effect of hydrodynamics on an active-passive hybrid system, where we observe qualitative differences as compared to simulations with excluded volume effects alone. The results shed light on an existing discrepancy in pair lifetimes between simulation and experiment, due to the hydrodynamically enhanced stability of coupled passive particles.

  16. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.

  17. DRACO development for 3D simulations

    NASA Astrophysics Data System (ADS)

    Fatenejad, Milad; Moses, Gregory

    2006-10-01

    The DRACO (r-z) Lagrangian radiation-hydrodynamics laser fusion simulation code is being extended to model 3D hydrodynamics in (x-y-z) coordinates with hexahedral cells on a structured grid. The equation of motion is solved with a Lagrangian update with optional rezoning. The fluid equations are solved using an explicit scheme based on (Schulz, 1964), while the SALE-3D algorithm (Amsden, 1981) is used as a template for computing cell volumes and other quantities. A second-order rezoner has been added which uses linear interpolation of the underlying continuous functions to preserve accuracy (Van Leer, 1976). Artificial restoring force terms and smoothing algorithms are used to avoid grid distortion in high-aspect-ratio cells. These include alternate node couplers along with a rotational restoring force based on the Tensor Code (Maenchen, 1964). Electron and ion thermal conduction is modeled using an extension of Kershaw's method (Kershaw, 1981) to 3D geometry. Test problem simulations will be presented to demonstrate the applicability of this new version of DRACO to the study of fluid instabilities in three dimensions.

  18. Swimming of a linear chain with a cargo in an incompressible viscous fluid with inertia

    NASA Astrophysics Data System (ADS)

    Felderhof, B. U.

    2017-01-01

    An approximation to the added mass matrix of an assembly of spheres is constructed on the basis of potential flow theory for situations where one sphere is much larger than the others. In the approximation, the flow potential near a small sphere is assumed to be dipolar, but near the large sphere it involves all higher order multipoles. The analysis is based on an exact result for the potential of a magnetic dipole in the presence of a superconducting sphere. Subsequently, the approximate added mass hydrodynamic interactions are used in a calculation of the swimming velocity and rate of dissipation of linear chain structures consisting of a number of small spheres and a single large one, with account also of frictional hydrodynamic interactions. The results derived for periodic swimming on the basis of a kinematic approach are compared with the bilinear theory, valid for small amplitude of stroke, and with the numerical solution of the approximate equations of motion. The calculations interpolate over the whole range of scale number between the friction-dominated Stokes limit and the inertia-dominated regime.

  19. Tri-linear interpolation-based cerebral white matter fiber imaging

    PubMed Central

    Jiang, Shan; Zhang, Pengfei; Han, Tong; Liu, Weihua; Liu, Meixia

    2013-01-01

    Diffusion tensor imaging is a unique method to visualize white matter fibers three-dimensionally, non-invasively and in vivo, and it is therefore an important tool for observing and researching neural regeneration. Different diffusion tensor imaging-based fiber tracking methods have already been investigated, but faster computation, longer and smoother tracked fibers, and clearer detail are still needed for clinical applications. This study proposed a new fiber tracking strategy based on tri-linear interpolation. We selected a patient with acute infarction of the right basal ganglia and designed experiments based on either the tri-linear interpolation algorithm or the tensorline algorithm. Fiber tracking in the same regions of interest (genu of the corpus callosum) was performed separately. The validity of the tri-linear interpolation algorithm was verified by quantitative analysis, and its feasibility in clinical diagnosis was confirmed by comparing the tracking results against the patient's disease condition and the actual brain anatomy. Statistical results showed that the maximum length and average length of the white matter fibers tracked by the tri-linear interpolation algorithm were significantly longer. The tracking images indicated that this method obtains smoother tracked fibers, more obvious orientation and clearer details. The tracked fiber abnormalities were in good agreement with the actual condition of the patient, and the displayed fibers passed through the corpus callosum, consistent with the anatomical structures of the brain. Therefore, the tri-linear interpolation algorithm can achieve a clear, anatomically correct and reliable tracking result. PMID:25206524
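
    The interpolation step itself is standard and can be sketched generically; this is plain trilinear interpolation of a scalar volume at a fractional voxel position, not the authors' tracking code (in practice the same weights would be applied to each of the six independent tensor components before recomputing eigenvectors).

```python
import numpy as np

def trilinear(field, x, y, z):
    """Trilinear interpolation of a 3D array at fractional voxel (x, y, z).
    Assumes 0 <= x < nx-1 (and likewise for y, z)."""
    i, j, k = int(x), int(y), int(z)            # lower corner of the voxel
    fx, fy, fz = x - i, y - j, z - k            # fractional offsets
    cube = field[i:i + 2, j:j + 2, k:k + 2].astype(float)
    cube = cube[0] * (1 - fx) + cube[1] * fx    # collapse x
    cube = cube[0] * (1 - fy) + cube[1] * fy    # collapse y
    return cube[0] * (1 - fz) + cube[1] * fz    # collapse z
```

    Because the scheme is exact for linear fields, tracked directions vary continuously between voxels, which is what yields the smoother, longer fibers reported above.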

  20. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
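
    A space-time weighted average of this kind can be sketched with a Gaussian weight whose ranges are the 72 h and 510 km windows quoted above; the paper's exact weighting function and implementation details may differ.

```python
import numpy as np

def space_time_average(obs, x0, y0, t0, Ls=510.0, Lt=72.0):
    """Gaussian-weighted space-time average of scattered samples at one
    analysis point (x0, y0, t0). obs columns: x_km, y_km, t_h, value."""
    x, y, t, v = obs.T
    # normalized squared space-time distance to the analysis point
    r2 = ((x - x0) ** 2 + (y - y0) ** 2) / Ls ** 2 + (t - t0) ** 2 / Lt ** 2
    wgt = np.exp(-0.5 * r2)
    return np.sum(wgt * v) / np.sum(wgt)
```

    Choosing Ls and Lt jointly, as the paper does by matching variability across scales, keeps a sample 510 km away as influential as one 72 h away.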

  1. Polymeric microchip for the simultaneous determination of anions and cations by hydrodynamic injection using a dual-channel sequential injection microchip electrophoresis system.

    PubMed

    Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C

    2014-04-01

    A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.

  2. Calculation and interpolation of the characteristics of the hydrodynamic journal bearings in the domain of possible movements of the rotor journals

    NASA Astrophysics Data System (ADS)

    Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.

    2016-10-01

    To visualize the physical processes that occur in the journal bearings of the shafting of power generating turbosets, a technique for preliminary calculation of a set of characteristics of the journal bearings in the domain of possible movements (DPM) of the rotor journals is proposed. The technique is based on interpolation of the oil film characteristics and is designed for use in the real-time diagnostic system COMPACS®. According to this technique, for each journal bearing, the domain of possible movement of the shaft journal is computed, then triangulation of the area is performed, and the corresponding mesh is constructed. At each node of the mesh, all characteristics of the journal bearing required by the diagnostic system are calculated. Via shaft-position sensors, the system measures, in online mode, the instantaneous location of the shaft journal in the bearing and determines the averaged static position of the journals (the pivoting vector). Afterwards, continuous interpolation in the triangulation domain is performed, which allows the real-time calculation of the static and dynamic forces that act on the rotor journal, the flow rate and temperature of the lubricant, and the friction power losses. Use of the proposed method on a running turboset enables diagnosing the technical condition of the shafting support system and promptly identifying the defects that determine the vibrational state and the overall reliability of the turboset. The authors report a number of examples of constructing the DPM and computing the basic static characteristics for elliptical journal bearings typical of large-scale power turbosets. To illustrate the interpolation method, the traditional approach to calculation of bearing properties is applied. This approach is based on a two-dimensional isothermal Reynolds equation that accounts for the mobility of the boundary of oil film continuity.
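
    Interpolation on a triangulated domain reduces, within each triangle, to barycentric (linear) interpolation of the nodal values. The sketch below shows that single-triangle step in Python; it is illustrative only, since the COMPACS system's actual tables and interpolant are not public.

```python
import numpy as np

def barycentric_interpolate(verts, nodal, p):
    """Linearly interpolate a nodal quantity at point p inside one triangle.

    verts : (3, 2) triangle vertex coordinates
    nodal : (3,) values of the characteristic at the vertices
    p     : (2,) query point (e.g., the measured journal position)
    """
    a, b, c = verts
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, np.asarray(p, float) - a)
    l0 = 1.0 - l1 - l2
    # all barycentric coordinates must lie in [0, 1] for an interior point
    assert min(l0, l1, l2) >= -1e-12, "point lies outside this triangle"
    return l0 * nodal[0] + l1 * nodal[1] + l2 * nodal[2]
```

    In a diagnostic loop, one would first locate the triangle containing the measured journal position and then apply this formula to each precomputed characteristic (forces, flow rate, temperature, losses).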

  3. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy, coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using the Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates the heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
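
    The Cloud-in-Cell scheme pairs a deposition step with a matching gather step. The sketch below reduces it to one dimension on a periodic grid with unit cell spacing; Nyx's 3D AMR implementation is considerably more involved.

```python
import numpy as np

def cic_deposit(pos, mass, n):
    """1D Cloud-in-Cell deposition of particle masses onto a periodic grid:
    each particle splits its mass between the two nearest cells."""
    rho = np.zeros(n)
    left = np.floor(pos).astype(int)
    frac = pos - left                       # distance past the left cell
    np.add.at(rho, left % n, mass * (1.0 - frac))
    np.add.at(rho, (left + 1) % n, mass * frac)
    return rho

def cic_gather(field, pos, n):
    """Matching CIC interpolation of a grid field back to the particles;
    using the same weights for both directions keeps the particle-mesh
    force pairing momentum-consistent."""
    left = np.floor(pos).astype(int)
    frac = pos - left
    return field[left % n] * (1.0 - frac) + field[(left + 1) % n] * frac
```

    Deposition conserves total mass exactly, and gathering a linear grid field returns the particle position itself, the two sanity checks usually run first on a particle-mesh kernel.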

  4. Equilibrium Spline Interface (ESI) for magnetic confinement codes

    NASA Astrophysics Data System (ADS)

    Li, Xujing; Zakharov, Leonid E.

    2017-12-01

    A compact and comprehensive interface between magneto-hydrodynamic (MHD) equilibrium codes and gyro-kinetic, particle orbit, MHD stability, and transport codes is presented. Its irreducible set of equilibrium data consists of three functions of coordinates in the 2-D case (occasionally with one extra in the 3-D case) and four 1-D radial profiles, together with their first and mixed derivatives. The C reconstruction routines, accessible also from FORTRAN, allow the calculation of basis functions and their first derivatives at any position inside the plasma and in its vicinity. After this, all vector fields and geometric coefficients required for the above-mentioned types of codes can be calculated using only algebraic operations, with no further interpolation or differentiation.

  5. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: [i] the proper formulation of particle methods at the molecular and continuous level for the discretization of the governing equations, [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation, [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and [iv] the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.

  6. A Southern Ocean variability study using the Argo-based Model for Investigation of the Global Ocean (AMIGO)

    NASA Astrophysics Data System (ADS)

    Lebedev, Konstantin

    2017-04-01

The era of satellite observations of the ocean surface that started at the end of the 20th century, together with the development of the Argo project in the first years of the 21st century, designed to collect information on the upper 2000 m of the ocean and relay it via satellite, provides unique opportunities for continuous monitoring of the Global Ocean state. Starting from 2005, measurements with Argo floats have been performed over the majority of the World Ocean. In November 2007, the Argo program reached the coverage of 3000 simultaneously operating floats (one float per three-degree square) planned during the development of the program. Currently, 4000 Argo floats autonomously profile the upper 2000 m of the ocean water column from Antarctica to Spitsbergen, increasing the World Ocean temperature and salinity databases by 12000 profiles per month. This makes it possible to reconstruct and monitor the ocean state on a near-real-time basis, study the ocean dynamics, obtain reasonable estimates of the climatic state of the ocean over the last decade, and estimate existing intraclimatic trends. We present the newly developed Argo-Based Model for Investigation of the Global Ocean (AMIGO), which consists of a block for variational interpolation of the profiles from drifting Argo floats to a regular grid and a block for model hydrodynamic adjustment of the variationally interpolated fields. This method makes it possible to obtain a full set of oceanographic characteristics - temperature, salinity, density, and current velocity - from irregularly located Argo measurements (the variational interpolation technique minimizes the misfit between the interpolated fields defined on the regular grid and the irregularly distributed data, so the optimal solution passes as close to the data as possible). The simulations were performed for the entire globe, limited in the north by 85.5° N, using 1° grid spacing in both longitude and latitude. 
At depths exceeding 2000 m, where Argo data are lacking, the temperature and salinity data were taken from the WOA-09 database. The temperature and salinity values from the Argo data for the corresponding month (year, season), derived using the variational technique described above and held constant, were specified as the boundary conditions at the ocean surface. The wind stress, held constant over the corresponding month (year, season), was specified from the ECMWF ERA-Interim reanalysis data. The mass, salt, and heat transports over several regions of the Antarctic Circumpolar Current (ACC) and at its northern boundary (35° S) were calculated, and the seasonal and intra-decadal variations of the transports were studied. The calculations cover the 12-year period from 2005 to 2016. The AMIGO database is freely available on the Internet at http://argo.ocean.ru/. The results are presented as monthly, seasonal, and annual data and climatological mean fields. The spatial resolution of the data is one degree in latitude and longitude, and the temporal resolution is one month. The work was supported by the Russian Science Foundation (project 16-17-10149).

  7. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
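The fact that n + 1 points determine a unique degree-n polynomial can be illustrated with a short NumPy sketch (an illustrative example, not from the article):

```python
import numpy as np

# Three noncollinear points determine a unique quadratic: fit a degree-2
# polynomial through three points and verify it reproduces them exactly.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 3.0])
coeffs = np.polyfit(x, y, deg=len(x) - 1)   # degree n for n + 1 points
p = np.poly1d(coeffs)                       # here p(x) = 2x^2 - 3x + 1
```

If the points happened to be collinear, `polyfit` would still return a degree-2 fit, but with a leading coefficient of zero, matching the article's caveat about points falling onto a lower-degree polynomial.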

  8. Computation and analysis of the transverse current autocorrelation function, Ct(k,t), for small wave vectors: A molecular-dynamics study for a Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    Vogelsang, R.; Hoheisel, C.

    1987-02-01

Molecular-dynamics (MD) calculations are reported for three thermodynamic states of a Lennard-Jones fluid. Systems of 2048 particles and 10^5 integration steps were used. The transverse current autocorrelation function, Ct(k,t), has been determined for wave vectors in the range 0.5 < ||k||σ < 1.5. Ct(k,t) was fitted by hydrodynamic-type functions. The fits returned k-dependent decay times and shear viscosities which showed a systematic behavior as a function of k. Extrapolation to the hydrodynamic region at k=0 gave shear viscosity coefficients in good agreement with direct Green-Kubo results obtained in previous work. The two-exponential model fit for the memory function proposed by other authors does not provide a reasonable description of the MD results, as the fit parameters show no systematic wave-vector dependence, although the Ct(k,t) functions are somewhat better fitted. Similarly, the semiempirical interpolation formula for the decay time based on the viscoelastic concept proposed by Akcasu and Daniels fails to reproduce the correct k dependence for the wavelength range investigated herein.

  9. Assimilation of river altimetry data for effective bed elevation and roughness coefficient

    NASA Astrophysics Data System (ADS)

    Brêda, João Paulo L. F.; Paiva, Rodrigo C. D.; Bravo, Juan Martin; Passaia, Otávio

    2017-04-01

Hydrodynamic models of large rivers are important tools for predicting river discharge, water level, and floods. However, these models still carry considerable errors, partly related to uncertainties in parameters such as river bathymetry and the roughness coefficient. Data from recent spatial altimetry missions offer an opportunity to reduce parameter uncertainty through inverse methods. This study aims to develop and assess different methods of altimetry data assimilation to improve estimates of river bottom levels and Manning roughness in a 1-D hydrodynamic model. The case study was a 1,100 km reach of the Madeira River, a tributary of the Amazon. The tested assimilation methods are direct insertion, linear interpolation, the SCE-UA global optimization algorithm, and a Kalman Filter adaptation. The Kalman Filter method is built on new physically based covariance functions derived from the steady-flow and backwater equations. The benefits of altimetry missions with different spatio-temporal resolutions, such as ICESAT-1, Envisat, and Jason-2, are assessed. Level time series from 5 gauging stations and 5 GPS river height profiles are used to assess and validate the assimilation methods. Finally, the potential of future missions such as the ICESAT-2 and SWOT satellites is discussed.

  10. Quantifying Thin Mat Floating Marsh Strength and Interaction with Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Collins, J. H., III; Sasser, C.; Willson, C. S.

    2016-12-01

    Louisiana possesses over 350,000 acres of unique floating vegetated systems known as floating marshes or flotants. Floating marshes make up 70% of the Terrebonne and Barataria basin wetlands and exist in several forms, mainly thick mat or thin mat. Salt-water intrusion, nutria grazing, and high-energy wave events are believed to be some contributing factors to the degradation of floating marshes; however, there has been little investigation into the hydrodynamic effects on their structural integrity. Due to their unique nature, floating marshes could be susceptible to changes in the hydrodynamic environment that may result from proposed river freshwater and sediment diversion projects introducing flow to areas that are typically somewhat isolated. This study aims to improve the understanding of how thin mat floating marshes respond to increased hydrodynamic stresses and, more specifically, how higher water velocities might increase the washout probability of this vegetation type. There are two major components of this research: 1) A thorough measurement of the material properties of the vegetative mats as a root-soil matrix composite material; and 2) An accurate numerical simulation of the hydrodynamics and forces imposed on the floating marsh mats by the flow. To achieve these goals, laboratory and field experiments were conducted using a customized device to measure the bulk properties of typical floating marshes. Additionally, Delft-3D FLOW and ANSYS FLUENT were used to simulate the flow around a series of simplified mat structures in order to estimate the hydrodynamic forcings on the mats. The hydrodynamic forcings are coupled with a material analysis, allowing for a thorough analysis of their interaction under various conditions. The 2-way Fluid Structure Interaction (F.S.I.) between the flow and the mat is achieved by coupling a Finite Element Analysis (F.E.A.) solver in ANSYS with FLUENT. 
The flow conditions necessary for the structural failure of the floating marshes are determined for a multitude of mat shapes and sizes, leading to a quantifiable critical velocity required for washout. Ultimately, through dimensional analysis, an equation for washout potential will be developed from the results, which could be used as a design guideline.

  11. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors relative to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite-image-to-irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
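The fusion step can be sketched as a toy 1-D optimal interpolation: a smooth satellite background field is corrected toward two accurate point measurements via the classic gain K = BHᵀ(HBHᵀ + R)⁻¹. All grid values and covariance parameters below are illustrative assumptions (the operational routine spreads information using cloud location and thickness, not a fixed Gaussian):

```python
import numpy as np

# Background: a flat "satellite" irradiance estimate on an 11-point transect.
grid = np.linspace(0.0, 10.0, 11)
background = np.full(grid.size, 500.0)        # W/m^2, assumed value
obs_locs = np.array([3.0, 7.0])               # point-sensor locations
obs_vals = np.array([450.0, 520.0])           # accurate ground measurements

# Assumed Gaussian background-error covariance (length L) and sensor noise R.
L, sigma_b2, sigma_r2 = 2.0, 100.0, 1.0
B_xo = sigma_b2 * np.exp(-(grid[:, None] - obs_locs[None, :])**2 / (2 * L**2))
B_oo = sigma_b2 * np.exp(-(obs_locs[:, None] - obs_locs[None, :])**2 / (2 * L**2))

K = B_xo @ np.linalg.inv(B_oo + sigma_r2 * np.eye(obs_locs.size))  # gain
h_bg = np.interp(obs_locs, grid, background)   # background at obs sites
analysis = background + K @ (obs_vals - h_bg)  # corrected field
```

Near each sensor the analysis is pulled close to the measurement, while far from both sensors it relaxes back to the satellite background, which is exactly the behaviour the routine exploits.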

  12. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.

    2009-01-01

A unique, digital time pick-off method, known as sample-interpolation timing (SIT) is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
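The idea of recovering sub-sample timing by interpolating between digitized samples can be sketched as follows; this is a hypothetical leading-edge example with linear interpolation, not the SIT algorithm itself:

```python
import numpy as np

def crossing_time(samples, dt, threshold):
    """Leading-edge time pick-off with sub-sample resolution: find the first
    pair of samples straddling the threshold and linearly interpolate the
    crossing time between them."""
    for i in range(len(samples) - 1):
        if samples[i] < threshold <= samples[i + 1]:
            frac = (threshold - samples[i]) / (samples[i + 1] - samples[i])
            return (i + frac) * dt
    return None  # threshold never crossed

# A hypothetical rising pulse sampled at 1 ns intervals:
pulse = np.array([0.0, 0.0, 0.2, 0.6, 1.0, 1.0])
t = crossing_time(pulse, dt=1.0, threshold=0.5)   # crossing at 2.75 ns
```

Without interpolation the pick-off would be quantized to whole sample periods; interpolating between the straddling samples is what recovers timing resolution finer than the sampling interval.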

  13. Toroidal plasmoid generation via extreme hydrodynamic shear

    PubMed Central

    Gharib, Morteza; Mendoza, Sean; Rosenfeld, Moshe; Beizai, Masoud

    2017-01-01

Saint Elmo’s fire and lightning are two known forms of naturally occurring atmospheric pressure plasmas. As a technology, nonthermal plasmas are induced from artificially created electromagnetic or electrostatic fields. Here we report the observation of arguably a unique case of such a naturally formed plasma, created in air at room temperature without external electromagnetic action, by impinging a high-speed microjet of deionized water on a dielectric solid surface. We demonstrate that tribo-electrification from extreme and focused hydrodynamic shear is the driving mechanism for the generation of energetic free electrons. Air ionization results in a plasma that, unlike the general family, is topologically well defined in the form of a coherent toroidal structure. Possibly confined through its self-induced electromagnetic field, this plasmoid is shown to emit strong luminescence and discrete-frequency radio waves. Our experimental study suggests the discovery of a unique platform to support experimentation in low-temperature plasma science. PMID:29146825

  14. A 3D Optimal Interpolation Assimilation Scheme of HF Radar Current Data into a Numerical Ocean Model

    NASA Astrophysics Data System (ADS)

    Ragnoli, Emanuele; Zhuk, Sergiy; Donncha, Fearghal O.; Suits, Frank; Hartnett, Michael

    2013-04-01

In this work, a technique for 3D assimilation of ocean surface current measurements from High Frequency Radar (HFR) systems into a numerical ocean model is presented. The technique combines supplementary forcing at the surface with an Ekman-layer projection of the correction at depth. Optimal interpolation through the BLUE (Best Linear Unbiased Estimator) of the model-predicted velocity and the HFR observations is computed in order to derive a supplementary forcing applied at the surface boundary. At depth, the assimilation is propagated using an additional Ekman pumping (vertical velocity) based on the BLUE correction. An HFR data assimilation system for hydrodynamic modelling of Galway Bay in Ireland is developed; it demonstrates the viability of adopting data assimilation techniques to improve the performance of numerical models in regions characterized by significant wind-driven flows. A network of CODAR SeaSonde high frequency radars deployed within Galway Bay, on the west coast of Ireland, provides the flow measurements used in this study. This system provides real-time synoptic measurements of both ocean surface currents and ocean surface waves in regions of the bay where radials from two or more radars intersect. Radar systems have a number of unique advantages in ocean modelling data assimilation schemes, namely the ability to provide two-dimensional mapping of surface currents at resolutions that capture the complex structure related to coastal topography and the intrinsic instability scales of coastal circulation, at relatively low cost. The radar system used in this study operates at a frequency of 25 MHz, which provides a sampling range of 25 km at a spatial resolution of 300 m. A detailed dataset of HFR-observed velocities is collected at 60-minute intervals for a period chosen for comparison because of frequent occurrences of highly energetic, storm-force events. 
In conjunction with this, a comprehensive weather station, tide gauge and river monitoring program is conducted. The data are then used to maintain density fields within the model and to force the wind direction and magnitude on flows. The Data Assimilation scheme is then assessed and validated via HFR surface flow measurements.

  15. Integration of remote sensing technique and hydrologic model for monitoring tidal flat dynamics of Juiduansha in Shanghai

    NASA Astrophysics Data System (ADS)

    Zheng, Zongsheng; Zhou, Yunxuan; Jiang, Xuezhong

    2007-06-01

Ground surveys are restricted by the difficulty of accessing wide-ranging, dynamic salt marshes. The waterline method and a hydrodynamic model were investigated to construct a Digital Elevation Model (DEM) of the Jiuduansha Shoals. A series of waterlines was extracted from multi-temporal remote sensing images collected over the period 2000-2004. The assignment of an elevation to each waterline at the time of satellite overpass was performed according to the hydrodynamic model. The corrected waterlines, labeled with elevations, were used to construct Triangulated Irregular Networks (TINs). An interpolation for each grid elevation was then performed in accordance with the associated triangle. This initial DEM, produced using the corrected waterline set, was then used to refine the topography in the intertidal zone, and the model was re-run to produce improved water levels and a new DEM. This procedure was iterated, comparing modeled and actual waterlines, until no further improvement occurred. Three DEMs for different intervals were built by this approach and were compared to evaluate the effect of the Deep Water Channel Project (DWCP) to the north of Jiuduansha Island. The waterline method, combined with a numerical model, is an effective tool for constructing digital elevation models of mudflats. The results can provide invaluable information for coastal land use and engineering construction.
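The per-triangle grid interpolation step from a TIN can be sketched with barycentric weights (an illustrative stand-alone function, not the authors' code):

```python
def tin_interpolate(p, tri, z):
    """Interpolate an elevation at point p inside one TIN triangle using
    barycentric coordinates; tri holds three (x, y) vertices and z their
    elevations. The result varies linearly over the triangle."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z[0] + w2 * z[1] + w3 * z[2]

# Elevation at the centroid of a triangle with vertex elevations 0, 1, 2 m:
z_mid = tin_interpolate((1/3, 1/3), [(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 2.0])
```

At the centroid all three weights equal 1/3, so the interpolated elevation is simply the mean of the vertex elevations; at a vertex the interpolation reproduces that vertex's elevation exactly.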

  16. Aerodynamic and hydrodynamic model tests of the Enserch Garden Banks floating production facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, E.W.; Bauer, T.C.; Kelly, P.J.

    1995-12-01

This paper presents the results of aerodynamic and hydrodynamic model tests of the Enserch Garden Banks facility, a semisubmersible Floating Production Facility (FPF) moored in 2,190 ft of water. During the wind tunnel tests, the steady components of wind and current forces/moments about various skew and heel axes were measured. The results were compared and calibrated against analytical calculations using techniques recommended by ABS and API. During the wave basin tests, the mooring line tensions and vessel motions, including the effects of dynamic wind and current, were measured. Analytical calculations of the airgap, vessel motions, and mooring line loads were compared with the wave basin model test results. This paper discusses the test objectives, setups, and agendas for wind tunnel and wave basin testing of a deepwater permanently moored floating production system. The experience from these tests and the comparison of measured test results with analytical calculations will be of value to designers and operators contemplating the use of a semisubmersible-based floating production system. The analysis procedures are aimed at estimating (1) vessel motions, (2) airgap, and (3) mooring line tensions with reasonable accuracy. Finally, this paper demonstrates how the model test results were interpolated and adapted in the design loop.

  17. Influence of the Aral Sea negative water balance on its seasonal circulation and ventilation patterns: use of a 3d hydrodynamic model.

    NASA Astrophysics Data System (ADS)

    Sirjacobs, D.; Grégoire, M.; Delhez, E.; Nihoul, J.

    2003-04-01

Within the context of the EU INCO-COPERNICUS program "Desertification in the Aral Sea Region: A Study of the Natural and Anthropogenic Impacts" (Contract IAC2-CT-2000-10023), a large-scale 3D hydrodynamic model was adapted to address specifically the macroscale processes affecting the Aral Sea water circulation and ventilation. The particular goal of this research is to simulate the effect of a lasting negative water balance on the 3D seasonal circulation, temperature, salinity, and water-mixing fields of the Aral Sea. The original seasonal hydrodynamics of the Aral Sea is simulated with the average seasonal forcings for the period from 1956 to 1960. This first investigation concerns a period of relative stability of the water balance, before the beginning of the drying process. The consequences of the drying process for the hydrodynamics of the sea will be studied by comparing these first results with a simulation representing the average situation for the years 1981 to 1985, a period of very low river flow. For both simulation periods, the forcings considered are the seasonal fluctuations of wind fields, precipitation, evaporation, river discharge and salinity, cloud cover, air temperature, and humidity. The meteorological forcings were adapted to the common optimum one-month temporal resolution of the available data sets. Monthly mean kinetic energy flux and surface wind stresses were calculated from daily ECMWF wind data. Monthly in situ precipitation, surface air temperature, and humidity fields were interpolated from data obtained from the Russian Hydrological and Meteorological Institute. Monthly water discharge and average salinity of the river water were considered for both the Amu Darya and Syr Darya rivers over each simulation period. The water mass conservation routines allow the simulation of a changing coastline by taking into account local drying and flooding events at particular grid points. 
Preliminary barotropic runs were realised (for the 1951-1960 situation, before the drying began) in order to gain initial experience with the behaviour of the hydrodynamic model. These first runs provide results on the evolution of the following state variables: elevation of the sea surface, 3D fields of vertical and horizontal flows, 2D fields of average horizontal flows, and 3D fields of turbulent kinetic energy. The mean seasonal salinity and temperature fields (in situ data gathered by the Russian Hydrological and Meteorological Institute) are available for the two simulated periods and will allow a first validation of the hydrodynamic model. Various satellite products were identified, collected, and processed within the framework of this research project and will be used for validation of the model outputs. Seasonal level change measurements derived from water table changes will serve for water balance validation, and sea surface temperature for hydrodynamic validation.

  18. Globally aligned states and hydrodynamic traffic jams in confined suspensions of active asymmetric particles.

    PubMed

    Lefauve, Adrien; Saintillan, David

    2014-02-01

    Strongly confined active liquids are subject to unique hydrodynamic interactions due to momentum screening and lubricated friction by the confining walls. Using numerical simulations, we demonstrate that two-dimensional dilute suspensions of fore-aft asymmetric polar swimmers in a Hele-Shaw geometry can exhibit a rich variety of novel phase behaviors depending on particle shape, including coherent polarized density waves with global alignment, persistent counterrotating vortices, density shocks and rarefaction waves. We also explain these phenomena using a linear stability analysis and a nonlinear traffic flow model, both derived from a mean-field kinetic theory.

  19. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov’s method on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in its interpolation and time-advancement schemes, as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and the time-advancement scheme is more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves one way and a sound wave travels the other way (such that relative to the grid the wave is not moving), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes which reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from a resolution-independent error due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates the incorporation of new algorithms and physical processes.

  20. Sensitivity analysis of a data assimilation technique for hindcasting and forecasting hydrodynamics of a complex coastal water body

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Hartnett, Michael

    2017-02-01

Accurate forecasting of coastal surface currents has been of great economic importance over the past twenty years owing to marine activities such as marine renewable energy and fish farms in coastal regions. Advanced oceanographic observation systems such as satellites and radars can provide many parameters of interest, such as surface currents and waves, at fine spatial resolution in near real time. To enhance modelling capability, data assimilation (DA) techniques, which combine the available measurements with hydrodynamic models, have been used in oceanography since the 1990s. Assimilating measurements into hydrodynamic models draws the model background states toward the observation trajectory, which then provides more accurate forecasting information. Galway Bay is an open, wind-dominated water body on which two coastal radars are deployed. An efficient and easy-to-implement sequential DA algorithm named Optimal Interpolation (OI) was used to blend radar surface current data into a three-dimensional Environmental Fluid Dynamics Code (EFDC) model. Two empirical parameters, the horizontal correlation length and the DA cycle length (CL), are inherent to OI. No guidance has previously been published regarding the selection of appropriate values of these parameters or how sensitive OI DA is to variations in their values. A detailed sensitivity analysis has been performed on both of these parameters and the results are presented. The appropriate value of the DA CL was determined by minimizing the Root-Mean-Square-Error (RMSE) between radar data and model background states. Analysis was performed to evaluate the assimilation index (AI) of the OI DA algorithm in the model. The AI of the half-day forecast mean vector directions was over 50% in the best assimilation model. The ability of OI to improve model forecasts was also assessed and is reported upon.

  1. Channel-shoal morphodynamics in response to distinct hydrodynamic drivers at the outer Weser estuary

    NASA Astrophysics Data System (ADS)

    Herrling, Gerald; Benninghoff, Markus; Zorndt, Anna; Winter, Christian

    2017-04-01

The interaction of tidal, wave, and wind forces primarily governs the morphodynamics of intertidal channel-shoal systems. Typical morphological changes comprise tidal channel meandering and/or migration with related shoal erosion or accretion. These intertidal flat systems are likely to respond to accelerated sea level rise and to potential changes in storm frequency and direction. The aim of the ongoing research project is an evaluation of outer estuarine channel-shoal dynamics that combines the analysis of morphological monitoring data with high-resolution morphodynamic modelling. A focus is set on their evolution in reaction to different hydrodynamic forcings such as tides, wind-driven currents, waves under fair-weather and high-energy conditions, and variable upstream discharges. The Outer Weser region was chosen as an example, with a tidal channel system serving as a reference site: the availability of almost annual bathymetric observations of an approx. 10 km long tidal channel (Fedderwarder Priel, FWP) and its morphological development largely independent of maintenance dredging of the main Weser navigation channel make this tributary an ideal study area. The numerical modelling system Delft3D (Deltares) is applied to run real-time annual scenario simulations aiming to evaluate and differentiate the morphological responses to distinct hydrodynamic drivers. A comprehensive morphological analysis of the available observations at the FWP showed that channel migration trends and directions are persistent at particular channel bends and meanders over the considered period of 14 years. Migration trends and directions are well reproduced by one-year model simulations. Morphodynamic modelling is applied to interpolate between observations and to relate sediment dynamics to different forcing scenarios in the outer Weser estuary as a whole and at the scale of local tributary channels and flats.

  2. A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    McNally, Colin P.

    2012-08-01

This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems is presented, including linear MHD waves with amplitudes of one part in a million, magnetized shock tubes, and Kelvin-Helmholtz instabilities. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for the magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on the two-dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPMHD, and Phurbas. A strict comparison, judgment, or ranking between codes is beyond the scope of this work, although this work provides the mathematical framework needed for such a study. Nonetheless, the way the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence-free constraint on the magnetic field has been difficult. We observe that for point-based discretizations, as used in finite-difference and pseudo-spectral methods, the divergence-free constraint can be satisfied entirely by the choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference-type derivative operators on a regular grid which has the divergence-free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated here are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one-dimensional models including non-ideal MHD and radiative transfer. For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 K to 2000 K in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
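The Moving Least Squares interpolation at the heart of the method can be sketched in 1-D with a quadratic basis (a simplified illustration of the idea; the thesis uses local third-order fits in 3-D):

```python
import numpy as np

def mls_interpolate(x0, x_pts, f_pts, h=1.0, degree=2):
    """Moving-least-squares value at x0: fit a local polynomial to the
    neighbouring samples by weighted least squares (Gaussian weights of
    width h centred on x0) and evaluate it at x0, i.e. return the constant
    term of the fit in local coordinates."""
    sw = np.exp(-0.5 * ((x_pts - x0) / h) ** 2)  # sqrt of Gaussian weight
    V = np.vander(x_pts - x0, degree + 1)        # polynomial basis, local coords
    coeffs, *_ = np.linalg.lstsq(V * sw[:, None], f_pts * sw, rcond=None)
    return coeffs[-1]

# MLS with a quadratic basis reproduces a quadratic field exactly:
x_pts = np.linspace(-2.0, 2.0, 9)
value = mls_interpolate(0.5, x_pts, x_pts**2)    # exact answer is 0.25
```

This polynomial-reproduction property is precisely the zeroth-order (and higher) consistency that standard SPH kernel interpolation lacks, which is the point made above about the Kelvin-Helmholtz comparison.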

  3. Ocean Tides. Part 2. A Hydrodynamical Interpolation Model

    DTIC Science & Technology

    1980-01-01


  4. Introduction to Geostatistics

    NASA Astrophysics Data System (ADS)

    Kitanidis, P. K.

    1997-05-01

    Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.

  5. Extracting Hydrologic Understanding from the Unique Space-time Sampling of the Surface Water and Ocean Topography (SWOT) Mission

    NASA Astrophysics Data System (ADS)

    Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.

    2017-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or perhaps even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to those of continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
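    The comparison between statistics of an irregularly sampled discharge record and the continuous daily record can be illustrated with a toy example; the synthetic lognormal series and the random revisit pattern below are invented stand-ins, not SWOT orbit sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 3-year "daily" discharge record (lognormal, loosely river-like)
days = 3 * 365
daily_q = np.exp(rng.normal(loc=3.0, scale=0.8, size=days))

# Irregular sampling: the reach is only observed on a subset of days
sample_idx = np.sort(rng.choice(days, size=days // 4, replace=False))
sampled_q = daily_q[sample_idx]

def flow_duration(q, exceedance_pct):
    """Discharge exceeded `exceedance_pct` percent of the time."""
    return np.percentile(q, 100.0 - exceedance_pct)

q90_daily = flow_duration(daily_q, 90.0)      # low-flow statistic, full record
q90_sampled = flow_duration(sampled_q, 90.0)  # same statistic, sparse record
rel_err = abs(q90_sampled - q90_daily) / q90_daily
```

    Relationships like those described in the abstract would then map the sparse-record statistic back toward its continuous-record equivalent.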

  6. Smoothed Particle Hydrodynamics Simulations of Ultrarelativistic Shocks with Artificial Viscosity

    NASA Astrophysics Data System (ADS)

    Siegler, S.; Riffert, H.

    2000-03-01

    We present a fully Lagrangian conservation form of the general relativistic hydrodynamic equations for perfect fluids with artificial viscosity in a given arbitrary background spacetime. This conservation formulation is achieved by choosing suitable Lagrangian time evolution variables, from which the generic fluid variables of rest-mass density, 3-velocity, and thermodynamic pressure have to be determined. We present the corresponding equations for an ideal gas and show the existence and uniqueness of the solution. On the basis of the Lagrangian formulation we have developed a three-dimensional general relativistic smoothed particle hydrodynamics (SPH) code using the standard SPH formalism as known from nonrelativistic fluid dynamics. One-dimensional simulations of a shock tube and a wall shock are presented together with a two-dimensional test calculation of an inclined shock tube. With our method we can model ultrarelativistic fluid flows including shocks with Lorentz factors as high as 1000.
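    The standard SPH formalism referred to above estimates any field as a kernel-weighted sum over particles. Below is a minimal nonrelativistic 1D sketch of the density estimate using the common cubic spline kernel (an assumed choice; the paper's relativistic formulation builds its Lagrangian evolution variables on top of this kind of interpolation):

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1D cubic spline SPH kernel, normalized so its integral is 1."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w * 2.0 / (3.0 * h)

# Equal-mass particles on a unit line sampling a uniform density rho = 1
n, h = 200, 0.06
x = np.linspace(0.0, 1.0, n)
m = 1.0 / n  # total mass 1 over unit length

# SPH estimate rho(x) = sum_j m_j W(x - x_j, h), at interior points
x_eval = np.linspace(0.3, 0.7, 5)
rho = np.array([np.sum(m * w_cubic(xe - x, h)) for xe in x_eval])
```

    The interior estimates sit close to the true density of 1; the kernel sum is what the record 13 discussion calls SPH's zeroth-order-inconsistent interpolation.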

  7. Construction of hydrodynamic bead models from high-resolution X-ray crystallographic or nuclear magnetic resonance data.

    PubMed Central

    Byron, O

    1997-01-01

    Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented as an assembly of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximal possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high-resolution X-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening the convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627

  8. A fast simulation method for radiation maps using interpolation in a virtual environment.

    PubMed

    Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun

    2018-05-10

    In nuclear decommissioning, virtual simulation technology is a useful tool for achieving an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for assessing the exposure of a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map, including the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
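    Evaluating a dose estimate at an arbitrary position by interpolating a precomputed radiation map can be sketched with plain bilinear interpolation on a regular grid; the grid size, source position, and fall-off law below are hypothetical, not the paper's UNITY/MATLAB pipeline:

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate grid[i, j] sampled at integer coordinates."""
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    i1 = min(i0 + 1, grid.shape[0] - 1)
    j1 = min(j0 + 1, grid.shape[1] - 1)
    fx, fy = x - i0, y - j0
    a = (1 - fx) * grid[i0, j0] + fx * grid[i1, j0]
    b = (1 - fx) * grid[i0, j1] + fx * grid[i1, j1]
    return (1 - fy) * a + fy * b

# Coarse "radiation map": dose rate falling off with distance from a
# source placed at grid coordinates (2, 3)
ii, jj = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
dose = 100.0 / (1.0 + (ii - 2.0) ** 2 + (jj - 3.0) ** 2)

# Dose at an off-grid point, e.g. a virtual human's current position
d = bilinear(dose, 2.5, 3.5)
```

    The speed advantage comes from evaluating only four array lookups per query instead of re-running a transport calculation.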

  9. A hydrodynamic microchip for formation of continuous cell chains

    NASA Astrophysics Data System (ADS)

    Khoshmanesh, Khashayar; Zhang, Wei; Tang, Shi-Yang; Nasabi, Mahyar; Soffe, Rebecca; Tovar-Lopez, Francisco J.; Rajadas, Jayakumar; Mitchell, Arnan

    2014-05-01

    Here, we demonstrate the unique features of a hydrodynamics-based microchip for creating continuous chains of model yeast cells. The system consists of a disk-shaped microfluidic structure, containing narrow orifices that connect the main channel to an array of spoke channels. Negative pressure provided by a syringe pump draws fluid from the main channel through the narrow orifices. After the cleaning process, a thin layer of water is left between the glass substrate and the polydimethylsiloxane microchip, enabling leakage beneath the channel walls. A mechanical clamp is used to adjust the operation of the microchip. Relaxing the clamp allows leakage of liquid beneath the walls in a controllable fashion, leading to the formation of a long cell chain evenly distributed along the channel wall. The unique features of the microchip are demonstrated by creating long chains of yeast cells and model 15 μm polystyrene particles along the side wall and analysing the hydrogen peroxide-induced death of the patterned cells.

  10. Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bouton, E.; Vidal, P.

    2007-12-01

    Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.

  11. Hydraulic and geomorphic monitoring of experimental bridge scour mitigation at selected bridges in Utah, 2003-05

    USGS Publications Warehouse

    Kenney, Terry A.; McKinney, Tim S.

    2006-01-01

    Unique bridge scour mitigation designs using concrete A-Jacks were developed by the Utah Department of Transportation and installed at the Colorado River Bridge at State Road 191 and the Green River Bridge at State Road 19. The U.S. Geological Survey monitored stream reaches at these sites by collecting streambed-topography and water-velocity data from 2003 through 2005. These data were acquired annually from a moving boat with an acoustic Doppler current profiler and a differential global positioning system. Raw unordered data were processed and readied for interpolation into organized datasets with DopplerMacros, a set of computer programs. Processed streambed-topography data were geostatistically interpolated by using ordinary kriging, and inverse-distance-weighting interpolation was used in the development of the two-dimensional velocity datasets. These organized datasets of topography and velocity were developed for each survey of the two bridge sites. The riverbed-topography data from the successive surveys were then compared. An increase in bed elevation related to the installation of the A-Jacks scour countermeasures is evident at the Colorado River Bridge at State Road 191. The three topographic datasets acquired after the installation at the Green River Bridge at State Road 19 show few changes.
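    Inverse distance weighting of scattered soundings onto a target point, as used here for the velocity datasets, reduces to a few lines. This is a generic IDW sketch (power-2 weights assumed), not the DopplerMacros code:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_target, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights ~ 1/d^power; exact at data points."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < eps):          # target coincides with an observation
        return z_obs[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_obs) / np.sum(w)

# Scattered bed elevations (x, y) -> z around a target grid node at the origin
xy = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
z = np.array([10.0, 12.0, 11.0, 11.0])
z0 = idw(xy, z, np.array([0.0, 0.0]))
```

    Nearby points dominate the weighted average, which is why IDW behaves well on the dense boat-track data described above.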

  12. Galactic evolution of oxygen. OH lines in 3D hydrodynamical model atmospheres

    NASA Astrophysics Data System (ADS)

    González Hernández, J. I.; Bonifacio, P.; Ludwig, H.-G.; Caffau, E.; Behara, N. T.; Freytag, B.

    2010-09-01

    Context. Oxygen is the third most common element in the Universe. The measurement of oxygen lines in metal-poor unevolved stars, in particular near-UV OH lines, can provide invaluable information about the properties of the Early Galaxy. Aims: Near-UV OH lines constitute an important tool to derive oxygen abundances in metal-poor dwarf stars. Therefore, it is important to correctly model the line formation of OH lines, especially in metal-poor stars, where 3D hydrodynamical models commonly predict cooler temperatures than plane-parallel hydrostatic models in the upper photosphere. Methods: We have made use of a grid of 52 3D hydrodynamical model atmospheres for dwarf stars computed with the code CO5BOLD, extracted from the more extended CIFIST grid. The 52 models cover the effective temperature range 5000-6500 K, the surface gravity range 3.5-4.5, and the metallicity range -3 < [Fe/H] < 0. Results: We determine 3D-LTE abundance corrections in all 52 3D models for several OH lines and Fe I lines of different excitation potentials. These 3D-LTE corrections are generally negative and reach values of roughly -1 dex (for the OH 3167 line, with an excitation potential of approximately 1 eV) at the higher temperatures and surface gravities. Conclusions: We apply these 3D-LTE corrections to the individual O abundances derived from OH lines for a sample of metal-poor dwarf stars reported in Israelian et al. (1998, ApJ, 507, 805), Israelian et al. (2001, ApJ, 551, 833) and Boesgaard et al. (1999, AJ, 117, 492), by interpolating the stellar parameters of the dwarfs in the grid of 3D-LTE corrections. The new 3D-LTE [O/Fe] ratio retains a trend similar to the 1D-LTE one, i.e., increasing towards lower [Fe/H] values. We applied 1D-NLTE corrections to 3D Fe I abundances and still see an increasing [O/Fe] ratio towards lower metallicities. However, the Galactic [O/Fe] ratio must be revisited once 3D-NLTE corrections become available for OH and Fe lines for a grid of 3D hydrodynamical model atmospheres.

  13. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments in image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations show a certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.
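    A symmetrical cubic kernel of the kind the paper compares can be written down directly. The sketch below uses the classic Keys cubic-convolution kernel (a = -0.5), chosen here as a representative example rather than taken from the paper, to resample a 1D signal at a fractional position:

```python
import numpy as np

def keys_cubic(t, a=-0.5):
    """Keys cubic convolution kernel: symmetric, support |t| < 2."""
    t = np.abs(t)
    return np.where(t < 1.0, (a + 2) * t**3 - (a + 3) * t**2 + 1.0,
           np.where(t < 2.0, a * (t**3 - 5 * t**2 + 8 * t - 4), 0.0))

def resample(signal, x):
    """Interpolate `signal` (samples at integer positions) at fractional x."""
    i = int(np.floor(x))
    acc = 0.0
    for k in range(i - 1, i + 3):   # the four nearest samples
        if 0 <= k < len(signal):
            acc += signal[k] * keys_cubic(x - k)
    return acc

sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0])  # samples of f(k) = k^2
y = resample(sig, 2.5)                      # reproduces 2.5**2 exactly
```

    This kernel reproduces quadratics exactly, which is the accuracy property that makes symmetrical cubic kernels attractive despite their cost.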

  14. Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model

    NASA Astrophysics Data System (ADS)

    Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.

    2010-12-01

    A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.
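    The state-augmentation idea, in which bathymetry is corrected because it covaries with predicted observations of waves and currents, reduces to an ensemble Kalman update. Below is a minimal linear-Gaussian sketch with invented numbers (one gauge, three bathymetry nodes), not the authors' nearshore system:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens = 500

# Prior ensemble of bathymetry h at 3 grid nodes, with a specified prior spread
h_true = np.array([2.0, 3.0, 2.5])
h_ens = h_true + 0.5 * rng.standard_normal((n_ens, 3))

# Toy forward model: one wave-height gauge responding linearly to the depth
# at the middle node (a stand-in for the nonlinear wave/current model)
def forward(h):
    return 1.0 + 0.4 * h[..., 1]

y_ens = forward(h_ens)     # ensemble of predicted observations
y_obs = forward(h_true)    # the measurement (noise-free in this toy)
r_obs = 0.05 ** 2          # assumed observation-error variance

# State augmentation: the ensemble cross-covariance between bathymetry and
# the predicted observation gives the Kalman gain that updates bathymetry
cov_hy = np.mean((h_ens - h_ens.mean(0)) * (y_ens - y_ens.mean())[:, None],
                 axis=0)
gain = cov_hy / (np.var(y_ens) + r_obs)
h_post = h_ens.mean(0) + gain * (y_obs - y_ens.mean())  # posterior mean
```

    Only the node that actually influences the observation receives a large correction, which is how hydrodynamic data can update bathymetry without surveying it.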

  15. Stellar models with calibrated convection and temperature stratification from 3D hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Mosumgaard, Jakob Rørsted; Ball, Warrick H.; Aguirre, Víctor Silva; Weiss, Achim; Christensen-Dalsgaard, Jørgen

    2018-06-01

    Stellar evolution codes play a major role in present-day astrophysics, yet they share common simplifications related to the outer layers of stars. We seek to improve on this by using results from realistic and highly detailed 3D hydrodynamics simulations of stellar convection. We implement a temperature stratification extracted directly from the 3D simulations into two stellar evolution codes to replace the simplified atmosphere normally used. Our implementation also contains a non-constant mixing-length parameter, which varies as a function of the stellar surface gravity and temperature - also derived from the 3D simulations. We give a detailed account of our fully consistent implementation, compare it to earlier works, and provide a freely available MESA module. The evolution of low-mass stars with different masses is investigated, and we present for the first time an asteroseismic analysis of a standard solar model utilising calibrated convection and temperature stratification from 3D simulations. We show that the inclusion of 3D results has an almost insignificant impact on the evolution and structure of stellar models - the largest effects are changes in effective temperature of order 30 K, seen on the pre-main sequence and the red-giant branch. However, this work provides the first step towards producing self-consistent evolutionary calculations using fully incorporated 3D atmospheres from on-the-fly interpolation in grids of simulations.

  16. Method based on the Laplace equations to reconstruct the river terrain for two-dimensional hydrodynamic numerical modeling

    NASA Astrophysics Data System (ADS)

    Lai, Ruixun; Wang, Min; Yang, Ming; Zhang, Chao

    2018-02-01

    The accuracy of the widely-used two-dimensional hydrodynamic numerical model depends on the quality of the river terrain model, particularly in the main channel. However, in most cases, the bathymetry of the river channel is difficult or expensive to obtain in the field, and there is a lack of available data to describe the geometry of the river channel. We introduce a method, originating from grid generation with elliptic equations, to generate streamlines of the river channel. The streamline coordinates are solved numerically from the Laplace equations: streamlines in the physical domain are first computed in a computational domain and then transformed back to the physical domain. The interpolated streamlines are integrated with the surrounding topography to reconstruct the entire river terrain model. The approach was applied to a meandering reach of the Qinhe River, a tributary in the middle reaches of the Yellow River, China. Cross-sectional validation and the two-dimensional shallow-water equations are used to test the performance of the generated river terrain. The results show that the approach can reconstruct the river terrain using data from measured cross-sections. Furthermore, the created river terrain maintains a geometrical shape consistent with the measurements while generating a smooth main channel. Finally, several limitations and opportunities for future research are discussed.
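    Filling the interior of a computational domain by solving the Laplace equations amounts to a standard boundary-value relaxation. The following is a minimal Jacobi-iteration sketch with toy boundary elevations (invented values, not the Qinhe River data):

```python
import numpy as np

# Computational domain: the boundary holds known elevations (cross-sections
# and bank profiles); the interior is filled by solving Laplace's equation
ny, nx = 20, 30
z = np.zeros((ny, nx))
z[0, :] = 10.0                          # upstream cross-section
z[-1, :] = 4.0                          # downstream cross-section
z[:, 0] = np.linspace(10.0, 4.0, ny)    # left bank profile
z[:, -1] = np.linspace(10.0, 4.0, ny)   # right bank profile

# Jacobi relaxation: replace each interior node by the average of its
# four neighbours until the field is (numerically) harmonic
for _ in range(2000):
    interior = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1]
                       + z[1:-1, :-2] + z[1:-1, 2:])
    z[1:-1, 1:-1] = interior
```

    The harmonic solution varies smoothly between the boundary values (here it converges to a linear ramp), which is exactly the smooth-channel behaviour the method exploits.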

  17. Spacecraft Orbit Anomaly Representation Using Thrust-Fourier-Coefficients with Orbit Determination Toolbox

    NASA Astrophysics Data System (ADS)

    Ko, H.; Scheeres, D.

    2014-09-01

    Representing spacecraft orbit anomalies between two separate states is a challenging but important problem in achieving space situational awareness for an active spacecraft. Incorporation of such a capability could play an essential role in analyzing satellite behaviors as well as in trajectory estimation of space objects. A general way to deal with the anomaly problem is to add an estimated perturbing acceleration, such as dynamic model compensation (DMC), into an orbit determination process based on pre- and post-anomaly tracking data. It is a time-consuming numerical process to find valid coefficients to compensate for the unknown dynamics of the anomaly. Even if the orbit determination filter with DMC can crudely estimate an unknown acceleration, this approach does not consider any fundamental element of the unknown dynamics for a given anomaly. In this paper, a new way of representing a spacecraft anomaly using an interpolation technique with Thrust-Fourier-Coefficients (TFCs) is introduced, and several anomaly cases are studied using this interpolation method. It provides a very efficient way of reconstructing the fundamental elements of the dynamics for a given spacecraft anomaly. Any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver using an interpolation technique with the TFCs. Given unconnected orbit states between two epochs due to a spacecraft anomaly, it is possible to obtain a unique control law using the TFCs that is able to generate the desired secular behavior for the given orbital changes. This interpolation technique can capture the fundamental elements of combined unmodeled anomaly events. The interpolated orbit trajectory, using the TFCs compensating for a given anomaly, can be used to improve the quality of orbit fits through the anomaly period and therefore help to obtain a good orbit determination solution after the anomaly.
    The Orbit Determination Toolbox (ODTBX) is modified to incorporate this technique in order to verify the performance of the interpolation approach. Spacecraft anomaly cases are based on single or multiple low- or high-thrust maneuvers, and the unknown thrust accelerations are recovered and compared with the true thrust acceleration. The advantage of this approach is the ease of appending the TFCs and their dynamics to the pre-built ODTBX, which enables us to blend post-anomaly tracking data to improve the performance of the interpolation representation in the absence of detailed information about a maneuver. It allows us to improve space situational awareness in the areas of uncertainty propagation, anomaly characterization and track correlation.

  18. Kelvin-Mach Wake in a Two-Dimensional Fermi Sea

    NASA Astrophysics Data System (ADS)

    Kolomeisky, Eugene B.; Straley, Joseph P.

    2018-06-01

    The dispersion law for plasma oscillations in a two-dimensional electron gas in the hydrodynamic approximation interpolates between Ω ∝ √q and Ω ∝ q dependences as the wave vector q increases. As a result, downstream of a charged impurity in the presence of a uniform supersonic electric current flow, a wake pattern of induced charge density and potential is formed whose geometry is controlled by the Mach number M. For 1 < M < √2 the pattern resembles the classic Kelvin ship wake, while for M > √2 a Mach cone additionally appears. These wakes also trail an external charge, traveling supersonically, a fixed distance away from the electron gas.

  19. Confinement effect on the dynamics of non-equilibrium concentration fluctuations far from the onset of convection.

    PubMed

    Giraudet, Cédric; Bataller, Henri; Sun, Yifei; Donev, Aleksandar; Ortiz de Zárate, José M; Croccolo, Fabrizio

    2016-12-01

    In a recent letter (C. Giraudet et al., EPL 111, 60013 (2015)) we reported preliminary data showing evidence of a slowing-down of non-equilibrium concentration fluctuations in thermodiffusion experiments on a binary mixture of miscible fluids. The slowing-down was attributed to the effect of confinement. This tentative explanation is here corroborated experimentally by new measurements and substantiated theoretically by studying the relevant fluctuating hydrodynamics equations analytically and numerically. In the new experiments presented here, the magnitude of the temperature gradient is changed, confirming that the system is controlled solely by the solutal Rayleigh number, and that the slowing-down is dominated by a combined effect of the driving force of buoyancy, the dissipating force of diffusion and the confinement provided by the vertical extension of the sample cell. Moreover, a compact phenomenological interpolating formula is proposed for easy analysis of experimental results.

  20. Parametrizing the Reionization History with the Redshift Midpoint, Duration, and Asymmetry

    NASA Astrophysics Data System (ADS)

    Trac, Hy

    2018-05-01

    A new parametrization of the reionization history is presented to facilitate robust comparisons between different observations and with theory. The evolution of the ionization fraction with redshift can be effectively captured by specifying the midpoint, duration, and asymmetry parameters. Lagrange interpolating functions are then used to construct analytical curves that exactly fit corresponding ionization points. The shape parametrizations are excellent matches to theoretical results from radiation-hydrodynamic simulations. The comparative differences for reionization observables are: ionization fraction |Δx_i| ≲ 0.03, 21 cm brightness temperature |ΔT_b| ≲ 0.7 mK, Thomson optical depth |Δτ| ≲ 0.001, and patchy kinetic Sunyaev-Zel'dovich angular power |ΔD_ℓ| ≲ 0.1 μK². This accurate and flexible approach will allow parameter-space studies and self-consistent constraints on the reionization history from 21 cm, cosmic microwave background (CMB), and high-redshift galaxies and quasars.
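    The Lagrange interpolating construction, which passes the unique polynomial exactly through the chosen ionization points, can be sketched generically; the node values below are hypothetical placeholders, not fitted reionization parameters:

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the unique polynomial through (x_nodes, y_nodes) at x."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    total = 0.0
    for j, yj in enumerate(y_nodes):
        # Lagrange basis l_j(x): equals 1 at x_nodes[j], 0 at the other nodes
        others = np.delete(x_nodes, j)
        total += yj * np.prod((x - others) / (x_nodes[j] - others))
    return total

# Toy ionization anchor points: redshifts where x_i = 0.75, 0.50, 0.25
z_nodes = [6.5, 7.5, 9.0]          # hypothetical values for illustration
xi_nodes = [0.75, 0.50, 0.25]      # ionization fraction falls with redshift
x_at_8 = lagrange_eval(z_nodes, xi_nodes, 8.0)
```

    By construction the curve reproduces every anchor point exactly, which is the "exactly fit corresponding ionization points" property the abstract describes.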

  1. Collective modes of an imbalanced unitary Fermi gas

    NASA Astrophysics Data System (ADS)

    Hofmann, Johannes; Chevy, Frédéric; Goulko, Olga; Lobo, Carlos

    2018-03-01

    We study theoretically the collective mode spectrum of a strongly imbalanced two-component unitary Fermi gas in a cigar-shaped trap, where the minority species forms a gas of polarons. We describe the collective breathing mode of the gas in terms of the Fermi-liquid kinetic equation, taking collisions into account using the method of moments. Our results for the frequency and damping of the longitudinal in-phase breathing mode are in good quantitative agreement with an experiment by Nascimbène et al. [Phys. Rev. Lett. 103, 170402 (2009)] and interpolate between a hydrodynamic and a collisionless regime as the polarization is increased. A separate out-of-phase breathing mode, which for a collisionless gas is sensitive to the effective mass of the polaron, is however strongly damped at finite temperature, whereas the experiment observes a well-defined oscillation.

  2. Electrochemical Quartz Crystal Microbalance with Dissipation Real-Time Hydrodynamic Spectroscopy of Porous Solids in Contact with Liquids.

    PubMed

    Sigalov, Sergey; Shpigel, Netanel; Levi, Mikhael D; Feldberg, Moshe; Daikhin, Leonid; Aurbach, Doron

    2016-10-18

    Using multiharmonic electrochemical quartz crystal microbalance with dissipation (EQCM-D) monitoring, a new method of characterization of porous solids in contact with liquids has been developed. The dynamic gravimetric information on the growing, dissolving, or stationary stored solid deposits is supplemented by their precise in-operando porous structure characterization on a mesoscopic scale. We present a very powerful method of quartz-crystal admittance modeling of hydrodynamic solid-liquid interactions in order to extract the porous structure parameters of solids during their formation in real time, using different deposition modes. The unique hydrodynamic spectroscopic characterization of electrolytic and rf-sputtered solid Cu coatings that we use for our "proof of concept" provides a new strategy for probing various electrochemically active thin and thick solid deposits, thereby offering inexpensive, noninvasive, and highly efficient quantitative control over their properties. A broad spectrum of applications of our method is proposed, from various metal electroplating and finishing technologies to deeper insight into dynamic build-up and subsequent development of solid-electrolyte interfaces in the operation of Li-battery electrodes, as well as monitoring hydrodynamic consequences of metal corrosion, and growth of biomass coatings (biofouling) on different solid surfaces in seawater.

  3. The Hydrodynamics and Odorant Transport Phenomena of Olfaction in the Hammerhead Shark

    NASA Astrophysics Data System (ADS)

    Rygg, Alex; Craven, Brent

    2013-11-01

    The hammerhead shark possesses a unique head morphology that is thought to facilitate enhanced olfactory performance. The olfactory organs, located at the distal ends of the cephalofoil, contain numerous lamellae that increase the surface area for olfaction. Functionally, for the shark to detect chemical stimuli, water-borne odors must reach the olfactory sensory epithelium that lines these lamellae. Thus, odorant transport from the aquatic environment to the sensory epithelium is the first critical step in olfaction. Here we investigate the hydrodynamics and odorant transport phenomena of olfaction in the hammerhead shark based on an anatomically-accurate reconstruction of the head and olfactory chamber from high-resolution micro-CT and MRI scans of a cadaver specimen. Computational fluid dynamics (CFD) simulations of water flow in the reconstructed model reveal the external and internal hydrodynamics of olfaction during swimming. Odorant transport in the olfactory organ is investigated using a multi-scale approach, whereby molecular dynamics (MD) simulations are used to calculate odorant partition coefficients that are subsequently utilized in macro-scale CFD simulations of odorant deposition. The hydrodynamic and odorant transport results are used to elucidate several important features of olfactory function in the hammerhead shark.

  4. Realization of hydrodynamic experiments on quasi-2D liquid crystal films in microgravity

    NASA Astrophysics Data System (ADS)

    Clark, Noel A.; Eremin, Alexey; Glaser, Matthew A.; Hall, Nancy; Harth, Kirsten; Klopp, Christoph; Maclennan, Joseph E.; Park, Cheol S.; Stannarius, Ralf; Tin, Padetha; Thurmes, William N.; Trittel, Torsten

    2017-08-01

    Freely suspended films of smectic liquid crystals are unique examples of quasi two-dimensional fluids. Mechanically stable and with quantized thickness of the order of only a few molecular layers, smectic films are ideal systems for studying fundamental fluid physics, such as collective molecular ordering, defect and fluctuation phenomena, hydrodynamics, and nonequilibrium behavior in two dimensions (2D), including serving as models of complex biological membranes. Smectic films can be drawn across openings in planar supports resulting in thin, meniscus-bounded membranes, and can also be prepared as bubbles, either supported on an inflation tube or floating freely. The quantized layering renders smectic films uniquely useful in 2D fluid physics. The OASIS team has pursued a variety of ground-based and microgravity applications of thin liquid crystal films to fluid structure and hydrodynamic problems in 2D and quasi-2D systems. Parabolic flights and sounding rocket experiments were carried out in order to explore the shape evolution of free floating smectic bubbles, and to probe Marangoni effects in flat films. The dynamics of emulsions of smectic islands (thicker regions on thin background films) and of microdroplet inclusions in spherical films, as well as thermocapillary effects, were studied over extended periods within the OASIS (Observation and Analysis of Smectic Islands in Space) project on the International Space Station. We summarize the technical details of the OASIS hardware and give preliminary examples of key observations.

  5. Great hammerhead sharks swim on their side to reduce transport costs

    PubMed Central

    Payne, Nicholas L.; Iosilevskii, Gil; Barnett, Adam; Fischer, Chris; Graham, Rachel T.; Gleiss, Adrian C.; Watanabe, Yuuki Y.

    2016-01-01

    Animals exhibit various physiological and behavioural strategies for minimizing travel costs. Fins of aquatic animals play key roles in efficient travel and, for sharks, the functions of dorsal and pectoral fins are considered well divided: the former assists propulsion and generates lateral hydrodynamic forces during turns and the latter generates vertical forces that offset sharks' negative buoyancy. Here we show that great hammerhead sharks drastically reconfigure the function of these structures, using an exaggerated dorsal fin to generate lift by swimming rolled on their side. Tagged wild sharks spend up to 90% of time swimming at roll angles between 50° and 75°, and hydrodynamic modelling shows that doing so reduces drag—and in turn, the cost of transport—by around 10% compared with traditional upright swimming. Employment of such a strongly selected feature for such a unique purpose raises interesting questions about evolutionary pathways to hydrodynamic adaptations, and our perception of form and function. PMID:27457414

  6. Optical chromatographic sample separation of hydrodynamically focused mixtures

    PubMed Central

    Terray, A.; Hebert, C. G.; Hart, S. J.

    2014-01-01

    Optical chromatography relies on the balance between the opposing optical and fluid drag forces acting on a particle. A typical configuration involves a loosely focused laser directly counter to the flow of particle-laden fluid passing through a microfluidic device. This equilibrium depends on the intrinsic properties of the particle, including size, shape, and refractive index. As such, uniquely fine separations are possible using this technique. Here, we demonstrate how matching the diameter of a microfluidic flow channel to that of the focusing laser in concert with a unique microfluidic platform can be used as a method to fractionate closely related particles in a mixed sample. This microfluidic network allows for a monodisperse sample of both polystyrene and poly(methyl methacrylate) spheres to be injected, hydrodynamically focused, and completely separated. To test the limit of separation, a mixed polystyrene sample containing two particles varying in diameter by less than 0.5 μm was run in the system. The analysis of the resulting separation sets the framework for continued work to perform ultra-fine separations. PMID:25553179
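    The opposing-force balance described above can be sketched numerically. Everything below is illustrative, not from the study: the dimensionless Q factor, beam power, flow speed, and particle size are assumed values, and the drag is taken as simple Stokes drag.

```python
import math

def stokes_drag(radius_m, velocity_m_s, viscosity_pa_s=1.0e-3):
    """Stokes drag on a sphere in creeping flow: F = 6*pi*mu*a*v."""
    return 6.0 * math.pi * viscosity_pa_s * radius_m * velocity_m_s

def optical_force(power_w, q_factor, n_medium=1.33, c=2.998e8):
    """Radiation-pressure force F = Q*n*P/c; Q lumps together the
    particle-specific part (size, shape, refractive index, beam
    geometry), which is why the equilibrium separates particles."""
    return q_factor * n_medium * power_w / c

# A particle is retained where the two forces balance; particles with
# different Q settle at different axial positions along the beam.
f_drag = stokes_drag(radius_m=2.5e-6, velocity_m_s=100e-6)  # 100 um/s flow
f_opt = optical_force(power_w=0.5, q_factor=0.1)
```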

  7. Surface electric fields for North America during historical geomagnetic storms

    USGS Publications Warehouse

    Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.

    2013-01-01

    To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
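    The step of combining interpolated magnetic field data with a surface impedance can be sketched with the standard plane-wave method, in which E(ω) = Z(ω)·B(ω)/μ₀ in the frequency domain. The uniform half-space impedance below is a stand-in assumption; the study uses published impedances for 25 physiographic regions.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def e_field_from_b(b_y, dt, impedance_fn):
    """Plane-wave geoelectric field estimate: transform the magnetic
    time series, multiply by the surface impedance, transform back."""
    n = len(b_y)
    freqs = np.fft.rfftfreq(n, d=dt)        # Hz
    B = np.fft.rfft(b_y)
    Z = impedance_fn(freqs)                 # surface impedance, ohm
    return np.fft.irfft(Z * B / MU0, n=n)

def halfspace_z(freqs, rho=1000.0):
    """Impedance of a uniform half-space of resistivity rho (ohm-m):
    Z = sqrt(i*omega*mu0*rho). Illustrative only."""
    w = 2.0 * np.pi * freqs
    return np.sqrt(1j * w * MU0 * rho)

dt = 60.0                                   # 1-min cadence, as in the study
t = np.arange(0, 3600, dt)
b = 500e-9 * np.sin(2 * np.pi * t / 600)    # 500 nT swing, 10-min period
e = e_field_from_b(b, dt, halfspace_z)      # V/m
```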

  8. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. This extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. The scheme is also general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
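    The difference between interpolating and approximating thin-plate splines can be sketched with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel with a scalar smoothing term. This covers only the isotropic-error case; the anisotropic weighting described in the paper requires the authors' own functional. The landmark data below are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(20, 2))                  # landmark positions
disp = np.sin(2 * np.pi * pts[:, 0]) + 0.1 * rng.normal(size=20)  # noisy shifts

# smoothing=0 reproduces the landmarks exactly (interpolation);
# smoothing>0 relaxes that constraint to absorb localization error.
exact = RBFInterpolator(pts, disp, kernel='thin_plate_spline', smoothing=0.0)
approx = RBFInterpolator(pts, disp, kernel='thin_plate_spline', smoothing=1.0)

r_exact = np.abs(exact(pts) - disp).max()    # residual at the landmarks
r_approx = np.abs(approx(pts) - disp).max()
```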

  9. Revisiting the use of the immersed-boundary lattice-Boltzmann method for simulations of suspended particles

    NASA Astrophysics Data System (ADS)

    Mountrakis, L.; Lorenz, E.; Hoekstra, A. G.

    2017-07-01

    The immersed-boundary lattice-Boltzmann method (IB-LBM) is increasingly being used in simulations of dense suspensions. These systems are computationally very expensive and can strongly benefit from lower resolutions that still maintain the desired accuracy for the quantities of interest. IB-LBM has a number of free parameters that have to be defined, often without exact knowledge of the tradeoffs, since their behavior at low resolutions is not well understood. Such parameters are the lattice constant Δx, the number of vertices Nv, the interpolation kernel ϕ, and the LBM relaxation time τ. We investigate the effect of these IB-LBM parameters on a number of straightforward but challenging benchmarks. The systems considered are (a) the flow of a single sphere in shear flow, (b) the collision of two spheres in shear flow, and (c) the lubrication interaction of two spheres. All benchmarks are performed in three dimensions. The first two systems are used for determining two effective radii: the hydrodynamic radius rhyd and the particle interaction radius rinter. The last system is used to establish the numerical robustness of the lubrication forces, used to probe the hydrodynamic interactions in the limit of small gaps. Our results show that lower spatial resolutions result in larger hydrodynamic and interaction radii, while surface densities should be chosen above two vertices per LU² to prevent fluid penetration in underresolved meshes. Underresolved meshes also failed to produce the migration of particles toward the center of the domain due to lift forces in Couette flow, most noticeable for IBM-kernel ϕ2. Kernel ϕ4, despite being more robust with respect to mesh resolution, produces a notable membrane thickness, leading to the breakdown of the lubrication forces at larger gaps, and its use in dense suspensions, where the mean particle distances are small, can result in undesired behavior. rhyd is measured to be different from rinter, suggesting that there is no consistent measure to recalibrate the radius of the suspended particle.
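    The kernels ϕ2 and ϕ4 in IB-LBM are commonly taken to be the 2-point hat function and Peskin's 4-point discrete delta function; a sketch under that assumption. A defining property of both is that the weights a marker spreads to its neighbouring lattice nodes sum to one.

```python
import numpy as np

def phi2(r):
    """2-point IBM kernel (linear hat), support |r| <= 1."""
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def phi4(r):
    """Peskin's 4-point IBM kernel, support |r| <= 2."""
    r = np.abs(r)
    # np.maximum clamps the sqrt arguments so the unused branch of
    # np.where does not produce NaNs.
    inner = (3 - 2*r + np.sqrt(np.maximum(1 + 4*r - 4*r**2, 0.0))) / 8.0
    outer = (5 - 2*r - np.sqrt(np.maximum(-7 + 12*r - 4*r**2, 0.0))) / 8.0
    return np.where(r < 1.0, inner, np.where(r < 2.0, outer, 0.0))

# Partition of unity: weights spread from a marker at fractional
# position x = 0.3 to its four neighbouring nodes sum to 1.
x = 0.3
w4 = [phi4(x - j) for j in (-1, 0, 1, 2)]
```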

  10. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 2: Homogeneous medium at Mach 3 with gamma = 5/3

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian; Arnett, David

    1994-01-01

    We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing spheres of varying sizes (from 10 down to 0.01 accretion radii) move at Mach 3 relative to a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3). To accommodate the long-range gravitational forces, the extent of the computational volume is 32(exp 3) accretion radii. We examine the influence of numerical procedure on physical behavior. The hydrodynamics is modeled by the 'piecewise parabolic method.' No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (5-10) grids around the sphere, each finer grid being a factor of 2 smaller in zone dimension than the next coarser grid. The largest dynamic range (ratio of size of the largest grid to size of the finest zone) is 16,384. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid, while at the same time evolving the gas on the coarser grids. Initially (at time t = 0-10), a shock front is set up, a Mach cone develops, and the accretion column is observable. Eventually the flow becomes unstable, destroying axisymmetry. This happens approximately when the mass accretion rate reaches the values (+/- 10%) predicted by the Bondi-Hoyle accretion formula (factor of 2 included). However, our three-dimensional models do not show the highly dynamic flip-flop flow so prominent in two-dimensional calculations performed by other authors. The flow, and thus the accretion rate of all quantities, shows quasi-periodic (P approximately equals 5) cycles between quiescent and active states. The interpolation formula proposed in an accompanying paper is found to follow the collected numerical data to within approximately 30%. The specific angular momentum accreted is of the same order of magnitude as the values previously found for two-dimensional flows.
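    For reference, the classical Bondi-Hoyle accretion rate against which the measured rates are compared can be sketched as below. This is one common normalization of the interpolation between the Bondi and Hoyle-Lyttleton limits; factors of order 2 vary across the literature, as the abstract's "factor of 2 included" hints. The input values are purely illustrative.

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def mdot_bondi_hoyle(rho, mass, v, cs):
    """Classical Bondi-Hoyle-Lyttleton rate:
        Mdot = 4*pi*rho*(G*M)^2 / (cs^2 + v^2)^(3/2)
    interpolating between the Bondi (v << cs) and
    Hoyle-Lyttleton (v >> cs) limits."""
    return 4.0 * math.pi * rho * (G * mass)**2 / (cs**2 + v**2)**1.5

# Mach 3 flow past a solar-mass accretor in a cold medium (illustrative)
rho = 1e-17            # kg/m^3
cs = 1.0e4             # m/s, so v = 3*cs gives Mach 3
mdot = mdot_bondi_hoyle(rho, 1.989e30, 3 * cs, cs)   # kg/s
```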

  11. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km away for SPE. We especially show effects of the local topography, velocity structure, and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).

  12. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.

  13. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.

  14. Hydrodynamic Interactions in Active and Passive Matter

    NASA Astrophysics Data System (ADS)

    Krafnick, Ryan C.

    Active matter is present at all biological length scales, from molecular apparatuses interior to cells, to swimming microscopic organisms, to birds, fish, and people. Its properties are varied and its applications diverse, but our understanding of the fundamental driving forces of systems with these constituents remains incomplete. This thesis examines active matter suspensions, exploring the role of hydrodynamic interactions on the unique and emergent properties therein. Both qualitative and quantitative impacts are considered, and care is taken in determining the physical origin of the results in question. It is found that fluid dynamical interactions are fundamentally, qualitatively important, and many of the properties of a system can be explained with an effective energy density defined via the fluid fields arising from the embedded self-propelling entities themselves.

  15. Design and fabrication of uniquely shaped thiol-ene microfibers using a two-stage hydrodynamic focusing design.

    PubMed

    Boyd, Darryl A; Shields, Adam R; Howell, Peter B; Ligler, Frances S

    2013-08-07

    Microfluidic systems have advantages that are just starting to be realized for materials fabrication. In addition to the more common use for fabrication of particles, hydrodynamic focusing has been used to fabricate continuous polymer fibers. We have previously described such a microfluidics system which has the ability to generate fibers with controlled cross-sectional shapes locked in place by in situ photopolymerization. The previous fiber fabrication studies produced relatively simple round or ribbon shapes, demonstrated the use of a variety of polymers, and described the interaction between sheath-core flow-rate ratios used to control the fiber diameter and the impact on possible shapes. These papers documented the fact that no matter what the intended shape, higher flow-rate ratios produced rounder fibers, even in the absence of interfacial tension between the core and sheath fluids. This work describes how to fabricate the next generation of fibers predesigned to have a much more complex geometry, as exemplified by the "double anchor" shape. Critical to production of the pre-specified fibers with complex features was independent control over both the shape and the size of the fabricated microfibers using a two-stage hydrodynamic focusing system. Design and optimization of the channels was performed using finite element simulations and confocal imaging to characterize each of the two stages theoretically and experimentally. The resulting device design was then used to generate thiol-ene fibers with a unique double anchor shape. Finally, proof-of-principle functional experiments demonstrated the ability of the fibers to transport fluids and to interlock laterally.

  16. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  17. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than that value to white. A second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
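    The final reconstruction step, interpolating a decimated image back up with a bilinear kernel, can be sketched as follows. This is a generic bilinear upsampler, not the patented implementation: each output pixel is a distance-weighted average of its four surrounding input pixels.

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Reconstruct a decimated greyscale image with a bilinear kernel."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)   # output sample coordinates
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]  # fractional offsets
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
            + bl * fy * (1 - fx) + br * fy * fx)

small = np.array([[0.0, 1.0], [2.0, 3.0]])
big = bilinear_upsample(small, 2)   # 4x4 reconstruction
```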

  18. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than that value to white. A second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  19. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolation schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolation schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
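    The interpolator-order effect can be illustrated with a synthetic registration round trip using scipy.ndimage. This does not reproduce the study's pipeline or morphometric indices; the volume, rotation angle, and margins are arbitrary, and the round trip simply isolates the error introduced by the interpolator itself.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
# Smooth synthetic greyscale volume standing in for a micro-CT scan
vol = ndimage.gaussian_filter(rng.random((40, 40, 40)), sigma=2)

errors = {}
for order, name in [(0, 'nearest'), (1, 'tri-linear'), (3, 'B-spline')]:
    # Rotate and rotate back: the only error source is interpolation
    moved = ndimage.rotate(vol, 7.0, axes=(0, 1), reshape=False, order=order)
    back = ndimage.rotate(moved, -7.0, axes=(0, 1), reshape=False, order=order)
    core = (slice(8, -8),) * 3          # ignore boundary fill effects
    errors[name] = np.abs(back[core] - vol[core]).mean()
```

On a smooth field the B-spline round trip typically has the smallest error, mirroring the study's finding that BSP interpolation was least damaging.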

  20. Flow sensing by pinniped whiskers

    PubMed Central

    Miersch, L.; Hanke, W.; Wieskotten, S.; Hanke, F. D.; Oeffner, J.; Leder, A.; Brede, M.; Witte, M.; Dehnhardt, G.

    2011-01-01

    Beside their haptic function, vibrissae of harbour seals (Phocidae) and California sea lions (Otariidae) both represent highly sensitive hydrodynamic receptor systems, although their vibrissal hair shafts differ considerably in structure. To quantify the sensory performance of both hair types, isolated single whiskers were used to measure vortex shedding frequencies produced in the wake of a cylinder immersed in a rotational flow tank. These measurements revealed that both whisker types were able to detect the vortex shedding frequency but differed considerably with respect to the signal-to-noise ratio (SNR). While the signal detected by sea lion whiskers was substantially corrupted by noise, harbour seal whiskers showed a higher SNR with largely reduced noise. However, further analysis revealed that in sea lion whiskers, each noise signal contained a dominant frequency suggested to function as a characteristic carrier signal. While in harbour seal whiskers the unique surface structure explains its high sensitivity, this more or less steady fundamental frequency might represent the mechanism underlying hydrodynamic reception in the fast swimming sea lion by being modulated in response to hydrodynamic stimuli impinging on the hair. PMID:21969689
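    The vortex shedding frequency that the whiskers detect follows the standard Strouhal relation for a bluff cylinder; a minimal sketch with illustrative values (the study's cylinder sizes and flow speeds are not given here).

```python
def shedding_frequency(flow_speed, cylinder_diameter, strouhal=0.2):
    """Vortex shedding frequency behind a cylinder, f = St * U / d.
    St ~= 0.2 holds over a wide Reynolds-number range."""
    return strouhal * flow_speed / cylinder_diameter

# Illustrative: 0.5 m/s flow past a 2 cm cylinder
f = shedding_frequency(0.5, 0.02)   # -> 5.0 Hz
```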

  1. Event simulation based on three-fluid hydrodynamics for collisions at energies available at the Dubna Nuclotron-based Ion Collider Facility and at the Facility for Antiproton and Ion Research in Darmstadt

    NASA Astrophysics Data System (ADS)

    Batyuk, P.; Blaschke, D.; Bleicher, M.; Ivanov, Yu. B.; Karpenko, Iu.; Merts, S.; Nahrgang, M.; Petersen, H.; Rogachevsky, O.

    2016-10-01

    We present an event generator based on the three-fluid hydrodynamics approach for the early stage of the collision, followed by a particlization at the hydrodynamic decoupling surface to join to a microscopic transport model, ultrarelativistic quantum molecular dynamics, to account for hadronic final-state interactions. We present first results for nuclear collisions of the Facility for Antiproton and Ion Research-Nuclotron-based Ion Collider Facility energy scan program (Au+Au collisions, √s_NN = 4-11 GeV). We address the directed flow of protons and pions as well as the proton rapidity distribution for two model equations of state, one with a first-order phase transition and the other with a crossover-type softening at high densities. The new simulation program has the unique feature that it can describe a hadron-to-quark matter transition which proceeds in the baryon stopping regime that is not accessible to previous simulation programs designed for higher energies.

  2. Selective evaporation of focusing fluid in two-fluid hydrodynamic print head.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keicher, David M.; Cook, Adam W.

    The work performed in this project has demonstrated the feasibility of using hydrodynamic focusing of two fluid streams to create a novel micro printing technology for electronics and other high-performance applications. Initial efforts, focused solely on selective evaporation of the sheath fluid from the print stream, provided insight for developing a unique print head geometry that allows excess sheath fluid to be separated from the print flow stream for recycling/reuse. Fluid flow models suggest that more than 81 percent of the sheath fluid can be removed without affecting the print stream. Further development and optimization are required to demonstrate this capability in operation. Print results using two-fluid hydrodynamic focusing yielded a 30 micrometer wide by 0.5 micrometer tall line, suggesting that the cross-section of the printed feature leaving the print head was approximately 2 micrometers in diameter. Printing results also demonstrated that complete removal of the sheath fluid is not necessary for all material systems. The two-fluid printing technology could enable printing of insulated conductors and clad optical interconnects. Further development of this concept should be pursued.

  3. NIF laboratory astrophysics simulations investigating the effects of a radiative shock on hydrodynamic instabilities

    NASA Astrophysics Data System (ADS)

    Angulo, A. A.; Kuranz, C. C.; Drake, R. P.; Huntington, C. M.; Park, H.-S.; Remington, B. A.; Kalantar, D.; MacLaren, S.; Raman, K.; Miles, A.; Trantham, Matthew; Kline, J. L.; Flippo, K.; Doss, F. W.; Shvarts, D.

    2016-10-01

    This poster will describe simulations based on results from ongoing laboratory astrophysics experiments at the National Ignition Facility (NIF) relevant to the effects of radiative shock on hydrodynamically unstable surfaces. The experiments performed on NIF uniquely provide the necessary conditions required to emulate radiative shock that occurs in astrophysical systems. The core-collapse explosions of red supergiant stars is such an example wherein the interaction between the supernova ejecta and the circumstellar medium creates a region susceptible to Rayleigh-Taylor (R-T) instabilities. Radiative and nonradiative experiments were performed to show that R-T growth should be reduced by the effects of the radiative shocks that occur during this core-collapse. Simulations were performed using the radiation hydrodynamics code Hyades using the experimental conditions to find the mean interface acceleration of the instability and then further analyzed in the buoyancy drag model to observe how the material expansion contributes to the mix-layer growth. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas under Grant Number DE-FG52-09NA29548.

  4. Lifelong Learning: The Value of an Industrial Internship for a Graduate Student Education

    ERIC Educational Resources Information Center

    Honda, Gregory S.; Pazmino, Jorge H.; Hickman, Daniel A.; Varma, Arvind

    2015-01-01

    A chemical engineering PhD student from Purdue University completed an internship at The Dow Chemical Company, evaluating the effect of scale on the hydrodynamics of a trickle bed reactor. A unique aspect of this work was that it arose from an ongoing collaboration, so that the project was within the scope of the graduate student's thesis. This…

  5. High Performance Biocomputation

    DTIC Science & Technology

    2005-03-01

    in some other fields (e.g. computational hydrodynamics, lattice quantum chromodynamics, etc.) but appears wholly inappropriate here as pointed out...restrict the overall conformational space by putting the system on a lattice. These have been used to great effect to study folding kinetics. These...many important problems to be worked on, not a single unique challenge (contrast this to QCD, for example). "almost all problems require significant

  6. Sustained propagation and control of topological excitations in polariton superfluid

    NASA Astrophysics Data System (ADS)

    Pigeon, Simon; Bramati, Alberto

    2017-09-01

    We present a simple method to compensate for losses in a polariton superfluid. Based on a weak support field, it allows for the extended propagation of a resonantly driven polariton superfluid with minimal energetic cost. Moreover, this setup is based on optical bistability and leads to the significant release of the phase constraint imposed by resonant driving. This release, together with macroscopic polariton propagation, offers a unique opportunity to study the hydrodynamics of the topological excitations of polariton superfluids such as quantized vortices and dark solitons. We numerically study how the coherent field supporting the superfluid flow interacts with the vortices and how it can be used to control them. Interestingly, we show that standard hydrodynamics does not apply for this driven-dissipative fluid and new types of behaviour are identified.

  7. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
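The trellis search at the core of this record can be sketched generically. The cost functions below are hypothetical stand-ins for the paper's parameter-free probabilistic model; only the Viterbi recursion over candidate interpolation functions (the "states") is standard.

```python
# Illustrative sketch only: a minimum-cost Viterbi pass over a sequence of
# missing-pixel positions, where each state is a candidate interpolation
# function. The emission/transition costs are invented for this example.

def viterbi(n_steps, states, emission_cost, transition_cost):
    """Return the minimum-cost sequence of states (interpolators)."""
    # best[s] = (cost, path) of the cheapest path ending in state s
    best = {s: (emission_cost(0, s), [s]) for s in states}
    for t in range(1, n_steps):
        new_best = {}
        for s in states:
            cost, path = min(
                (best[p][0] + transition_cost(p, s) + emission_cost(t, s),
                 best[p][1])
                for p in states
            )
            new_best[s] = (cost, path + [s])
        best = new_best
    return min(best.values())[1]

# Three toy directional interpolators as states.
states = ["horizontal", "vertical", "diagonal"]

# Hypothetical local evidence: pretend each pixel position prefers one direction.
preferred = ["horizontal", "horizontal", "vertical", "vertical"]
def emission_cost(t, s):
    return 0.0 if s == preferred[t] else 1.0

# Mild penalty for switching interpolators between neighbouring pixels.
def transition_cost(p, s):
    return 0.0 if p == s else 0.4

print(viterbi(4, states, emission_cost, transition_cost))
# → ['horizontal', 'horizontal', 'vertical', 'vertical']
```

The transition penalty is what turns per-pixel hard decisions into a smoothed sequence estimate, which is the paper's main departure from conventional adaptive interpolation.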

  8. Hydrodynamic profile of young swimmers: changes over a competitive season.

    PubMed

    Barbosa, T M; Morais, J E; Marques, M C; Silva, A J; Marinho, D A; Kee, Y H

    2015-04-01

The aim of this study was to analyze the changes in the hydrodynamic profile of young swimmers over a competitive season and to compare the variations according to a well-designed training periodization. Twenty-five swimmers (13 boys and 12 girls) were evaluated in (a) October (M1); (b) March (M2); and (c) June (M3). Inertial and anthropometrical measures included body mass, swimmer's added water mass, height, and trunk transverse surface area. Swimming efficiency was estimated by the speed fluctuation, stroke index, and approximate entropy. Active drag was estimated with the velocity perturbation method and the passive drag with the gliding decay method. Hydrodynamic dimensionless numbers (Froude and Reynolds numbers) and hull velocity (i.e., speed at Froude number = 0.42) were also calculated. No variable presented a significant gender effect. Anthropometrics and inertial parameters plus dimensionless numbers increased over time. Swimming efficiency improved between M1 and M3. Both passive and active drag tended to increase from M1 to M2, but were lower at M3 than at M1. Intra-individual changes between evaluation moments suggest high between- and within-subject variations. Therefore, hydrodynamic changes over a season occur in a non-linear fashion, in which the interplay between growth and training periodization explains the unique path followed by each young swimmer. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Solutions to inverse plume in a crosswind problem using a predictor-corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions with the corrections from the plume strength are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
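A speculative sketch of the predictor step only, under the simplifying assumption (not stated in the abstract) that a sensor reading varies linearly with plume source strength at a fixed location. Two forward simulations at known strengths then yield an inverse interpolation function for the unknown strength; all numbers below are invented.

```python
# Hypothetical inverse interpolation: two forward simulations at known
# source strengths give (strength, reading) pairs at one sampling point;
# an observed reading is then inverted to a predicted strength.

def predict_strength(reading, s1, r1, s2, r2):
    """Invert reading -> strength from two simulated (strength, reading) pairs."""
    return s1 + (reading - r1) * (s2 - s1) / (r2 - r1)

# Invented forward-simulation results at one downstream sampling point:
s1, r1 = 10.0, 0.32   # source strength, simulated concentration
s2, r2 = 20.0, 0.61
print(predict_strength(0.465, s1, r1, s2, r2))  # midpoint reading → 15.0
```

The corrector step of the actual method, which refines the plume location using the predicted strength, would require the full simulated fields and is not reproduced here.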

  10. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well suited to interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
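Kerley's interpolator itself is not reproduced in the abstract, but the general idea of rational function interpolation can be sketched: fit r(x) = (a + bx)/(1 + cx) through three samples by solving a small linear system. The functional form and data points below are illustrative assumptions, chosen so the samples come from 1/(1+x), which a rational form captures exactly where a quadratic polynomial would not.

```python
# Fit r(x) = (a + b*x) / (1 + c*x) through three points.
# Each point (x, y) gives the linear condition: a + b*x - c*x*y = y.

def solve3(A, b):
    """Gaussian elimination with partial pivoting on a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rational(pts):
    A = [[1.0, x, -x * y] for x, y in pts]
    rhs = [y for _, y in pts]
    a, b_, c = solve3(A, rhs)
    return lambda x: (a + b_ * x) / (1.0 + c * x)

pts = [(0.0, 1.0), (1.0, 0.5), (3.0, 0.25)]  # samples of 1/(1+x)
r = fit_rational(pts)
print(r(2.0))  # ≈ 0.3333, the exact value of 1/(1+2)
```

Rational forms like this behave well on sparse, logarithmically spaced EOS grids precisely because they can follow hyperbolic trends that low-order polynomials overshoot.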

  11. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effect of tandem filtering and interpolation, as well as the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the difference between interpolation factors of 2, 4, and 8 times is nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
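The paper's first recommendation, that filtering should precede interpolation, can be illustrated in one dimension. The signal, filter, and upsampler below are toy stand-ins (real pressure maps are 2-D arrays), but the ordering effect is already visible.

```python
# Toy 1-D illustration: smoothing then interpolating gives a different
# result than interpolating then smoothing.

def mean_filter(xs):
    """3-point moving average with edge replication (a simple low-pass)."""
    padded = [xs[0]] + list(xs) + [xs[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

def linear_upsample(xs):
    """Interpolate to 2x sampling density by inserting midpoints."""
    out = []
    for a, b in zip(xs, xs[1:]):
        out += [a, (a + b) / 2.0]
    out.append(xs[-1])
    return out

signal = [0.0, 0.0, 9.0, 0.0, 0.0]      # an isolated pressure spike
filter_first = linear_upsample(mean_filter(signal))
interp_first = mean_filter(linear_upsample(signal))
print(filter_first)  # [0.0, 1.5, 3.0, 3.0, 3.0, 3.0, 3.0, 1.5, 0.0]
print(interp_first)  # [0.0, 0.0, 1.5, 4.5, 6.0, 4.5, 1.5, 0.0, 0.0]
```

Because the two operations do not commute, features extracted from the resulting maps (peak pressure, contact area) differ depending on pipeline order, which is the sensitivity the study quantifies.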

  12. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
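A minimal sketch of the bilinear interpolation proposed for head values: given heads at the four surrounding cell centers of the large-scale model, estimate head at an arbitrary point on the embedded model's perimeter. The grid geometry and head values are hypothetical.

```python
# Bilinear interpolation inside the rectangle spanned by four cell centers.

def bilinear(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """h00 = head at (x0, y0), h10 at (x1, y0), h01 at (x0, y1), h11 at (x1, y1)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    bottom = h00 * (1 - tx) + h10 * tx   # interpolate along the row y = y0
    top = h01 * (1 - tx) + h11 * tx      # interpolate along the row y = y1
    return bottom * (1 - ty) + top * ty  # then interpolate between the rows

# At the midpoint of four cell centers the result is the corner average.
print(bilinear(50.0, 50.0, 0.0, 100.0, 0.0, 100.0, 10.0, 12.0, 14.0, 16.0))
# → 13.0
```

The flow components, by contrast, need only the simple 1-D linear form along a row or column, since they are resolved per face of the finite-difference cells.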

  13. SU-E-J-238: First-Order Approximation of Time-Resolved 4DMRI From Cine 2DMRI and Respiratory-Correlated 4DMRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Tyagi, N; Deasy, J

    2015-06-15

Purpose: Cine 2DMRI is useful in MR-guided radiotherapy but it lacks volumetric information. We explore the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI through simulation. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from a RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created containing 5-to-35 3DMRI images in total, using 0–6 equi-spaced tri-linear interpolations between adjacent and full-inhalation/full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations from the original RC-4DMRI. To test if accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts in the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRI was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size were assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5–15% to 1–2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7–8 to 5–6, with lung intensity ranging over 0–135. Conclusion: This study indicates that the quality of FOA 3DMRI improves with increasing 3DMRI library size. On-going investigations will test this approach using actual cine 2DMRI and introduce a higher-order approximation for improvements. This study is in part supported by NIH (U54CA137788 and U54CA132378).

  14. Dusty gas with one fluid in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2014-05-01

In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.
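The SPH interpolation underlying this scheme (and the visualization in Py-SPHViewer, described above) can be sketched in its simplest form: estimating density as a kernel-weighted sum over particle masses. This is a hedged 1-D illustration with the standard M4 cubic spline kernel, not the paper's full two-phase implementation.

```python
# 1-D SPH density estimate: rho(x) = sum_j m_j * W(x - x_j, h).

def cubic_spline_kernel(r, h):
    """Standard M4 cubic spline kernel, normalized for 1-D."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, particles, masses, h):
    return sum(m * cubic_spline_kernel(x - xj, h)
               for xj, m in zip(particles, masses))

# Uniform lattice of unit-density particles: spacing 0.1, mass 0.1 each.
xs = [i * 0.1 for i in range(11)]
ms = [0.1] * len(xs)
rho = sph_density(0.5, xs, ms, h=0.15)
print(rho)  # close to 1.0 for a uniform unit-density lattice
```

Any other particle-carried quantity is interpolated the same way with an extra division by density, which is why the one-fluid scheme's "fewer SPH interpolations" translates directly into saved cost.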

  15. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

Medical image three-dimensional (3D) interpolation is an important means to improve the image effect in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared in 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection. Moreover, the characteristics of the wavelet transform and the Sobel operator are exploited to process the sub-images of the wavelet decomposition separately: the Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring the high-frequency content remains undistorted. The target interpolation image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation of real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
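The decompose / process-the-low-frequency-band / reconstruct pattern described above can be sketched with a one-level 1-D Haar transform standing in for the paper's 2-D wavelet transform (the Haar wavelet is the simplest orthogonal choice; the paper's actual basis is not specified in the abstract).

```python
# One-level 1-D Haar analysis and synthesis.

def haar_decompose(xs):
    """Split a signal (even length) into low-frequency and detail bands."""
    approx = [(a + b) / 2.0 for a, b in zip(xs[::2], xs[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(xs[::2], xs[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [4.0, 2.0, 5.0, 7.0]
approx, detail = haar_decompose(signal)
# ...the paper's method would interpolate the low-frequency `approx` band
# here while leaving the high-frequency `detail` band undistorted...
print(haar_reconstruct(approx, detail))  # perfect reconstruction
```

Perfect reconstruction of the untouched bands is what lets the method confine its interpolation (and any artifacts) to the low-frequency sub-images.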

  16. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literatures related to spatial interpolation between 1982 and 2017, which are included in the Web of Science core database, are used as data sources, and the visualization analysis is carried out according to the co-country network, co-category network, co-citation network, keywords co-occurrence network. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development; The cross effect between 11 clustering groups, the main convergence of spatial interpolation theory research, the practical application and case study of spatial interpolation and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and research system framework, interdisciplinary strong, is widely used in various fields.

  17. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions, together with a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While maintaining high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
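A plain-Python sketch of the serial logic that the paper accelerates with CUDA: Gaussian RBF interpolation builds the interpolant f(x) = Σⱼ wⱼ exp(−((x − xⱼ)/ε)²), with weights solved from the interpolation conditions f(xᵢ) = yᵢ. The 1-D data and shape parameter below are illustrative assumptions.

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def grbf_interpolant(xs, ys, eps):
    phi = lambda r: math.exp(-(r / eps) ** 2)
    A = [[phi(xi - xj) for xj in xs] for xi in xs]  # interpolation matrix
    w = gauss_solve(A, ys)
    return lambda x: sum(wj * phi(x - xj) for wj, xj in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]   # samples of y = x**2
f = grbf_interpolant(xs, ys, eps=1.5)
print(f(1.0))  # reproduces the data point, ≈ 1.0
```

The kernel evaluations in the final sum are independent per output pixel, which is exactly the structure that maps well onto CUDA's SIMT model.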

  18. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.

  19. The Richtmyer-Meshkov Instability on a Circular Interface in Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Black, Wolfgang; Maxon, W. Curtis; Denissen, Nicholas; McFarland, Jacob

    2017-11-01

Hydrodynamic instabilities (HI) are ubiquitous in high energy density (HED) applications such as astrophysics, thermonuclear weapons, and inertial fusion. In these systems, fluid mixing is encouraged by the HI, which can reduce the energy yield and eventually drive the system to equilibrium. The Richtmyer-Meshkov (RM) instability is one such HI and is created when a perturbed interface across a density gradient is impulsively accelerated. The physics can be complicated one step further by the inclusion of magnetohydrodynamics (MHD), where HED systems experience the effects of magnetic and electric fields. These systems provide unique challenges and as such can be used to validate hydrodynamic codes capable of predicting HI. The work presented here will outline efforts to study the RM instability in MHD for a circular interface utilizing the hydrocode FLAG, developed at Los Alamos National Laboratory.

  20. Identifying hydrodynamic interaction effects in tethered polymers in uniform flow.

    PubMed

    Kienle, Diego; Rzehak, Roland; Zimmermann, Walter

    2011-06-01

    Using Brownian dynamics simulations, we investigate how hydrodynamic interaction (HI) affects the behavior of tethered polymers in uniform flow. While it is expected that the HI within the polymer will lead to a dependency of the polymer's drag coefficient on the flow velocity, the interchain HI causes additional screening effects. For the case of two polymers in uniform flow with their tether points a finite distance apart, it is shown that the interchain HI not only causes a further reduction of the drag per polymer with decreasing distance between the tether points but simultaneously induces a polymer-polymer attraction as well. This attraction exhibits a characteristic maximum at intermediate flow velocities when the drag forces are of the order of the entropic forces. The effects uniquely attributed to the presence of HI can be verified experimentally.

  1. Negative local resistance caused by viscous electron backflow in graphene.

    PubMed

    Bandurin, D A; Torre, I; Krishna Kumar, R; Ben Shalom, M; Tomadin, A; Principi, A; Auton, G H; Khestanova, E; Novoselov, K S; Grigorieva, I V; Ponomarenko, L A; Geim, A K; Polini, M

    2016-03-04

    Graphene hosts a unique electron system in which electron-phonon scattering is extremely weak but electron-electron collisions are sufficiently frequent to provide local equilibrium above the temperature of liquid nitrogen. Under these conditions, electrons can behave as a viscous liquid and exhibit hydrodynamic phenomena similar to classical liquids. Here we report strong evidence for this transport regime. We found that doped graphene exhibits an anomalous (negative) voltage drop near current-injection contacts, which is attributed to the formation of submicrometer-size whirlpools in the electron flow. The viscosity of graphene's electron liquid is found to be ~0.1 square meters per second, an order of magnitude higher than that of honey, in agreement with many-body theory. Our work demonstrates the possibility of studying electron hydrodynamics using high-quality graphene. Copyright © 2016, American Association for the Advancement of Science.

  2. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

An image interpolation problem is encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology may be classical or based on neural networks, depending on the demands of the specific interpolation problem.

  3. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas.
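The first family of methods compared above, IDW, is simple enough to sketch directly; the `power` argument corresponds to the exponents 1, 2, 3 examined in the study. The sample coordinates and concentrations here are invented.

```python
# Inverse-distance-weighting (IDW) estimate at a query point.

def idw(x, y, samples, power=2):
    """samples: list of (xi, yi, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0), (1, 1, 40.0)]
print(idw(0.5, 0.5, samples))  # equidistant point → mean of the four values, 25.0
```

Higher powers localize the estimate around the nearest samples, which is why the optimal exponent depends on how strongly the PTE field varies near the mines.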
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). These results indicate that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
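A hedged sketch of the BAV-selection step only: rank candidate auxiliary variables by the absolute Pearson correlation with the target soil nutrient. The data below are invented; only the selection logic reflects the paper's approach, and the full Cokriging step is not reproduced.

```python
# Rank candidate auxiliary variables by |Pearson correlation| with the target.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = [2.1, 2.9, 3.8, 5.2, 6.1]                 # e.g. organic matter samples
candidates = {
    "elevation": [10.0, 9.1, 8.2, 6.9, 6.0],       # strongly anti-correlated
    "zinc":      [1.0, 1.4, 1.9, 2.6, 3.0],        # strongly correlated
    "slope":     [3.0, 1.0, 4.0, 1.0, 5.0],        # weakly correlated
}
ranked = sorted(candidates, key=lambda k: -abs(pearson(target, candidates[k])))
print(ranked)  # "slope" ranks last (the weakest correlate → a PAV)
```

Variables at the top of the ranking would serve as BAVs in the Cokriging cross-variogram, while those at the bottom are the PAVs the paper uses as a control.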

  5. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for the rational cubic interpolant are derived and, visually, the proposed rational cubic Timmer interpolant gives very pleasing results.

  6. The utility of bathymetric echo sounding data in modelling benthic impacts using NewDEPOMOD driven by an FVCOM model.

    NASA Astrophysics Data System (ADS)

    Rochford, Meghan; Black, Kenneth; Aleynik, Dmitry; Carpenter, Trevor

    2017-04-01

    The Scottish Environmental Protection Agency (SEPA) are currently implementing new regulations for consenting developments at new and pre-existing fish farms. Currently, a 15-day current record from multiple depths at one location near the site is required to run DEPOMOD, a depositional model used to determine the depositional footprint of waste material from fish farms, developed by Cromey et al. (2002). The present project involves modifying DEPOMOD to accept data from 3D hydrodynamic models to allow for a more accurate representation of the currents around the farms. Bathymetric data are key boundary conditions for accurate modelling of current velocity data. The aim of the project is to create a script that will use the outputs from FVCOM, a 3D hydrodynamic model developed by Chen et al. (2003), and input them into NewDEPOMOD (a new version of DEPOMOD with more accurately parameterised sediment transport processes) to determine the effect of a fish farm on the surrounding environment. This study compares current velocity data under two scenarios; the first, using interpolated bathymetric data, and the second using bathymetric data collected during a bathymetric echo sounding survey of the site. Theoretically, if the hydrodynamic model is of high enough resolution, the two scenarios should yield relatively similar results. However, the expected result is that the survey data will be of much higher resolution and therefore of better quality, producing more realistic velocity results. The improvement of bathymetric data will also improve sediment transport predictions in NewDEPOMOD. This work will determine the sensitivity of model predictions to bathymetric data accuracy at a range of sites with varying bathymetric complexity and thus give information on the potential costs and benefits of echo sounding survey data inputs. Chen, C., Liu, H. and Beardsley, R.C., 2003. 
An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model: application to coastal ocean and estuaries. Journal of Atmospheric and Oceanic Technology, 20(1), pp.159-186. Cromey, C.J., Nickell, T.D. and Black, K.D., 2002. DEPOMOD—modelling the deposition and biological effects of waste solids from marine cage farms. Aquaculture, 214(1), pp.211-239.

  7. Experimenting with the GMAO 4D Data Assimilation

    NASA Technical Reports Server (NTRS)

    Todling, R.; El Akkraoui, A.; Errico, R. M.; Guo, J.; Kim, J.; Kliest, D.; Parrish, D. F.; Suarez, M.; Trayanov, A.; Tremolet, Yannick; hide

    2012-01-01

The Global Modeling and Assimilation Office (GMAO) has been working to promote its prototype four-dimensional variational (4DVAR) system to a version that can be exercised at operationally desirable configurations. Beyond a general circulation model (GCM) and an analysis system, traditional 4DVAR requires availability of tangent linear (TL) and adjoint (AD) models of the corresponding GCM. The GMAO prototype 4DVAR uses the finite-volume-based GEOS GCM and the Grid-point Statistical Interpolation (GSI) system for the first two, and TL and AD models derived from an early version of the finite-volume hydrodynamics that is scientifically equivalent to the present GEOS nonlinear GCM but computationally rather outdated. Specifically, the TL and AD models' hydrodynamics uses a simple (1-dimensional) latitudinal MPI domain decomposition, which has consequent low scalability and prevents the prototype 4DVAR from being used in realistic applications. In the near future, GMAO will be upgrading its operational GEOS GCM (and assimilation system) to use a cubed-sphere-based hydrodynamics. This version of the dynamics scales to thousands of processes and has led to a decision to re-derive the TL and AD models for this more modern dynamics, thus taking advantage of a two-dimensional MPI decomposition and improved scalability properties. With the aid of the Transformation of Algorithms in FORTRAN (TAF) automatic adjoint generation tool and some hand-coding, a version of the cubed-sphere-based TL and AD models, with a simplified vertical diffusion scheme, is now available, enabling multiple configurations of standard implementations of 4DVAR in GEOS. Concurrent to this development, collaboration with the National Centers for Environmental Prediction (NCEP) and the Earth System Research Laboratory (ESRL) has allowed GMAO to implement a hybrid-ensemble capability within the GEOS data assimilation system.
Both 3D- and 4D-ensemble capabilities are presently available, thus allowing GMAO to evaluate the performance and benefit of various ensemble and variational assimilation strategies. This presentation will cover the most recent developments taking place at GMAO and show results from various comparisons, from traditional techniques to more recent ensemble-based ones.

  8. Assessment of Flood Mitigation Solutions Using a Hydrological Model and Refined 2D Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Khuat Duy, B.; Archambeau, P.; Dewals, B. J.; Erpicum, S.; Pirotton, M.

    2009-04-01

Following recurrent inundation problems on the Berwinne catchment, in Belgium, a combined hydrologic and hydrodynamic study has been carried out in order to find adequate solutions for flood mitigation. Thanks to detailed 2D simulations, the effectiveness of the solutions can be assessed not only in terms of discharge and height reductions in the river, but also with respect to other aspects such as the reduction of inundated surfaces and the decrease in inundated buildings and roads. The study is carried out in successive phases. First, the hydrological runoffs are generated using a physically based and spatially distributed multi-layer model solving depth-integrated equations for overland flow, subsurface flow and baseflow. Real flood events are simulated using rainfall series collected at 8 stations (over 20 years of available data). The hydrological inputs are routed through the river network (and through the sewage network where relevant) with the 1D component of the modelling system, which solves the Saint-Venant equations for both free-surface and pressurized flows in a unified way. On the main part of the river, the measured river cross-sections are included in the modelling, and existing structures along the river (such as bridges, sluices or pipes) are modelled explicitly with specific cross-sections. Two gauging stations with over 15 years of continuous measurements allow the calibration of both the hydrologic and hydrodynamic models. Second, the flood mitigation solutions are tested in simulations of an extreme flooding event, and their effects are assessed using detailed 2D simulations of a few selected sensitive areas. The digital elevation model comes from an airborne laser survey with a spatial resolution of 1 point per square metre and is completed in the river bed with a bathymetry interpolated from cross-section data. The upstream discharge is extracted from the 1D simulation for the selected rainfall event.
This methodology made it possible to assess the suggested solutions against multiple effectiveness criteria, and the study therefore constitutes a very useful support for decision makers.

  9. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    NASA Astrophysics Data System (ADS)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal-flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographical map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach to the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area, collected at different water levels within a five-month period (31 December 2013-28 May 2014), was used to extract waterlines based on feature-extraction techniques and further manual modification. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. This new DEM was then used as input to the hydrodynamic model, and a new round of water-level assignment to the waterlines was performed. A third and final output DEM was generated covering an area of approximately 1900 km² of tidal flats in the RSR. 
The water-level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m based on six transects of measured data. This study aimed at constructing an accurate DEM for a large-scale, highly variable zone within a short timespan, based on an iterative application of the waterline method.
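The iterative loop described above can be sketched as follows. `griddata` stands in for the ANUDEM interpolator, and `level_model` is a hypothetical placeholder for the hydrodynamic water-level assignment; both names are assumptions for illustration, not from the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def build_dem(waterline_points, water_levels, grid_x, grid_y):
    """Grid heighted waterline points into a DEM (stand-in for ANUDEM).

    waterline_points: (N, 2) array of (x, y) positions along extracted waterlines
    water_levels:     (N,) water level assigned to each point by the tide model
    """
    pts = np.asarray(waterline_points, dtype=float)
    z = np.asarray(water_levels, dtype=float)
    return griddata(pts, z, (grid_x, grid_y), method="linear")

def iterate_dem(waterline_sets, level_model, grid_x, grid_y, n_iter=3):
    """Repeat: assign levels to waterlines using the current DEM, then re-grid.

    level_model(waterline, dem) is a hypothetical hydrodynamic stand-in;
    on the first pass dem is None (the 'initial generalized DEM' case).
    """
    dem = None
    for _ in range(n_iter):
        pts, levels = [], []
        for wl in waterline_sets:
            pts.append(wl)
            levels.append(level_model(wl, dem))
        dem = build_dem(np.vstack(pts), np.concatenate(levels), grid_x, grid_y)
    return dem
```

With a linear interpolator and a linear true surface, the reconstructed DEM matches the surface exactly inside the convex hull of the waterline points.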

  10. Earthquake Response of Concrete Gravity Dams Including Hydrodynamic and Foundation Interaction Effects,

    DTIC Science & Technology

    1980-01-01

standard procedure for analysis of all types of civil engineering structures. Early in its development, it became apparent that this method had...unique potentialities in the evaluation of stress in dams, and many of its earliest civil engineering applications concerned special problems associated...with such structures [3,4]. The earliest dynamic finite element analyses of civil engineering structures involved the earthquake response analysis of

  11. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
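As an illustration of the simplest method the report lists, a piecewise-bilinear interpolator on a rectangular mesh can be sketched as below. This is plain Python rather than LIP's ANSI C object interface, and the function name is hypothetical; it only demonstrates the method, not the package API.

```python
import numpy as np

def bilinear(xg, yg, f, x, y):
    """Piecewise-bilinear interpolation on a rectangular mesh.

    xg, yg: strictly increasing 1-D mesh coordinates
    f:      2-D array with f[i, j] = value at (xg[i], yg[j])
    """
    # Locate the cell containing (x, y); clip so boundary points stay in range.
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])   # local coordinates in [0, 1]
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    # Blend the four corner values of the cell.
    return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
            + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])
```

Bilinear interpolation reproduces any function of the form a + bx + cy + dxy exactly on each cell, which makes it easy to sanity-check.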

  12. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  13. Flood hazards analysis based on changes of hydrodynamic processes in fluvial systems of Sao Paulo, Brazil.

    NASA Astrophysics Data System (ADS)

    Simas, Iury; Rodrigues, Cleide

    2016-04-01

The metropolis of São Paulo, with its 7,940 km² and over 20 million inhabitants, is increasingly being consolidated with disregard for the dynamics of its fluvial systems and the natural limitations imposed by fluvial terraces, floodplains and slopes. Events such as floods and flash floods have become particularly persistent, mainly in socially and environmentally vulnerable areas. The Aricanduva River basin was selected as the ideal area for the development of the flood hazard analysis since it presents the main geological and geomorphological features found in the urban site. According to studies carried out under the Anthropic Geomorphology approach in São Paulo, studying this phenomenon requires taking into account the original hydromorphological systems and their functional conditions, as well as the dimensions in which the anthropic factor changes the balance between the main variables of surface processes. Considering those principles, an alternative geographical data model was proposed that made it possible to identify the role of different driving forces in the spatial conditioning of certain flood events. Spatial relationships between different variables, such as anthropogenic and original morphology, were analyzed for that purpose in addition to climate data. The surface hydrodynamic tendency spatial model conceived for this study takes as key variables: 1- the land use present at the observed date combined with the predominant lithological group, represented by a value ranging 0-100, based on indexes of the National Soil Conservation Service (NSCS-USA) and the Hydraulic Technology Center Foundation (FCTH-Brazil), to determine the resulting balance of runoff/infiltration; 2- the original slope (as a percentage), applying thresholds above which a greater tendency for runoff can be determined; 3- the minimal features of relief, combining the curvature of the surface in plan and profile. 
Those three key variables were combined in a Geographic Information System in a series of tests to obtain weighted values, defining fuzzy limits in the resulting matrix. For comparison purposes, this method made it possible to create surface hydrodynamic tendency charts for different periods of urban consolidation. Considerable changes of surface hydrodynamic tendencies in the study area were identified, especially the expected positive tendency change for runoff due to the currently predominant urban land uses. Furthermore, the model enabled an associated analysis with interpolated pluvial values, identifying and quantifying, in terms of runoff volume increase, the influence of occupied areas on the occurrence of floods in areas not previously known to be affected.
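The weighted combination of the three key variables can be sketched as a raster overlay. The weights, the slope threshold, and the curvature scoring below are purely illustrative assumptions; the study's calibrated values are not reproduced here.

```python
import numpy as np

def runoff_tendency(cn_index, slope_pct, curvature,
                    weights=(0.5, 0.3, 0.2), slope_threshold=8.0):
    """Weighted overlay of the three key variables (illustrative weights).

    cn_index:  0-100 runoff/infiltration index (land use x lithology)
    slope_pct: original slope, as a percentage
    curvature: combined plan/profile curvature (negative = concave)
    Returns a 0-1 runoff-tendency score per cell.
    """
    v1 = np.asarray(cn_index, float) / 100.0                  # normalize index
    v2 = np.clip(np.asarray(slope_pct, float) / slope_threshold, 0.0, 1.0)
    v3 = 1.0 / (1.0 + np.exp(np.asarray(curvature, float)))   # concave cells score higher
    w = np.asarray(weights, float) / np.sum(weights)
    return w[0] * v1 + w[1] * v2 + w[2] * v3
```

The same function applies elementwise to whole raster arrays, which is how such charts are typically produced for different urbanization periods.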

  14. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTM has numerous applications in the fields of science, engineering, design and various project administrations. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods for interpolation, whose results vary with environmental conditions and input data. The usual interpolation methods used in this study, polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of these interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation method for creating elevations. The results showed that AI methods have a high potential in the interpolation of elevations: using artificial neural network algorithms for the interpolation, and optimising the IDW method with GA, made it possible to estimate elevations with high precision.
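A baseline Inverse Distance Weighting interpolator, the method the study optimises with GA, can be sketched as follows. The `power` exponent is the usual tunable parameter (the quantity a GA would search over); this is a generic sketch, not the authors' code.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights fall off as 1 / d**power.

    points: (N, 2) sample coordinates;  values: (N,) elevations
    query:  (2,) location at which to estimate elevation
    """
    pts = np.asarray(points, float)
    z = np.asarray(values, float)
    d = np.linalg.norm(pts - np.asarray(query, float), axis=1)
    if d.min() < eps:               # query coincides with a sample point
        return float(z[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * z) / np.sum(w))
```

IDW honours the sample values exactly and averages them elsewhere, with `power` controlling how local the estimate is.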

  15. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi): i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
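The core EBI property, an interpolant that passes exactly through every given example, can be illustrated with a radial-basis-function interpolant. This is a generic stand-in for the mechanism (the paper's exact kernel and extension are not reproduced here), with a Gaussian kernel and assumed width `sigma`.

```python
import numpy as np

def rbf_fit(x_examples, f_examples, sigma=1.0):
    """Solve for weights so the interpolant passes through every example exactly."""
    X = np.atleast_2d(np.asarray(x_examples, float))
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel matrix (positive definite)
    return np.linalg.solve(Phi, np.asarray(f_examples, float))

def rbf_eval(x_examples, weights, x, sigma=1.0):
    """Evaluate the interpolant at a new input x (e.g. a novel viewpoint)."""
    X = np.atleast_2d(np.asarray(x_examples, float))
    d2 = np.sum((X - np.asarray(x, float)) ** 2, axis=-1)
    return float(np.exp(-d2 / (2 * sigma ** 2)) @ weights)
```

In the NVS reading, `x_examples` are sample viewpoints and `f_examples` the corresponding (vectorized) images; the same solve then yields a smooth prediction between viewpoints.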

  16. Positivity-preserving High Order Finite Difference WENO Schemes for Compressible Euler Equations

    DTIC Science & Technology

    2011-07-15

the WENO reconstruction. We assume that there is a polynomial vector q_i(x) = (ρ_i(x), m_i(x), E_i(x))^T with degree k which are (k+1)-th order accurate...w_{i+1/2} = q_i(x_{i+1/2}). The existence of such polynomials can be established by interpolation for WENO schemes. For example, for the fifth-order WENO scheme, there is a unique vector of polynomials of degree four q_i(x) satisfying q_i(x_{i-1/2}) = w^+_{i-1/2}, q_i(x_{i+1/2}) = w^-_{i+1/2} and (1/∆x) ∫_{I_j} q_i

  17. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
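The ordering the patent describes, transform first and a simplified interpolation afterwards, can be illustrated with a toy per-line resampling. This is only an ordering sketch under assumed names (`old_axis`, `new_axis`); it is not the patented algorithm, and a real implementation would use a higher-order interpolation kernel than `np.interp`.

```python
import numpy as np

def azimuth_transform_then_interp(data, new_axis, old_axis):
    """Azimuth Fourier transform first, then 1-D interpolation of each line.

    data: 2-D complex array, axis 0 = range, axis 1 = azimuth
    """
    spec = np.fft.fft(data, axis=1)          # azimuth transform
    out = np.empty((data.shape[0], len(new_axis)), dtype=complex)
    for r in range(data.shape[0]):
        # np.interp is real-valued, so interpolate the parts separately.
        out[r] = (np.interp(new_axis, old_axis, spec[r].real)
                  + 1j * np.interp(new_axis, old_axis, spec[r].imag))
    return out
```

Because the interpolation now acts on transformed, regularly ordered data, it reduces to an independent 1-D operation per line, which is the simplification the abstract alludes to.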

  18. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).

  19. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…

  20. A coupled ALE-AMR method for shock hydrodynamics

    DOE PAGES

    Waltz, J.; Bakosi, J.

    2018-03-05

We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  1. A coupled ALE-AMR method for shock hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waltz, J.; Bakosi, J.

We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  2. Large time behavior of entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices

    NASA Astrophysics Data System (ADS)

    Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan

    2018-06-01

We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations, in a bounded interval. In this paper, we first prove the global existence of entropy solutions by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or the doping profile. Moreover, the novelty of this paper is the uniform bound with respect to time for the weak solutions of the isentropic Euler-Poisson system.
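For orientation, the one-dimensional unipolar hydrodynamic (Euler-Poisson) model is commonly written in the following form; the notation here (pressure p, doping profile b, relaxation term −ρu) is a standard convention and the paper's precise scaling may differ:

```latex
\begin{aligned}
&\rho_t + (\rho u)_x = 0,\\
&(\rho u)_t + \bigl(\rho u^2 + p(\rho)\bigr)_x = \rho E - \rho u,\\
&E_x = \rho - b(x),
\end{aligned}
```

where ρ is the electron density, u the velocity, E the electric field, and b(x) the doping profile; the −ρu term models momentum relaxation.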

  3. Nanofluid of graphene-based amphiphilic Janus nanosheets for tertiary or enhanced oil recovery: High performance at low concentration

    PubMed Central

    Luo, Dan; Wang, Feng; Zhu, Jingyi; Cao, Feng; Liu, Yuan; Li, Xiaogang; Willson, Richard C.; Yang, Zhaozhong; Chu, Ching-Wu; Ren, Zhifeng

    2016-01-01

    The current simple nanofluid flooding method for tertiary or enhanced oil recovery is inefficient, especially when used with low nanoparticle concentration. We have designed and produced a nanofluid of graphene-based amphiphilic nanosheets that is very effective at low concentration. Our nanosheets spontaneously approached the oil–water interface and reduced the interfacial tension in a saline environment (4 wt % NaCl and 1 wt % CaCl2), regardless of the solid surface wettability. A climbing film appeared and grew at moderate hydrodynamic condition to encapsulate the oil phase. With strong hydrodynamic power input, a solid-like interfacial film formed and was able to return to its original form even after being seriously disturbed. The film rapidly separated oil and water phases for slug-like oil displacement. The unique behavior of our nanosheet nanofluid tripled the best performance of conventional nanofluid flooding methods under similar conditions. PMID:27354529

  4. BIOMECHANICS. Jumping on water: Surface tension-dominated jumping of water striders and robotic insects.

    PubMed

    Koh, Je-Sung; Yang, Eunjin; Jung, Gwang-Pil; Jung, Sun-Pill; Son, Jae Hak; Lee, Sang-Im; Jablonski, Piotr G; Wood, Robert J; Kim, Ho-Young; Cho, Kyu-Jin

    2015-07-31

    Jumping on water is a unique locomotion mode found in semi-aquatic arthropods, such as water striders. To reproduce this feat in a surface tension-dominant jumping robot, we elucidated the hydrodynamics involved and applied them to develop a bio-inspired impulsive mechanism that maximizes momentum transfer to water. We found that water striders rotate the curved tips of their legs inward at a relatively low descending velocity with a force just below that required to break the water surface (144 millinewtons/meter). We built a 68-milligram at-scale jumping robotic insect and verified that it jumps on water with maximum momentum transfer. The results suggest an understanding of the hydrodynamic phenomena used by semi-aquatic arthropods during water jumping and prescribe a method for reproducing these capabilities in artificial systems. Copyright © 2015, American Association for the Advancement of Science.

  5. Hydrofocusing Bioreactor for Three-Dimensional Cell Culture

    NASA Technical Reports Server (NTRS)

    Gonda, Steve R.; Spaulding, Glenn F.; Tsao, Yow-Min D.; Flechsig, Scott; Jones, Leslie; Soehnge, Holly

    2003-01-01

    The hydrodynamic focusing bioreactor (HFB) is a bioreactor system designed for three-dimensional cell culture and tissue-engineering investigations on orbiting spacecraft and in laboratories on Earth. The HFB offers a unique hydrofocusing capability that enables the creation of a low-shear culture environment simultaneously with the "herding" of suspended cells, tissue assemblies, and air bubbles. Under development for use in the Biotechnology Facility on the International Space Station, the HFB has successfully grown large three-dimensional, tissuelike assemblies from anchorage-dependent cells and grown suspension hybridoma cells to high densities. The HFB, based on the principle of hydrodynamic focusing, provides the capability to control the movement of air bubbles and removes them from the bioreactor without degrading the low-shear culture environment or the suspended three-dimensional tissue assemblies. The HFB also provides unparalleled control over the locations of cells and tissues within its bioreactor vessel during operation and sampling.

  6. Nanofluid of graphene-based amphiphilic Janus nanosheets for tertiary or enhanced oil recovery: High performance at low concentration.

    PubMed

    Luo, Dan; Wang, Feng; Zhu, Jingyi; Cao, Feng; Liu, Yuan; Li, Xiaogang; Willson, Richard C; Yang, Zhaozhong; Chu, Ching-Wu; Ren, Zhifeng

    2016-07-12

    The current simple nanofluid flooding method for tertiary or enhanced oil recovery is inefficient, especially when used with low nanoparticle concentration. We have designed and produced a nanofluid of graphene-based amphiphilic nanosheets that is very effective at low concentration. Our nanosheets spontaneously approached the oil-water interface and reduced the interfacial tension in a saline environment (4 wt % NaCl and 1 wt % CaCl2), regardless of the solid surface wettability. A climbing film appeared and grew at moderate hydrodynamic condition to encapsulate the oil phase. With strong hydrodynamic power input, a solid-like interfacial film formed and was able to return to its original form even after being seriously disturbed. The film rapidly separated oil and water phases for slug-like oil displacement. The unique behavior of our nanosheet nanofluid tripled the best performance of conventional nanofluid flooding methods under similar conditions.

  7. Processing of airborne laser scanning data to generate accurate DTM for floodplain wetland

    NASA Astrophysics Data System (ADS)

    Szporak-Wasilewska, Sylwia; Mirosław-Świątek, Dorota; Grygoruk, Mateusz; Michałowski, Robert; Kardel, Ignacy

    2015-10-01

Structure of the floodplain, especially its topography and vegetation, influences the overland flow and dynamics of floods, which are key factors shaping ecosystems in surface-water-fed wetlands. Therefore, elaboration of a digital terrain model (DTM) of high spatial accuracy is crucial in hydrodynamic flow modelling in river valleys. In this study the research was conducted in a unique Central European complex of fens and marshes, the Lower Biebrza river valley. The area is represented mainly by peat ecosystems, which according to the EU Water Framework Directive (WFD) are called "water-dependent ecosystems". Development of an accurate DTM in these areas, which are overgrown by dense wetland vegetation consisting of alder forest, willow shrubs, reed, sedges and grass, is very difficult; therefore, to represent the terrain with high accuracy, airborne laser scanning (ALS) data with a scanning density of 4 points/m² were used and a correction of the "vegetation effect" on the DTM was executed. This correction was performed utilizing remotely sensed images, a topographical survey using Real Time Kinematic positioning, and vegetation height measurements. In order to classify different types of vegetation within the research area, object-based image analysis (OBIA) was used. OBIA allowed partitioning remotely sensed imagery into meaningful image-objects and assessing their characteristics through spatial and spectral scale. The final maps of vegetation patches, which include attributes of vegetation height and vegetation spectral properties, utilized both the laser scanning data and the vegetation indices developed on the basis of airborne and satellite imagery. These data were used in the processes of segmentation, attribution and classification. Several different vegetation indices were tested to distinguish different types of vegetation in the wetland area. The OBIA classification allowed correction of the "vegetation effect" on the DTM. 
The final digital terrain model was compared and examined within the distinguished land cover classes (formed mainly by natural vegetation of the river valley) with archival height models developed through interpolation of ground points measured with GPS RTK, and also with elevation models from the ASTER-GDEM and SRTM programs. The research presented in this paper improved the quality of hydrodynamic modelling in the surface-water-fed wetlands protected within Biebrza National Park. Additionally, the comparison with other digital terrain models demonstrated the importance of accurate topography products in such modelling. The ALS data also significantly improved the accuracy and currency of the mapped course of the river Biebrza, its tributaries and the location of numerous oxbows typical of this part of the river valley in comparison to previously available data. These data also helped to refine the river valley cross-sections, designate river banks and develop the slope map of the research area.

  8. Coupled B-snake grids and constrained thin-plate splines for analysis of 2-D tissue deformations from tagged MRI.

    PubMed

    Amini, A A; Chen, Y; Curwen, R W; Mani, V; Sun, J

    1998-06-01

    Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization and create tagged patterns within a deforming body such as the heart muscle. The resulting patterns define a time-varying curvilinear coordinate system on the tissue, which we track with coupled B-snake grids. B-spline bases provide local control of shape, compact representation, and parametric continuity. Efficient spline warps are proposed which warp an area in the plane such that two embedded snake grids obtained from two tagged frames are brought into registration, interpolating a dense displacement vector field. The reconstructed vector field adheres to the known displacement information at the intersections, forces corresponding snakes to be warped into one another, and for all other points in the plane, where no information is available, a C1 continuous vector field is interpolated. The implementation proposed in this paper improves on our previous variational-based implementation and generalizes warp methods to include biologically relevant contiguous open curves, in addition to standard landmark points. The methods are validated with a cardiac motion simulator, in addition to in-vivo tagging data sets.
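Thin-plate-spline interpolation of scattered displacement samples, the kind of dense-field reconstruction described above, can be sketched as follows. Each displacement component is fitted as its own scalar TPS; this is a generic unconstrained TPS, not the authors' snake-coupled variant, and the function names are illustrative.

```python
import numpy as np

def tps_fit(pts, vals):
    """Solve for thin-plate-spline coefficients interpolating vals at pts exactly.

    Kernel U(r) = r^2 log r, plus an affine part (1, x, y); the standard
    side conditions make the bending energy minimal (C1-smooth field).
    """
    P = np.asarray(pts, float)
    n = len(P)
    r2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    K = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + (r2 == 0)), 0.0)  # r^2 log r
    A = np.hstack([np.ones((n, 1)), P])                            # affine basis
    L = np.block([[K, A], [A.T, np.zeros((3, 3))]])
    rhs = np.concatenate([np.asarray(vals, float), np.zeros(3)])
    return np.linalg.solve(L, rhs)

def tps_eval(pts, coef, x):
    """Evaluate the fitted spline at a point x = (x, y)."""
    P = np.asarray(pts, float)
    x = np.asarray(x, float)
    r2 = np.sum((P - x) ** 2, axis=-1)
    u = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + (r2 == 0)), 0.0)
    n = len(P)
    return float(u @ coef[:n] + coef[n] + coef[n + 1:] @ x)
```

A useful property for validation: TPS reproduces affine displacement fields exactly, since the kernel weights then vanish.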

  9. Use of loading-unloading compression curves in medical device design

    NASA Astrophysics Data System (ADS)

    Ciornei, M. C.; Alaci, S.; Ciornei, F. C.; Romanu, I. C.

    2017-08-01

The paper presents a method and experimental results regarding the mechanical testing of soft materials. In order to characterize the mechanical behaviour of technological materials used in prostheses, a large number of material constants is required, as well as a comparison to the original tissue. The present paper proposes as methodology the comparison between compression loading-unloading curves corresponding to a soft biological tissue and to a synthetic material. To this purpose, a device was designed based on the principle of the dynamic hardness test. A moving load is applied and the force upon the indenter is controlled during the loading-unloading phases. The load and specimen deformation are recorded simultaneously. A significant contribution of this paper is the interpolation of the experimental data by power-law functions, a difficult task because of the instability of the system of equations to be optimized. Finding the interpolation function was simplified from solving a system of transcendental equations to solving a single equation. The characteristic parameters of the experimental curves must be compared to those corresponding to actual tissue. The tests were performed for two cases: first, using a spherical punch, and second, a flat-ended cylindrical punch.
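The benefit of reducing a transcendental fitting problem can be illustrated, under the simplifying assumption of a single power-law term F = C·d^m (the data below are synthetic and this is not the authors' exact procedure): taking logarithms turns the fit into one linear least-squares problem.

```python
import numpy as np

# Synthetic loading data assumed to follow a power law F = C * d**m.
d = np.linspace(0.1, 2.0, 50)   # indentation depth (hypothetical units)
F = 3.0 * d**1.5                # force

# Fitting F = C*d**m directly is nonlinear and can be unstable; in log space
# it reduces to a single linear fit: log F = log C + m * log d.
m, logC = np.polyfit(np.log(d), np.log(F), 1)
C = np.exp(logC)
```

For noise-free power-law data the exponent and coefficient are recovered exactly, which is why such reductions stabilize otherwise ill-conditioned optimizations.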

  10. The Global Modeling and Assimilation Office (GMAO) 4d-Var and its Adjoint-based Tools

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Tremolet, Yannick

    2008-01-01

The fifth generation of the Goddard Earth Observing System (GEOS-5) Data Assimilation System (DAS) is a 3d-var system that uses the Grid-point Statistical Interpolation (GSI) system developed in collaboration with NCEP, and a general circulation model developed at Goddard, that includes the finite-volume hydrodynamics of GEOS-4 wrapped in the Earth System Modeling Framework and physical packages tuned to provide a reliable hydrological cycle for the integration of the Modern Era Retrospective-analysis for Research and Applications (MERRA). This MERRA system is essentially complete and the next generation GEOS is under intense development. A prototype next generation system is now complete and has been producing preliminary results. This prototype system replaces the GSI-based Incremental Analysis Update procedure with a GSI-based 4d-var which uses the adjoint of the finite-volume hydrodynamics of GEOS-4 together with a vertical diffusion scheme for simplified physics. As part of this development we have kept the GEOS-5 IAU procedure as an option and have added the capability to experiment with a First Guess at the Appropriate Time (FGAT) procedure, thus allowing for at least three modes of running the data assimilation experiments. The prototype system is a large extension of GEOS-5 as it also includes various adjoint-based tools, namely, a forecast sensitivity tool, a singular vector tool, and an observation impact tool that combines the model sensitivity tool with a GSI-based adjoint tool. These features bring the global data assimilation effort at Goddard up to date with technologies used in data assimilation systems at major meteorological centers elsewhere. Various aspects of the next generation GEOS will be discussed during the presentation at the Workshop, and preliminary results will illustrate the discussion.

  11. [Spatial characteristics of grain size of surface sediments in mangrove wetlands in Gaoqiao of Zhanjiang, Guangdong province of South China].

    PubMed

    Zhu, Yao-Jun; Bourgeois, C; Lin, Guang-Xuan; Wu, Xiao-Dong; Guo, Ju-Lan; Guo, Zhi-Hua

    2012-08-01

Mangrove wetlands are an important type of coastal wetland and also an important sediment trap. Sediment is an essential medium for mangrove recruitment and development; it records the environmental history of mangrove wetlands and can be used to analyze material sources and infer depositional processes, making it essential to the ecological restoration and conservation of mangroves. In this paper, surface sediment samples were collected along a hydrodynamic gradient in Gaoqiao, Zhanjiang Mangrove National Nature Reserve, in 2011. The characteristics of the surface sediments were analyzed based on grain size analysis, and prediction surfaces were generated by geo-statistical methods with ArcGIS 9.2 software. A correlation analysis was also conducted on the sediment organic matter content and the mangrove community structure. In the study area, clay and silt dominated the sediment texture, and the mean contents of sand, silt, and clay were (27.8 +/- 15.4)%, (40.3 +/- 15.4)%, and (32.1 +/- 11.4)%, respectively. The spatial gradient of the sediment characteristics was clearly expressed in the interpolated rasters. With increasing distance from the seawall, the sediment sand content increased, the clay content decreased, and the silt content remained relatively stable. There was a positive correlation between the contents of sediment organic matter and silt, and a negative correlation between the contents of sediment organic matter and sand. More sediment organic matter was found in the high-tide area with weak tidal energy. There were apparent discrepancies in the characteristics of the surface sediments in different biotopes. The sediment characteristics had definite correlations with the community structure of the mangroves, reflecting the complicated relationships between the hydrodynamic conditions and the mangroves.

  12. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon the speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different; their relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias, with a relative difference reaching 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  13. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.

  14. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.

  15. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  16. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
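The temporal component of such a hybrid scheme can be sketched as simple per-pixel linear interpolation across missing dates (a minimal illustration with hypothetical LAI values, not the MODIS production algorithm):

```python
import numpy as np

def fill_temporal(series):
    """Linearly interpolate NaN gaps in a 1-D time series (e.g. per-pixel LAI)."""
    t = np.arange(series.size)
    good = ~np.isnan(series)
    filled = series.copy()
    # np.interp fills each missing time step from its valid neighbours.
    filled[~good] = np.interp(t[~good], t[good], series[good])
    return filled

lai = np.array([1.0, np.nan, 2.0, np.nan, np.nan, 3.5])
print(fill_temporal(lai))  # → [1.  1.5 2.  2.5 3.  3.5]
```

A hybrid method would fall back to a spatial estimate (e.g. from neighbouring pixels of the same land cover) where the temporal record is too sparse for this kind of gap filling.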

  17. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs must be complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Furthermore, incorporating river bathymetry has a greater impact on the 2D model than on the 1D model.
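The basic step of merging surveyed bed elevations into a DEM can be sketched with scattered-data interpolation (the point coordinates and elevations below are hypothetical; the RCMM-based algorithm itself is considerably more elaborate):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical surveyed channel-bed points (x, y) and bed elevations.
pts = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
bed = np.array([95., 94., 96., 95.])

# Interpolate bed elevations onto the DEM grid cells flagged as channel;
# floodplain cells would keep the original DEM surface.
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
bed_grid = griddata(pts, bed, (gx, gy), method='linear')
```

In a real workflow the interpolated channel surface would then be burned into the DEM raster only within the digitized channel polygon.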

  18. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
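Trilinear interpolation itself is compact to state: the value at a fractional position is the weighted sum of the 8 surrounding voxels, with weights given by the fractional distances along each axis. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def trilinear(vol, x, y, z):
    """Trilinear interpolation of a 3-D volume at fractional coordinates."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    # Weight each of the 8 corner voxels by its fractional-distance product.
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = (dx if i else 1 - dx) * \
                    (dy if j else 1 - dy) * \
                    (dz if k else 1 - dz)
                value += w * vol[x0 + i, y0 + j, z0 + k]
    return value

vol = np.arange(8, dtype=float).reshape(2, 2, 2)
print(trilinear(vol, 0.5, 0.5, 0.5))  # → 3.5 (average of the 8 corners)
```

Its speed advantage over distance-weighted schemes comes from touching only 8 fixed neighbours per output voxel instead of an arbitrary-radius neighbourhood.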

  19. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.

  20. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard reconstructed from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to linear interpolation (LI) and BDI, respectively. Image blur, assessed with a head-and-neck phantom, was 20.1% and 24.3% less for DSI than for LI and BDI. Compared to LI and BDI, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, while the image blur induced by interpolation was kept below that of the other interpolation methods.

  1. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including the comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of >10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
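Error statistics such as ME, MAE and RMSE are typically obtained by leave-one-out cross-validation: each well is predicted from the remaining wells and the residuals are summarized. A sketch using a simple IDW interpolator (the well coordinates and levels are hypothetical, and the study's geostatistical methods are of course more involved):

```python
import numpy as np

def idw(xy, z, q, p=2):
    """Inverse distance weighted estimate of z at query point q."""
    d = np.linalg.norm(xy - q, axis=1)
    w = 1.0 / d**p
    return np.sum(w * z) / np.sum(w)

# Hypothetical well locations and groundwater levels.
xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
z = np.array([10., 12., 11., 13., 11.5])

# Leave-one-out: predict each well from the others, collect residuals.
resid = np.array([idw(np.delete(xy, i, 0), np.delete(z, i), xy[i]) - z[i]
                  for i in range(len(z))])
me, mae, rmse = resid.mean(), np.abs(resid).mean(), np.sqrt((resid**2).mean())
```

Comparing such statistics across interpolators, as the study does, guards against picking a method that merely looks smooth but generalizes poorly.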

  2. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
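The final averaging step can be sketched as follows, assuming a precomputed flow field (u, v) mapping one slice onto the next; this is a generic flow-based frame-interpolation sketch, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def flow_interpolate(img_a, img_b, u, v, alpha=0.5):
    """Generate an intermediate slice between img_a and img_b.

    (u, v) is an assumed precomputed flow field from img_a to img_b;
    intensities of corresponding points are blended, weighted by alpha.
    """
    yy, xx = np.mgrid[0:img_a.shape[0], 0:img_a.shape[1]].astype(float)
    # Backward-warp each slice part of the way along the flow, then blend.
    a = map_coordinates(img_a, [yy - alpha * v, xx - alpha * u], order=1)
    b = map_coordinates(img_b, [yy + (1 - alpha) * v,
                                xx + (1 - alpha) * u], order=1)
    return (1 - alpha) * a + alpha * b
```

With alpha = 0.5 this yields the mid-slice; sweeping alpha from 0 to 1 produces a structure-preserving sequence rather than the cross-fade of plain linear interpolation.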

  3. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also varies temporally: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.

4. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
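The extension approach and the k-d tree speedup can be sketched together: time is treated as a third coordinate scaled by a balancing factor, and neighbours are found with scipy's cKDTree (the factor, data and query below are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

# Extension approach: time becomes a third coordinate scaled by a factor c
# that balances spatial and temporal distances (c and the data are assumed).
c = 1.0
obs_xyt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 1.], [1., 1., 1.]])
obs_val = np.array([10., 12., 11., 13.])  # e.g. PM2.5 readings

tree = cKDTree(obs_xyt * [1., 1., c])     # k-d tree over scaled space-time

def st_idw(x, y, t, k=3, p=2):
    """IDW estimate from the k nearest space-time neighbours."""
    d, idx = tree.query([x, y, c * t], k=k)
    w = 1.0 / np.maximum(d, 1e-12)**p
    return np.sum(w * obs_val[idx]) / np.sum(w)
```

The k-d tree reduces each query from a scan of all observations to a logarithmic-time neighbour search, which is where the reported computational improvement for large data sets comes from.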

  5. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146

  6. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a Neural Network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions and, once this is done, the inverse method will compute the exact solution for this problem. The use of neural networks is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.

  7. On the feasibility to integrate low-cost MEMS accelerometers and GNSS receivers

    NASA Astrophysics Data System (ADS)

    Benedetti, Elisa; Dermanis, Athanasios; Crespi, Mattia

    2017-06-01

The aim of this research was to investigate the feasibility of merging the benefits offered by low-cost GNSS and MEMS accelerometer technology, in order to promote the diffusion of low-cost monitoring solutions. A merging approach was set up at the level of the combination of kinematic results (velocities and displacements) coming from the two kinds of sensors, whose observations were processed separately, following the so-called loose integration, which is simpler and more flexible with respect to easily changing the combined sensors. First, the issues related to the differences in reference systems, time systems, and measurement rates and epochs between the two sensors were addressed. An approach was designed and tested to transform the outcomes from GPS and MEMS into unique reference and time systems and to interpolate the usually (much) denser MEMS observations to common (GPS) epochs. The proposed approach was limited to a time-independent (constant) orientation of the MEMS reference system with respect to the GPS one. Then, a data fusion approach based on the Discrete Fourier Transform and cubic spline interpolation was proposed for both velocities and displacements: the MEMS- and GPS-derived solutions are first separated by a rectangular filter in the spectral domain, and then back-transformed and combined through a cubic spline interpolation. Accuracies of around 5 mm for slow and fast displacements and better than 2 mm/s for velocities were assessed. The obtained solution paves the way for a powerful and appealing use of low-cost single-frequency GNSS receivers and MEMS accelerometers in structural and ground monitoring applications. Some additional remarks and prospects for future investigations complete the paper.
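The rectangular-filter fusion can be sketched with an FFT on a common time base (the cutoff frequency and signals below are hypothetical; the paper additionally performs cubic-spline resampling to align the epochs):

```python
import numpy as np

# Spectral fusion sketch: keep the low-frequency content of the GPS-derived
# series and the high-frequency content of the MEMS-derived series, split by
# a rectangular filter at an assumed cutoff, then recombine.
fs = 100.0                                       # common sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
gps = np.sin(2 * np.pi * 0.2 * t)                # slow displacement
mems = gps + 0.01 * np.sin(2 * np.pi * 20 * t)   # adds fast vibration

cutoff = 1.0                                     # Hz, assumed filter edge
f = np.fft.rfftfreq(t.size, 1 / fs)
fused = np.fft.irfft(np.fft.rfft(gps) * (f < cutoff)
                     + np.fft.rfft(mems) * (f >= cutoff), n=t.size)
```

Each sensor thus contributes in the band where it is most reliable: GNSS for slow drifts, the accelerometer-derived series for fast vibrations.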

  8. Comparing cosmological hydrodynamic simulations with observations of high- redshift galaxy formation

    NASA Astrophysics Data System (ADS)

    Finlator, Kristian Markwart

    We use cosmological hydrodynamic simulations to study the impact of outflows and radiative feedback on high-redshift galaxies. For outflows, we consider simulations that assume (i) no winds, (ii) a "constant-wind" model in which the mass-loading factor and outflow speed are constant, and (iii) "momentum-driven" winds in which both parameters vary smoothly with mass. In order to treat radiative feedback, we develop a moment-based radiative transfer technique that operates in both post-processing and coupled radiative hydrodynamic modes. We first ask how outflows impact the broadband spectral energy distributions (SEDs) of six observed reionization-epoch galaxies. Simulations reproduce five regardless of the outflow prescription, while the sixth suggests an unusually bursty star formation history. We conclude that (i) simulations broadly account for available constraints on reionization-epoch galaxies, (ii) individual SEDs do not constrain outflows, and (iii) SED comparisons efficiently isolate objects that challenge simulations. We next study how outflows impact the galaxy mass metallicity relation (MZR). Momentum-driven outflows uniquely reproduce observations at z = 2. In this scenario, galaxies obey two equilibria: (i) The rate at which a galaxy processes gas into stars and outflows tracks its inflow rate; and (ii) The gas enrichment rate owing to star formation balances the dilution rate owing to inflows. Combining these conditions indicates that the MZR is dominated by the (instantaneous) variation of outflows with mass, with more-massive galaxies driving less gas into outflows per unit stellar mass formed. Turning to radiative feedback, we use post-processing simulations to study the topology of reionization. Reionization begins in overdensities and then "leaks" directly into voids, with filaments reionizing last owing to their high density and low emissivity. This result conflicts with previous findings that voids ionize last. 
We argue that this owes to the uniquely biased emissivity field produced by our star formation prescriptions, which have previously been shown to reproduce numerous post-reionization constraints. Finally, preliminary results from coupled radiative hydrodynamic simulations indicate that reionization suppresses the star formation rate density by at most 10-20% by z = 5. This is much less than previous estimates, a difference we attribute to our unique reionization topology, although confirmation will have to await more detailed modeling.

  9. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
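    The LOOCV selection criterion can be illustrated with ordinary polynomial least squares: each observation is held out in turn, the remaining points are refit, and the squared prediction error at the held-out point is averaged. The sketch below is illustrative only (plain least squares via normal equations, not the authors' RND pipeline); the degrees and data are assumptions.

    ```python
    def polyfit(xs, ys, deg):
        """Least-squares polynomial fit via normal equations with partial pivoting."""
        n = deg + 1
        A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
        for col in range(n):  # Gaussian elimination
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, n):
                f = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        coef = [0.0] * n
        for i in reversed(range(n)):  # back substitution
            coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
        return coef

    def peval(coef, x):
        return sum(c * x ** i for i, c in enumerate(coef))

    def loocv_error(xs, ys, deg):
        """Mean squared leave-one-out prediction error for a degree-`deg` fit."""
        err = 0.0
        for i in range(len(xs)):
            xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
            coef = polyfit(xt, yt, deg)
            err += (peval(coef, xs[i]) - ys[i]) ** 2
        return err / len(xs)
    ```

    Comparing `loocv_error` across candidate degrees selects the model that generalizes best to held-out prices, which is the selection logic the study applies to its RND interpolators.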

  10. High-efficiency single cell encapsulation and size selective capture of cells in picoliter droplets based on hydrodynamic micro-vortices.

    PubMed

    Kamalakshakurup, Gopakumar; Lee, Abraham P

    2017-12-05

    Single cell analysis has emerged as a paradigm shift in cell biology for understanding the heterogeneity of individual cells in a clone for pathological interrogation. Microfluidic droplet technology is a compelling platform for single cell analysis, encapsulating single cells inside picoliter-to-nanoliter (pL-nL) volume droplets. However, one of the primary challenges for droplet-based single cell assays is single cell encapsulation, currently achieved either randomly, as dictated by Poisson statistics, or by hydrodynamic techniques. In this paper, we present an interfacial hydrodynamic technique which initially traps the cells in micro-vortices and later releases them one-to-one into the droplets, controlled by the width of the outer streamline that separates the vortex from the flow through the streaming passage adjacent to the aqueous-oil interface (d_gap). One-to-one encapsulation is achieved at a d_gap equal to the radius of the cell, whereas complete trapping of the cells is realized at a d_gap smaller than the radius of the cell. The unique feature of this technique is that it can perform (1) high-efficiency single cell encapsulation and (2) size-selective capture of cells at low cell loading densities. Here we demonstrate these two capabilities with a 50% single cell encapsulation efficiency and size-selective separation of platelets, RBCs, and WBCs from a 10× diluted blood sample (WBC capture efficiency of 70%). The results suggest a passive, hydrodynamic micro-vortex based technique capable of performing high-efficiency single cell encapsulation for cell-based assays.
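    The Poisson-loading limit mentioned above is easy to quantify: if cells arrive independently with mean occupancy λ per droplet, the single-cell fraction P(1) = λe^(-λ) peaks at λ = 1 at only about 37%, which is why hydrodynamic ordering techniques are attractive. A quick numeric check (illustrative, not taken from the paper):

    ```python
    import math

    def poisson_fraction(lam, k):
        """P(k cells in a droplet) for mean occupancy lam under Poisson loading."""
        return math.exp(-lam) * lam ** k / math.factorial(k)
    ```

    Evaluating at λ = 0.5, 1, and 2 shows the single-cell fraction never exceeds e^(-1) ≈ 0.368 under purely random encapsulation.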

  11. Experimental study and discrete element method simulation of Geldart Group A particles in a small-scale fluidized bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Rabha, Swapna; Verma, Vikrant

    Geldart Group A particles are of great importance in various chemical processes because of advantages such as ease of fluidization, large surface area, and many other unique properties. It is very challenging to model the fluidization behavior of such particles, as widely reported in the literature. In this study, a pseudo-2D experimental column with a width of 5 cm, a height of 45 cm, and a depth of 0.32 cm was developed for detailed measurements of the fluidized bed hydrodynamics of fine particles to facilitate the validation of computational fluid dynamics (CFD) modeling. The hydrodynamics of sieved FCC particles (Sauter mean diameter of 148 µm and density of 1300 kg/m3) and NETL-32D sorbents (Sauter mean diameter of 100 µm and density of 480 kg/m3) were investigated, mainly through visualization by a high-speed camera. Numerical simulations were then conducted using NETL's open source code MFIX-DEM. Both qualitative and quantitative information, including bed expansion, bubble characteristics, and solid movement, was compared between the numerical simulations and the experimental measurements. Furthermore, the cohesive van der Waals force was incorporated in the MFIX-DEM simulations and its influence on the flow hydrodynamics was studied.

  12. Ejection of Metal Particles into Superfluid 4He by Laser Ablation.

    PubMed

    Buelna, Xavier; Freund, Adam; Gonzalez, Daniel; Popov, Evgeny; Eloranta, Jussi

    2016-10-05

    The dynamics following laser ablation of a metal target immersed in superfluid 4He is studied by time-resolved shadowgraph photography. The delayed ejection of hot micrometer-sized particles from the target surface into the liquid was observed indirectly by monitoring the formation and growth of gaseous bubbles around the particles. The experimentally determined average particle velocity distribution appears similar to that previously measured in vacuum but exhibits a sharp cutoff at the speed of sound of the liquid. The propagation of the subsonic particles terminates in slightly elongated, non-spherical gas bubbles residing near the target, whereas faster particles reveal an unusual hydrodynamic response of the liquid. Based on a previously established semi-empirical model developed for macroscopic objects, the ejected transonic particles exhibit supercavitating flow to reduce their hydrodynamic drag. Supersonic particles appear to follow a completely different propagation mechanism, as they leave discrete and semi-continuous bubble trails in the liquid. The relatively low number density of the observed non-spherical gas bubbles indicates that only large, micron-sized particles are visualized in the experiments. Although the unique properties of superfluid helium allow a detailed characterization of these processes, the developed technique can be used to study the hydrodynamic response of any liquid to fast-propagating objects on the micrometer scale.

  13. Radiation hydrodynamical instabilities in cosmological and galactic ionization fronts

    NASA Astrophysics Data System (ADS)

    Whalen, Daniel J.; Norman, Michael L.

    2011-11-01

    Ionization fronts, the sharp radiation fronts behind which H/He ionizing photons from massive stars and galaxies propagate through space, were ubiquitous in the universe from its earliest times. The cosmic dark ages ended with the formation of the first primeval stars and galaxies a few hundred Myr after the Big Bang. Numerical simulations suggest that stars in this era were very massive, 25-500 solar masses, with H(II) regions of up to 30,000 light-years in diameter. We present three-dimensional radiation hydrodynamical calculations that reveal that the I-fronts of the first stars and galaxies were prone to violent instabilities, enhancing the escape of UV photons into the early intergalactic medium (IGM) and forming clumpy media in which supernovae later exploded. The enrichment of such clumps with metals by the first supernovae may have led to the prompt formation of a second generation of low-mass stars, profoundly transforming the nature of the first protogalaxies. Cosmological radiation hydrodynamics is unique because ionizing photons coupled strongly to both gas flows and primordial chemistry at early epochs, introducing a hierarchy of disparate characteristic timescales whose relative magnitudes can vary greatly throughout a given calculation. We describe the adaptive multistep integration scheme we have developed for the self-consistent transport of both cosmological and galactic ionization fronts.

  14. Mating behaviour of Pseudodiaptomus annandalei (Copepoda, Calanoida) in calm and turbulent waters

    NASA Astrophysics Data System (ADS)

    Lee, C. H.; Dahms, H. U.; Cheng, S. H.; Souissi, S.; Schmitt, F. G.; Kumar, R.; Hwang, J. S.

    2009-04-01

    Behavioral observations of male copepods reveal that they commonly follow the footprints of female copepods to find their mates. Female-generated environmental signals are primarily hydromechanical or chemical in nature. The intensity of hydromechanical or chemical signals is affected by the hydrodynamic conditions, which in turn may modulate a copepod's ability to sense signals in its search for mates in the aquatic environment. We studied the patterns and efficiency of mating in the copepod Pseudodiaptomus annandalei under still and turbulent water conditions, during day and night, and in experimental containers of different shapes and volumes. Courtship ability in P. annandalei was found to be a negative function of hydromechanical disturbance, as successful mating was observed in still water only. Under turbulent conditions, males were not able to track a female properly. We record in the present study that both sequential and simultaneous taxis mechanisms are used by male P. annandalei to follow either hydromechanical or chemical signals. Our results further reveal that males follow a signal more accurately characterized as a trail. The ability of P. annandalei males to track a three-dimensional trail appears unique, and possibly depends on the persistence of fluid-borne hydromechanical or chemical signals created in low-Reynolds-number hydrodynamic regimes. Keywords: Mating behavior, Turbulence, Flow, Hydrodynamic conditions

  15. Experimental study and discrete element method simulation of Geldart Group A particles in a small-scale fluidized bed

    DOE PAGES

    Li, Tingwen; Rabha, Swapna; Verma, Vikrant; ...

    2017-09-19

    Geldart Group A particles are of great importance in various chemical processes because of advantages such as ease of fluidization, large surface area, and many other unique properties. It is very challenging to model the fluidization behavior of such particles, as widely reported in the literature. In this study, a pseudo-2D experimental column with a width of 5 cm, a height of 45 cm, and a depth of 0.32 cm was developed for detailed measurements of the fluidized bed hydrodynamics of fine particles to facilitate the validation of computational fluid dynamics (CFD) modeling. The hydrodynamics of sieved FCC particles (Sauter mean diameter of 148 µm and density of 1300 kg/m3) and NETL-32D sorbents (Sauter mean diameter of 100 µm and density of 480 kg/m3) were investigated, mainly through visualization by a high-speed camera. Numerical simulations were then conducted using NETL's open source code MFIX-DEM. Both qualitative and quantitative information, including bed expansion, bubble characteristics, and solid movement, was compared between the numerical simulations and the experimental measurements. Furthermore, the cohesive van der Waals force was incorporated in the MFIX-DEM simulations and its influence on the flow hydrodynamics was studied.

  16. General relativistic viscous hydrodynamics of differentially rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Shibata, Masaru; Kiuchi, Kenta; Sekiguchi, Yu-ichiro

    2017-04-01

    Employing a simplified version of the Israel-Stewart formalism for general-relativistic shear-viscous hydrodynamics, we perform axisymmetric general-relativistic simulations of a rotating neutron star surrounded by a massive torus, which can be formed from differentially rotating stars. We show that with our choice of shear-viscous hydrodynamics formalism, the simulations can be performed stably over a long time scale. We also demonstrate that with a possibly high shear-viscous coefficient, not only does viscous angular momentum transport work, but an outflow could also be driven from a hot envelope around the neutron star for a time scale ≳100 ms with an ejecta mass ≳10^-2 M⊙, which is comparable to the typical mass of dynamical ejecta from binary neutron-star mergers. This suggests that massive neutron stars surrounded by a massive torus, which are typical outcomes of binary neutron-star mergers, could be the dominant source of neutron-rich ejecta if the effective shear viscosity is sufficiently high, i.e., if the viscous α parameter is ≳10^-2. The present numerical result indicates the importance of future high-resolution magnetohydrodynamics simulations, which are the unique approach to clarifying the viscous effect in the merger remnants of binary neutron stars in a first-principles manner.

  17. A new approach to the analysis of Type 1 non-uniqueness of the ITS-90 above 0 °C

    NASA Astrophysics Data System (ADS)

    Gaita, Sonia; Bonnier, Georges

    2018-04-01

    The Type 1 non-uniqueness (NU-1) is the difference between interpolated values at the same temperature in the resistance thermometer subranges of the International Temperature Scale of 1990 (ITS-90) that overlap. The paper argues for a method of evaluating the NU-1 at a given temperature which considers all subranges of the Scale that contain the respective temperature, not only combinations of two, and it proposes mathematical models to determine the values of NU-1 for temperatures above 0 °C. The paper demonstrates that NU-1 is not the right contributor to the uncertainty associated with the realisation of the ITS-90. Therefore, a new concept of Correction for the Type 1 non-uniqueness of the Scale, CNU-1, is introduced and its mathematical model is established. Also, the estimate of CNU-1 and its standard uncertainty are defined and they are assessed through statistical analysis. The values of standard uncertainty determined by the novel methodology do not exceed 0.26 mK and they are smaller than the values given in the specific Guides developed by the Consultative Committee for Thermometry. The proposed models allow authors to single out and analyse the factors that generate Type 1 non-uniqueness of the Scale and influence its value.

  18. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
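    Of the GIS interpolators compared above, inverse distance weighting is the simplest to state: the estimate at an unsampled point is a weighted average of the samples, with weights decaying as a power of distance. A minimal sketch (a power of 2 is assumed here; this is not the GIS package used in the study):

    ```python
    def idw(samples, x, y, power=2.0):
        """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples."""
        num = den = 0.0
        for sx, sy, v in samples:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0.0:
                return v  # the estimate is exact at a sample point
            w = d2 ** (-power / 2.0)
            num += w * v
            den += w
        return num / den
    ```

    The midpoint between two equally distant samples receives their average, and every sample location is reproduced exactly, which are the two defining properties of IDW.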

  19. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross-correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian-fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the methods did not produce statistically different relative peak areas from each other. Nevertheless, performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
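    The Gaussian-fitting interpolation that performed best on the simulated data can be illustrated in its simplest form: for a peak sampled at three equally spaced points, taking logarithms turns the Gaussian into a parabola whose vertex gives the sub-sample peak position. This three-point log-parabolic estimate is a sketch of the principle, not the paper's full profile-fitting implementation.

    ```python
    import math

    def gaussian_peak(y1, y2, y3):
        """Sub-sample peak offset (in sample units, relative to the middle point)
        from three positive samples around a maximum, assuming a Gaussian shape."""
        l1, l2, l3 = math.log(y1), math.log(y2), math.log(y3)
        return 0.5 * (l1 - l3) / (l1 - 2 * l2 + l3)
    ```

    For an exactly Gaussian peak the recovered offset is exact; real chromatographic peaks deviate from Gaussian shape, which is why fitting more of the profile helps.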

  20. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It can save 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.

  1. Techniques for Accurate Sizing of Gold Nanoparticles Using Dynamic Light Scattering with Particular Application to Chemical and Biological Sensing Based on Aggregate Formation.

    PubMed

    Zheng, Tianyu; Bott, Steven; Huo, Qun

    2016-08-24

    Gold nanoparticles (AuNPs) have found broad applications in chemical and biological sensing, catalysis, biomolecular imaging, in vitro diagnostics, cancer therapy, and many other areas. Dynamic light scattering (DLS) is an analytical tool used routinely for nanoparticle size measurement and analysis. Due to its relatively low cost and ease of operation in comparison to other more sophisticated techniques, DLS is the primary choice of instrumentation for analyzing the size and size distribution of nanoparticle suspensions. However, many DLS users are unfamiliar with the principles behind the DLS measurement and are unaware of some of the intrinsic limitations as well as the unique capabilities of this technique. The lack of sufficient understanding of DLS often leads to inappropriate experimental design and misinterpretation of the data. In this study, we performed DLS analyses on a series of citrate-stabilized AuNPs with diameters ranging from 10 to 100 nm. Our study shows that the measured hydrodynamic diameters of the AuNPs can vary significantly with concentration and incident laser power. The scattered light intensity of the AuNPs shows a nearly sixth-order power-law increase with diameter, and the enormous scattered light intensity of AuNPs with diameters around or exceeding 80 nm causes a substantial multiple-scattering effect in conventional DLS instruments. This effect leads to significant errors in the reported average hydrodynamic diameter of the AuNPs when the measurements are analyzed in the conventional way, without accounting for the multiple scattering. We present here some useful methods to obtain the accurate hydrodynamic size of the AuNPs using DLS. We also demonstrate and explain an extremely powerful aspect of DLS: its exceptional sensitivity in detecting gold nanoparticle aggregate formation, and the use of this unique capability for chemical and biological sensing applications.
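    The near-sixth-power dependence of scattered intensity on diameter explains why DLS averages are so easily skewed: a tiny number fraction of large particles dominates the signal. A hedged numeric illustration (pure Rayleigh d^6 weighting is assumed, which itself breaks down for the largest AuNPs discussed above):

    ```python
    def intensity_weighted_mean(diams):
        """Intensity-weighted mean diameter under Rayleigh-like ~d^6 scattering."""
        w = [d ** 6 for d in diams]
        return sum(wi * d for wi, d in zip(w, diams)) / sum(w)
    ```

    For a population of ninety-nine 10 nm particles and a single 100 nm particle, the number-mean diameter is about 10.9 nm, yet the intensity-weighted mean is essentially 100 nm: the lone large particle carries a million times the per-particle scattering weight.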

  2. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  3. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media

    DTIC Science & Technology

    2015-09-24

    TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt, Dr. Camber Warren. TRAC Project Code 060114.

  4. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed the diffusion tensors, with the direction of the tensors represented by a quaternion. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  5. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data, such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained using Newton's method, are shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal-norm unconstrained interpolation is presented.
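    A widely used relative of the shape-preserving interpolants described above is the Fritsch-Carlson monotone cubic Hermite scheme, which chooses segment slopes so the piecewise cubic never overshoots monotone data. The sketch below is the standard PCHIP-style construction, not the dissertation's minimal-norm formulation, and the endpoint slopes are simplified to the one-sided secants.

    ```python
    def pchip_slopes(xs, ys):
        """Fritsch-Carlson monotone slopes for cubic Hermite interpolation."""
        n = len(xs)
        h = [xs[i + 1] - xs[i] for i in range(n - 1)]
        d = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]
        m = [0.0] * n
        m[0], m[-1] = d[0], d[-1]  # simplified endpoint slopes
        for i in range(1, n - 1):
            if d[i - 1] * d[i] <= 0:
                m[i] = 0.0  # local extremum: flatten to avoid overshoot
            else:
                w1, w2 = 2 * h[i] + h[i - 1], h[i] + 2 * h[i - 1]
                m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])  # weighted harmonic mean
        return m

    def pchip_eval(xs, ys, m, x):
        """Evaluate the cubic Hermite interpolant with slopes m at x."""
        i = 0
        while i < len(xs) - 2 and x > xs[i + 1]:
            i += 1
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return h00 * ys[i] + h * h10 * m[i] + h01 * ys[i + 1] + h * h11 * m[i + 1]
    ```

    On nondecreasing data the interpolant is itself nondecreasing, which is the shape-preservation property the dissertation seeks through its minimization formulation.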

  6. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in the control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained on performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  7. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy-driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R-wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing-RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
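    The just-in-time idea, building a local model around each query point from distance-weighted neighbors, can be conveyed with plain locally weighted linear regression. The paper uses LW-PLS on multivariate RRI features; this univariate sketch only illustrates the weighting, and the Gaussian bandwidth `tau` is an assumption.

    ```python
    import math

    def loess_point(xs, ys, xq, tau=1.0):
        """Gaussian-weighted local linear fit evaluated at query point xq
        (a minimal locally weighted regression sketch, not LW-PLS)."""
        w = [math.exp(-((x - xq) ** 2) / (2 * tau ** 2)) for x in xs]
        sw = sum(w)
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
        var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
        slope = cov / var if var > 0 else 0.0
        return my + slope * (xq - mx)
    ```

    Because a fresh weighted fit is built per query, the model adapts locally to the data around each missing sample, which is the defining property of JIT modeling.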

  8. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learned model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated ones. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves the spatial coherence of interpolated images better than existing methods, and it produces the best results so far over a wide range of scenes in both PSNR and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  9. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by way of flat-cylinder perspective. Due to its infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays an important role in applications such as robot vision, surveillance and virtual reality. An annular image needs to be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it is too time-consuming to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that exploits the characteristics of the panoramic image. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is shortened by 78% compared with cubic interpolation.
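The unwrapping step maps each rectangular output pixel back to a polar position in the annular source image and interpolates there. A minimal bilinear sketch of that mapping (the paper's faster Bezier-surface scheme is not reproduced; function name and parameters are illustrative):

```python
import numpy as np

def unwrap_annular(img, r_in, r_out, out_h, out_w, cx, cy):
    """Unwrap an annular image to a rectangle: each output pixel maps to a
    polar position (r, theta) in the source, sampled with bilinear weights."""
    rows = np.arange(out_h)[:, None]
    cols = np.arange(out_w)[None, :]
    r = r_in + (r_out - r_in) * rows / (out_h - 1)   # radius per output row
    theta = 2.0 * np.pi * cols / out_w               # angle per output column
    x = cx + r * np.cos(theta)
    y = cy + r * np.sin(theta)
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])
```

Replacing the bilinear weights with higher-order (e.g. Bezier-surface) weights changes only the sampling step, not the polar mapping.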

  10. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as an interpolation basis.

  11. A Computational Study of the Hydrodynamics in the Nasal Region of a Hammerhead Shark (Sphyrna tudes): Implications for Olfaction

    PubMed Central

    Rygg, Alex D.; Cox, Jonathan P. L.; Abel, Richard; Webb, Andrew G.; Smith, Nadine B.; Craven, Brent A.

    2013-01-01

    The hammerhead shark possesses a unique head morphology that is thought to facilitate enhanced olfactory performance. The olfactory chambers, located at the distal ends of the cephalofoil, contain numerous lamellae that increase the surface area for olfaction. Functionally, for the shark to detect chemical stimuli, water-borne odors must reach the olfactory sensory epithelium that lines these lamellae. Thus, odorant transport from the aquatic environment to the sensory epithelium is the first critical step in olfaction. Here we investigate the hydrodynamics of olfaction in Sphyrna tudes based on an anatomically-accurate reconstruction of the head and olfactory chamber from high-resolution micro-CT and MRI scans of a cadaver specimen. Computational fluid dynamics simulations of water flow in the reconstructed model reveal the external and internal hydrodynamics of olfaction during swimming. Computed external flow patterns elucidate the occurrence of flow phenomena that result in high and low pressures at the incurrent and excurrent nostrils, respectively, which induces flow through the olfactory chamber. The major (prenarial) nasal groove along the cephalofoil is shown to facilitate sampling of a large spatial extent (i.e., an extended hydrodynamic “reach”) by directing oncoming flow towards the incurrent nostril. Further, both the major and minor nasal grooves redirect some flow away from the incurrent nostril, thereby limiting the amount of fluid that enters the olfactory chamber. Internal hydrodynamic flow patterns are also revealed, where we show that flow rates within the sensory channels between olfactory lamellae are passively regulated by the apical gap, which functions as a partial bypass for flow in the olfactory chamber. 
Consequently, the hammerhead shark appears to utilize external (major and minor nasal grooves) and internal (apical gap) flow regulation mechanisms to limit water flow between the olfactory lamellae, thus protecting these delicate structures from otherwise high flow rates incurred by sampling a larger area. PMID:23555780

  12. Coevolution of hydrodynamics, vegetation and channel evolution in wetlands of a semi-arid floodplain

    NASA Astrophysics Data System (ADS)

    Seoane, Manuel; Rodriguez, Jose Fernando; Rojas, Steven Sandi; Saco, Patricia Mabel; Riccardi, Gerardo; Saintilan, Neil; Wen, Li

    2015-04-01

    The Macquarie Marshes are located in the semi-arid region of north-western NSW, Australia, and constitute part of the northern Murray-Darling Basin. The Marshes comprise a system of permanent and semi-permanent marshes, swamps and lagoons interconnected by braided channels. The wetland complex serves as a nesting place and habitat for many species of water birds, fish, frogs and crustaceans, and portions of the Marshes were listed as internationally important under the Ramsar Convention. Some of the wetlands have undergone degradation over the last four decades, which has been attributed to changes in flow management upstream of the marshes. Among the many characteristics that make this wetland system unique is the occurrence of channel breakdown and channel avulsion, which are associated with the decline of river flow in the downstream direction typical of dryland streams. A decrease in river flow can lead to sediment deposition, decreased channel capacity, vegetative invasion of the channel, overbank flows, and ultimately channel breakdown and changes in marsh formation. A similar process on established marshes may also lead to channel avulsion and marsh abandonment, with the subsequent invasion of terrestrial vegetation. All of these geomorphological evolution processes affect the established ecosystem, which in turn produces feedbacks on the hydrodynamics of the system and affects the geomorphology. In order to simulate the complex dynamics of the marshes we have developed an ecogeomorphological modelling framework that combines hydrodynamic, vegetation and channel evolution modules, and in this presentation we provide an update on the status of the model. The hydrodynamic simulation provides spatially distributed values of inundation extent, duration, depth and recurrence to drive a vegetation model based on species preference for hydraulic conditions. 
It also provides velocities and shear stresses to assess geomorphological changes. Regular updates of stream network, floodplain surface elevations and vegetation coverage provide feedbacks to the hydrodynamic model.

  13. Head related transfer functions measurement and processing for the purpose of creating a spatial sound environment

    NASA Astrophysics Data System (ADS)

    Pec, Michał; Bujacz, Michał; Strumiłło, Paweł

    2008-01-01

    The use of Head Related Transfer Functions (HRTFs) in audio processing is a popular method of obtaining spatialized sound. HRTFs describe the disturbances caused in the sound wave by the human body, especially by the head and the ear pinnae. Since these shapes are unique, HRTFs differ greatly from person to person, which justifies the measurement of personalized HRTFs. Measured HRTFs also need further processing to be utilized in a system producing spatialized sound. This paper describes a system designed for efficient collection of Head Related Transfer Functions, as well as the measurement, interpolation and verification procedures.

  14. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
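Of the compared methods, IDW is the simplest to state: each station contributes in proportion to an inverse power of its distance from the prediction point. A minimal sketch (illustrative station coordinates and temperatures, not the paper's data):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered station data."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at station sites
    w = 1.0 / d**power
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical stations (x, y) with annual mean temperatures
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
temps = np.array([10.0, 12.0, 14.0])
grid_point = np.array([[0.5, 0.5]])   # equidistant from all three stations
```

At a point equidistant from all stations the weights are equal, so the estimate reduces to the plain mean; near a station the estimate approaches that station's value.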

  15. On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians

    NASA Astrophysics Data System (ADS)

    Valverde, Clodoaldo; Baseia, Basílio

    2018-01-01

    We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other Hamiltonians of this type. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.

  16. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    The software interpolates tabular data, such as equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
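The look-up-then-interpolate step described above reduces, per triangle, to barycentric-coordinate weighting of the three nodal tuples. A minimal sketch of that linear interpolation (triangle search and unit conversion omitted; names are illustrative):

```python
import numpy as np

def triangle_interpolate(tri, values, p):
    """Linearly interpolate nodal values at point p inside a triangle
    using barycentric coordinates."""
    a, b, c = tri
    T = np.column_stack((b - a, c - a))   # maps barycentric (l1, l2) to cartesian
    l1, l2 = np.linalg.solve(T, p - a)
    l0 = 1.0 - l1 - l2                    # the three weights sum to one
    return l0 * values[0] + l1 * values[1] + l2 * values[2]
```

The same weights apply to every component of an n-tuple stored at the nodes, so vector-valued tables interpolate at no extra cost.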

  17. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. On a real interval, it is well known that interpolating at equally spaced points can diverge even if f(x) is an analytic function, whereas interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this convergence result holds only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
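The Newton form referred to above is built from a divided-difference table and evaluated by nested multiplication. A minimal sketch using Chebyshev points (the stability-restoring reordering of the points discussed in the report is not shown):

```python
import numpy as np

def divided_differences(x, y):
    """Build the Newton divided-difference coefficients column by column."""
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial at t by nested multiplication."""
    result = coef[-1]
    for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xn) + c
    return result
```

With three Chebyshev points the degree-2 interpolant of f(x) = x^2 is exact, which gives a quick correctness check.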

  18. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  19. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was identified. All the mentioned methods were assessed on whether they are local or global, whether errors of the interpolated values can be computed, on the explicitness and fidelity of the interpolation functions, and on the smoothing mode. In the authors' opinion, the best data interpolation method is kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. 
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
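Kriging with a linear semivariogram, the configuration the authors recommend, solves a small linear system per prediction point. A minimal ordinary-kriging sketch in that configuration (illustrative data, no nugget effect; not the Surfer implementation):

```python
import numpy as np

def ordinary_kriging(xy, z, xq, slope=1.0):
    """Ordinary kriging with a linear semivariogram gamma(h) = slope * h.
    Solves the kriging system, with a Lagrange multiplier enforcing
    unit-sum weights, for a single query point xq."""
    n = len(xy)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = slope * h      # semivariogram between data points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = slope * np.linalg.norm(xy - xq, axis=1)
    weights = np.linalg.solve(A, b)[:n]
    return weights @ z
```

Because there is no nugget, the predictor is exact at the data points, and the same solved system also yields the kriging variance the authors value for error estimation.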

  20. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For a fluence map of a given resolution, cubic spline interpolation and bicubic interpolation are almost equally the best interpolation methods, while nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, the fractions with γ≤1 are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, the fractions with γ≤1 are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. 
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution should be used as an IMRT QA tool and that the measured fluence maps should be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
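The γ criterion used above combines a dose tolerance and a distance-to-agreement tolerance into a single pass/fail index. A minimal 1-D sketch of a global γ computation under the 3%/3 mm criterion (a simplified brute-force version, not the planning-system implementation):

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma index (default 3%/3 mm): for each reference point,
    the minimum combined dose/distance discrepancy over the evaluated profile."""
    g = np.empty(len(ref_pos))
    for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
        dd = (eval_dose - d) / (dose_tol * ref_dose.max())   # normalized dose diff
        dr = (eval_pos - r) / dist_tol                       # normalized distance
        g[i] = np.sqrt(dd**2 + dr**2).min()
    return g
```

A point passes when γ ≤ 1; the pass rates quoted in the Results section are the fraction of points satisfying this condition.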

  1. Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

    DTIC Science & Technology

    2008-02-01

    The use of Craig interpolants has enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms for computing interpolants for linear diophantine equations, linear modular equations (congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.

  2. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic; in particular, employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. 
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  3. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  4. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
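The interpolation conditions for a radial basis function reduce to a dense symmetric linear system, written here for the multiquadric discussed in the final part of the dissertation. A minimal 1-D sketch (shape parameter c assumed; the dissertation's Toeplitz-based preconditioning is not reproduced):

```python
import numpy as np

def multiquadric_interpolant(x, y, c=1.0):
    """Solve the dense symmetric system A a = y for the multiquadric
    s(t) = sum_j a_j * sqrt((t - x_j)^2 + c^2), then return s."""
    A = np.sqrt((x[:, None] - x[None, :]) ** 2 + c**2)   # interpolation matrix
    a = np.linalg.solve(A, y)
    return lambda t: np.sqrt((t[:, None] - x[None, :]) ** 2 + c**2) @ a
```

The multiquadric interpolation matrix is nonsingular for distinct points, so the interpolant reproduces the data exactly; it is the conditioning of A, studied in the dissertation, that deteriorates as c grows.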

  5. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  6. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
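The fill order described above (points with the most indexed neighbours first, with the requirement relaxed round by round) can be sketched for a plain value grid as follows; the band-contrast masking that restricts the interpolation area in EBSDinterp is omitted, and all names are illustrative:

```python
import numpy as np

def fill_by_neighbours(grid, min_neighbours=2):
    """Interpolate non-indexed (NaN) points from their indexed 4-neighbours,
    filling the best-constrained points first and relaxing the neighbour
    requirement each round."""
    grid = grid.copy()
    for required in range(4, min_neighbours - 1, -1):
        for i, j in np.argwhere(np.isnan(grid)):
            neigh = [grid[i + di, j + dj]
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < grid.shape[0]
                     and 0 <= j + dj < grid.shape[1]
                     and not np.isnan(grid[i + di, j + dj])]
            if len(neigh) >= required:
                grid[i, j] = np.mean(neigh)
    return grid
```

Filling the most-constrained points first keeps interpolation inside grain interiors before isolated gaps between grains are closed, which is the behaviour the program is designed to enforce.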

  7. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally tests interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  8. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
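The kernel view makes the comparison concrete: linear interpolation uses a triangle kernel, while a piecewise-cubic interpolator such as the Keys kernel has wider support and a sharper frequency response. A minimal sketch comparing their resampling error on a sinusoid (illustrative signal, not the paper's test cases; the B-spline family needs an extra pre-filter and is not shown):

```python
import numpy as np

def linear_kernel(x):
    """Triangle kernel underlying linear interpolation."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def keys_cubic(x, a=-0.5):
    """Keys piecewise-cubic interpolation kernel (support 2)."""
    x = np.abs(x)
    return np.where(x < 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
                    np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def resample(signal, t, kernel, support):
    """Evaluate a uniformly sampled signal at fractional times t by
    summing kernel-weighted neighbouring samples."""
    base = np.floor(t).astype(int)
    out = np.zeros(len(t))
    for k in range(-support + 1, support + 1):
        idx = np.clip(base + k, 0, len(signal) - 1)
        out += signal[idx] * kernel(t - (base + k))
    return out
```

On a smooth tone the cubic kernel's error is far below the triangle kernel's, which is the kind of gain over linear interpolation the paper quantifies via the kernels' Fourier transforms.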

  9. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.

  10. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover areas where no reliable motion vector can be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  11. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  12. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave-number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain ranges based on Fourier spectrum analysis. Finally, software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  13. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty introduced by the interpolation technique. The nine statistical interpolation techniques selected are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are downsampled to 6 hours (i.e., wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e., the wind speeds at each hour within the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data.
A penalty point system based on coefficient of variation of the root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data, for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR, and by the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
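The downscaling experiment itself (downsample to 6-hourly, reconstruct hourly, score the reconstruction) is easy to reproduce in outline. The sketch below is an illustration, not the study's code: it uses a synthetic diurnal signal in place of the CFSR data, and a finite-difference cubic Hermite interpolant standing in for the PCHIP variant ranked best.

```python
import numpy as np

def hermite_interp(t, tk, yk):
    """Piecewise cubic Hermite interpolation with finite-difference slopes,
    a stand-in for the PCHIP technique used in the study."""
    m = np.gradient(yk, tk)                      # slopes at the knots
    i = np.clip(np.searchsorted(tk, t) - 1, 0, len(tk) - 2)
    h = tk[i + 1] - tk[i]
    s = (t - tk[i]) / h
    h00 = 2*s**3 - 3*s**2 + 1                    # Hermite basis polynomials
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*yk[i] + h10*h*m[i] + h01*yk[i + 1] + h11*h*m[i + 1]

# Hypothetical diurnal signal standing in for the hourly 10-m wind data.
t_hourly = np.arange(0, 235)
wind = 6.0 + 2.0 * np.sin(2 * np.pi * t_hourly / 24.0)

t6, w6 = t_hourly[::6], wind[::6]                # keep hours 0, 6, 12, 18, ...

recon_linear = np.interp(t_hourly, t6, w6)       # one of the nine techniques
recon_hermite = hermite_interp(t_hourly, t6, w6)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
err_linear, err_hermite = rmse(recon_linear, wind), rmse(recon_hermite, wind)
```

On a smooth diurnal cycle sampled four times per period, the Hermite reconstruction already beats linear interpolation, in line with the direction of the study's ranking.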

  14. Inertial focusing of microparticles and its limitations

    NASA Astrophysics Data System (ADS)

    Cruz, FJ; Hooshmand Zadeh, S.; Wu, ZG; Hjort, K.

    2016-10-01

    Microfluidic devices are useful tools for healthcare, biological and chemical analysis, and materials synthesis, among other fields that can benefit from the unique physics of these systems. In this paper we studied inertial focusing as a tool for hydrodynamic sorting of particles by size. Theory and experimental results are provided as a background for a discussion on how to extend the technology to submicron particles. Different geometries and dimensions of microchannels were designed, and simulation data were compared to the experimental results.

  15. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or the adaptive cubic-spline interpolation filter is then selectively used, according to the refined edge orientation, to remove jagged artifacts in slanted edge regions. A novel image restoration algorithm is also presented for removing the blurring artifacts caused by the linear or cubic-spline interpolation, using the directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable-filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with a fast, FIR filter-based computational structure.

  16. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has not existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using the multiply Control-Not operation, the special adding-one operation, the reverse parallel adder, and the parallel subtractor, multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbor interpolation.
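For reference, the classical (non-quantum) bilinear scaling operation that the quantum circuits realise can be sketched as follows; this is a plain NumPy illustration, and the function name and shapes are not from the paper.

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Classical bilinear interpolation for image scaling, the operation the
    paper's NEQR circuits implement with quantum arithmetic."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)            # target rows in source coords
    xs = np.linspace(0, w - 1, new_w)            # target cols in source coords
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]                      # fractional offsets
    fx = (xs - x0)[None, :]
    # Blend the four nearest source pixels with weights (1-f) and f.
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

# Scaling a 2x2 image up to 3x3 fills in the exact midpoints.
small = np.array([[0.0, 2.0], [4.0, 6.0]])
big = bilinear_resize(small, 3, 3)
```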

  17. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical image nearest-neighbor interpolation for quantum images in FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of the nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, the quantum circuit of the nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
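The core ingredient of the comparison, maximum-intensity ray casting with either nearest-neighbour or trilinear sampling (two of the four methods studied), can be sketched minimally; the function name and step parameters below are illustrative, not from the report.

```python
import numpy as np

def mip_ray(volume, start, direction, step=0.5, n_steps=200, mode="linear"):
    """Maximum intensity along a single ray, sampling the volume with either
    nearest-neighbour or trilinear interpolation."""
    p = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    best = -np.inf
    for k in range(n_steps):
        q = p + k * step * d
        if np.any(q < 0) or np.any(q > np.array(volume.shape) - 1):
            break                                  # ray left the volume
        if mode == "nearest":
            val = float(volume[tuple(np.round(q).astype(int))])
        else:                                      # trilinear interpolation
            i0 = np.floor(q).astype(int)
            f = q - i0
            val = 0.0
            for corner in np.ndindex(2, 2, 2):     # 8 surrounding voxels
                idx = np.minimum(i0 + corner, np.array(volume.shape) - 1)
                wgt = np.prod(np.where(np.array(corner) == 1, f, 1 - f))
                val += wgt * float(volume[tuple(idx)])
        best = max(best, val)
    return best
```

The same machinery with a cubic-convolution kernel, or replaced by direct voxel projection, gives the other two methods compared in the study.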

  19. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion suffers when there are large arrival-time differences between adjacent sites, which is the case of most concern here; we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test the method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that the method of forming VSOs is applicable. In this way, we can provide the data needed for advanced seismic techniques such as RTM to illuminate shallow structures.

  20. Hurricane Harvey rapid response: observations of infragravity wave dynamics and morphological change during inundation of a barrier island cut

    NASA Astrophysics Data System (ADS)

    Anarde, K.; Figlus, J.; Dellapenna, T. M.; Bedient, P. B.

    2017-12-01

    Prior to landfall of Hurricane Harvey on August 25, 2017, instrumentation was deployed on the seaward and landward sides of a barrier island on the central Texas Gulf Coast to collect in-situ hydrodynamic measurements during storm impact. High-resolution devices capable of withstanding extreme conditions included inexpensive pressure transducers and tilt current meters mounted within and atop (respectively) shallow monitoring wells. In order to link measurements of storm hydrodynamics with the morphological evolution of the barrier, pre- and post-storm digital elevation models were generated using a combination of unmanned aerial imagery, LiDAR, and real-time kinematic GPS. Push-cores were collected and analyzed for grain size and sedimentary structure to relate hydrodynamic observations with the local character of storm-generated deposits. Observations show that at Hog Island, located approximately 160 miles northeast of Harvey's landfall location, storm surge inundated an inactive storm channel. Infragravity waves (0.003 - 0.05 Hz) dominated the water motion onshore of the berm crest over a 24-hour period proximate to storm landfall. Over this time, approximately 50 cm of sediment accreted vertically atop the instrument located in the backshore. Storm deposits at this location contained sub-parallel alternating laminae of quartz and heavy mineral-enriched sand. While onshore progression of infragravity waves into the back-barrier was observed over several hours prior to storm landfall, storm deposits in the back-barrier lack the characteristic laminae preserved in the backshore. These field measurements will ultimately be used to constrain and validate numerical modeling schemes that explore morphodynamic conditions of barriers in response to extreme storms (e.g., XBeach, CSHORE). This study provides a unique data set linking extreme storm hydrodynamics with geomorphic changes during a relatively low surge, but highly dissipative wave event.

  1. Efficacy of hydrodynamic interleukin 10 gene transfer in human liver segments with interest in transplantation.

    PubMed

    Sendra Gisbert, Luis; Miguel Matas, Antonio; Sabater Ortí, Luis; Herrero, María José; Sabater Olivas, Laura; Montalvá Orón, Eva María; Frasson, Matteo; Abargues López, Rafael; López-Andújar, Rafael; García-Granero Ximénez, Eduardo; Aliño Pellicer, Salvador Francisco

    2017-01-01

    Different diseases lead, during their advanced stages, to chronic or acute liver failure, whose only treatment is organ transplantation. The success of the intervention is limited by the host immune response and graft rejection. The use of immunosuppressant drugs generally improves organ transplantation outcomes, but they cannot completely solve the problem. Also, their management is delicate, especially during the early stages of treatment. Thus, new tools to establish an efficient modulation of the immune response are required. The local expression of interleukin (IL) 10 protein in transplanted livers, mediated by hydrodynamic gene transfer, could improve organ acceptance by the host because IL10 has the natural ability to modulate the immune response at different levels. In the organ transplantation scenario, IL10 has already demonstrated positive effects on graft tolerance. Hydrodynamic gene transfer has been proven safe and therapeutically efficient in animal models and could readily be moved to the clinic. In the present work, we evaluated the efficacy of human IL10 gene transfer in human liver segments and examined the tissue's natural barriers to gene entry into the cell, employing gold nanoparticles. In conclusion, the present work shows for the first time that hydrodynamic IL10 gene transfer to human liver segments ex vivo efficiently delivers a human gene into the cells. The indexes of tissue protein expression achieved could mediate local pharmacological effects of interest in controlling the immune response triggered after liver transplantation. On the other hand, the ultrastructural study suggests that the solubilized plasmid could access the hepatocyte in a passive manner mediated by the hydric flow, and that an active transport mechanism could facilitate its entry into the nucleus. Liver Transplantation 23:50-62 2017 AASLD. © 2016 by the American Association for the Study of Liver Diseases.

  2. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) and point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a given survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.

  3. Estimating impacts of plantation forestry on plant biodiversity in southern Chile-a spatially explicit modelling approach.

    PubMed

    Braun, Andreas Christian; Koch, Barbara

    2016-10-01

    Monitoring the impacts of land-use practices is of particular importance with regard to biodiversity hotspots in developing countries. Here, conserving the high level of unique biodiversity is challenged by limited possibilities for data collection on site. Especially for such scenarios, assisting biodiversity assessments by remote sensing has proven useful. Remote sensing techniques can be applied to interpolate between biodiversity assessments taken in situ. Through this approach, estimates of biodiversity for entire landscapes can be produced, relating land-use intensity to biodiversity conditions. Such maps are a valuable basis for developing biodiversity conservation plans. Several approaches have been published so far to interpolate local biodiversity assessments in remote sensing data. In the following, a new approach is proposed. Instead of inferring biodiversity using environmental variables or the variability of spectral values, a hypothesis-based approach is applied. Empirical knowledge about biodiversity in relation to land-use is formalized and applied as ascription rules for image data. The method is exemplified for a large study site (over 67,000 km²) in central Chile, where the forest industry heavily impacts plant diversity. The proposed approach yields a coefficient of correlation of 0.73 and produces a convincing estimate of regional biodiversity. The framework is broad enough to be applied to other study sites.

  4. Noninvasive coronary artery angiography using electron beam computed tomography

    NASA Astrophysics Data System (ADS)

    Rumberger, John A.; Rensing, Benno J.; Reed, Judd E.; Ritman, Erik L.; Sheedy, Patrick F., II

    1996-04-01

    Electron beam computed tomography (EBCT), also known as ultrafast-CT or cine-CT, uses a unique scanning architecture which allows for multiple high spatial resolution electrocardiographic triggered images of the beating heart. A recent study has demonstrated the feasibility of qualitative comparisons between EBCT derived 3D coronary angiograms and invasive angiography. Stenoses of the proximal portions of the left anterior descending and right coronary arteries were readily identified, but description of atherosclerotic narrowing in the left circumflex artery (and distal epicardial disease) was not possible with any degree of confidence. Although these preliminary studies support the notion that this approach has potential, the images overall were suboptimal for clinical application as an adjunct to invasive angiography. Furthermore, these studies did not examine different methods of EBCT scan acquisition, tomographic slice thicknesses, extent of scan overlap, or other segmentation, thresholding, and interpolation algorithms. Our laboratory has initiated investigation of these aspects and limitations of EBCT coronary angiography. Specific areas of research include defining effects of cardiac orientation; defining the effects of tomographic slice thickness and intensity (gradient) versus positional (shaped based) interpolation; and defining applicability of imaging each of the major epicardial coronary arteries for quantitative definition of vessel size, cross-sectional area, taper, and discrete vessel narrowing.

  5. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  6. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are here utilised as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478

  7. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are here utilised as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
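The underlying idea, fitting a piecewise-linear baseline through isoelectric samples and subtracting it, can be sketched in a few lines. This is a simplified illustration (one fitting point per "beat" on a synthetic drift), not the authors' three-point-per-heartbeat scheme or their segmented variant.

```python
import numpy as np

def remove_baseline(ecg, iso_idx):
    """Estimate baseline wander by piecewise-linear interpolation through
    isoelectric samples, then subtract it from the record."""
    n = np.arange(len(ecg))
    baseline = np.interp(n, n[iso_idx], ecg[iso_idx])
    return ecg - baseline, baseline

# Synthetic check: a 0.1 Hz, 0.5 mV drift sampled at 500 Hz for 10 s, with
# one fitting point per 0.2 s standing in for a per-beat isoelectric point.
fs = 500
samples = np.arange(10 * fs)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * samples / fs)
iso_idx = np.r_[np.arange(0, len(samples), 100), len(samples) - 1]
corrected, est = remove_baseline(drift, iso_idx)
residual_rms = float(np.sqrt(np.mean(corrected ** 2)))
```

On this slow synthetic drift, the residual after subtraction is roughly three orders of magnitude below the drift amplitude.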

  8. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
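    The key idea, re-binning through the *integrated* data rather than the samples themselves, is easy to sketch. The paper fits a parametrized Hermitian curve to the cumulative integral; the sketch below (an illustration, not the authors' algorithm) substitutes plain linear interpolation of the cumulative curve, which already shows why the construction conserves the overall integral and keeps non-negative data non-negative.

```python
import numpy as np

def conservative_rebin(src_edges, src_values, dst_edges):
    """Re-bin a histogram onto a new grid while conserving its integral.

    Interpolating the cumulative integral (here linearly; the paper uses a
    parametrized Hermitian curve instead) and differencing it at the new bin
    edges guarantees that the total integral is preserved and that
    non-negative input cannot produce negative bins.
    """
    # Cumulative integral of the source histogram at its bin edges.
    cum = np.concatenate(([0.0], np.cumsum(src_values * np.diff(src_edges))))
    # Interpolate the cumulative curve onto the new edges, then differentiate.
    new_cum = np.interp(dst_edges, src_edges, cum)
    return np.diff(new_cum) / np.diff(dst_edges)
```

    Replacing `np.interp` with a monotone Hermite interpolant recovers smoother re-binned profiles while retaining both guarantees, which is the direction the paper takes with its tunable overshoot parameter.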

  9. A novel microfluidic mixer based on dual-hydrodynamic focusing for interrogating the kinetics of DNA-protein interaction.

    PubMed

    Li, Ying; Xu, Fei; Liu, Chao; Xu, Youzhi; Feng, Xiaojun; Liu, Bi-Feng

    2013-08-21

    Kinetic measurement of biomacromolecular interaction plays a significant role in revealing the underlying mechanisms of cellular activities. Due to the small diffusion coefficient of biomacromolecules, it is difficult to resolve the rapid kinetic process with traditional analytical methods such as stopped-flow or laminar mixers. Here, we demonstrated a unique continuous-flow laminar mixer based on microfluidic dual-hydrodynamic focusing to characterize the kinetics of DNA-protein interactions. The time window of this mixer for kinetics observation could cover from sub-milliseconds to seconds, which made it possible to capture the folding process with a wide dynamic range. Moreover, the sample consumption was remarkably reduced to <0.55 μL min⁻¹, over 1000-fold saving in comparison to those reported previously. We further interrogated the interaction kinetics of G-quadruplex and the single-stranded DNA binding protein, indicating that this novel micromixer would be a useful approach for analyzing the interaction kinetics of biomacromolecules.

  10. Dynamic modeling and motion simulation for a winged hybrid-driven underwater glider

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Xin; Sun, Xiu-Jun; Wang, Yan-Hui; Wu, Jian-Guo; Wang, Xiao-Ming

    2011-03-01

    PETREL, a winged hybrid-driven underwater glider is a novel and practical marine survey platform which combines the features of legacy underwater glider and conventional AUV (autonomous underwater vehicle). It can be treated as a multi-rigid-body system with a floating base and a particular hydrodynamic profile. In this paper, theorems on linear and angular momentum are used to establish the dynamic equations of motion of each rigid body and the effect of translational and rotational motion of internal masses on the attitude control are taken into consideration. In addition, due to the unique external shape with fixed wings and deflectable rudders and the dual-drive operation in thrust and glide modes, the approaches of building dynamic model of conventional AUV and hydrodynamic model of submarine are introduced, and the tailored dynamic equations of the hybrid glider are formulated. Moreover, the behaviors of motion in glide and thrust operation are analyzed based on the simulation and the feasibility of the dynamic model is validated by data from lake field trials.

  11. Sizing protein-templated gold nanoclusters by time resolved fluorescence anisotropy decay measurements

    NASA Astrophysics Data System (ADS)

    Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe

    2018-03-01

    Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem however may arise due to protein structure changes upon the nucleation of an AuNC within the protein for any future use as in vivo probes, for instance. In this work, we propose a simple and reliable fluorescence based technique measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs, with protein templates of increasing sizes, e.g. insulin, lysozyme, and bovine serum albumin (BSA). The comparison of sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering) between bare and gold clusters containing proteins allows us to address the volume changes induced either by conformational changes (for BSA) or the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation.
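    The quantitative link used here, from rotational correlation time to hydrodynamic volume, is the Stokes-Einstein-Debye relation theta = eta * V_h / (k_B * T). A minimal sketch of the size extraction (the function name and the default water viscosity are illustrative assumptions, not values from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(theta_rot_s, temperature_k=298.15, viscosity_pa_s=0.89e-3):
    """Equivalent-sphere hydrodynamic radius from a rotational correlation
    time via Stokes-Einstein-Debye: theta = eta * V_h / (k_B * T)."""
    v_h = theta_rot_s * K_B * temperature_k / viscosity_pa_s  # volume, m^3
    return (3.0 * v_h / (4.0 * math.pi)) ** (1.0 / 3.0)      # radius, m
```

    For a correlation time of ~10 ns in water at room temperature this gives a radius around 2 nm, the right order of magnitude for a small protein such as lysozyme.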

  12. On-demand control of microfluidic flow via capillary-tuned solenoid microvalve suction.

    PubMed

    Zhang, Qiang; Zhang, Peiran; Su, Yetian; Mou, Chunbo; Zhou, Teng; Yang, Menglong; Xu, Jian; Ma, Bo

    2014-12-21

    A simple, low-cost and on-demand microfluidic flow controlling platform was developed based on a unique capillary-tuned solenoid microvalve suction effect without any outer pressure source. The suction effect was innovatively employed as a stable and controllable driving force for the manipulation of the microfluidic system by connecting a piece of capillary between the microvalve and the microfluidic chip, which caused significant hydrodynamic resistance differences among the solenoid valve ports and changed the flowing mode inside the valve. The volume of sucked liquid could be controlled from microliters even down to picoliters either by decreasing the valve energized duration (from a maximum energized duration to the valve response time of 20 ms) or by increasing the inserted capillary length (i.e., its hydrodynamic resistance). Several important microfluidic unit operations such as cell/droplet sorting and on-demand size-controllable droplet generation have been demonstrated on the developed platform and both simulations and experiments confirmed that this platform has good controllability and stability.

  13. Hydrodynamics on Supercomputers: Interacting Binary Stars

    NASA Astrophysics Data System (ADS)

    Blondin, J. M.

    1997-05-01

    The interaction of close binary stars accounts for a wide variety of peculiar objects scattered throughout our Galaxy. The unique features of Algols, Symbiotics, X-ray binaries, cataclysmic variables and many others are linked to the dynamics of the circumstellar gas which can take forms from tidal streams and accretion disks to colliding stellar winds. As in many other areas of astrophysics, large scale computing has provided a powerful new tool in the study of interacting binaries. In the research to be described, hydrodynamic simulations are used to create a "laboratory", within which one can "experiment": change the system and observe (and predict) the effects of those changes. This type of numerical experimentation, when buttressed by analytic studies, provides a means of interpreting observations, identifying and understanding the relevant physics, and visualizing the physical system. The results of such experiments will be shown, including the structure of tidal streams in Roche lobe overflow systems, mass accretion in X-ray binaries, and the formation of accretion disks.

  14. Computational Modeling of Hydrodynamics and Scour around Underwater Munitions

    NASA Astrophysics Data System (ADS)

    Liu, X.; Xu, Y.

    2017-12-01

    Munitions deposited in water bodies are a major threat to human health, safety, and the environment. It is thus imperative to predict the motion and resting status of underwater munitions. A multitude of physical processes are involved, including turbulent flows, sediment transport, granular material mechanics, 6-degree-of-freedom motion of the munition, and potential liquefaction. A clear understanding of this unique physical setting is currently lacking; consequently, it is extremely hard to make reliable predictions. In this work, we present the computational modeling of two important processes, i.e., hydrodynamics and scour, around munition objects. Other physical processes are also considered in our comprehensive model; however, they are not shown in this talk. To properly model the dynamics of the deforming bed and the motion of the object, an immersed boundary method is implemented in the open source CFD package OpenFOAM. Fixed-bed and scour cases are simulated and compared with laboratory experiments. Future work on this project will implement the coupling between all the physical processes.

  15. Hydrodynamic aspects of thrust generation in gymnotiform swimming

    NASA Astrophysics Data System (ADS)

    Shirgaonkar, Anup A.; Curet, Oscar M.; Patankar, Neelesh A.; Maciver, Malcolm A.

    2008-11-01

    The primary propulsor in gymnotiform swimmers is a fin running along most of the ventral midline of the fish. The fish propagates traveling waves along this ribbon fin to generate thrust. This unique mode of thrust generation gives these weakly electric fish great maneuverability in cluttered spaces. To understand the mechanical basis of gymnotiform propulsion, we investigated the hydrodynamics of a model ribbon-fin of an adult black ghost knifefish using high-resolution numerical experiments. We found that the principal mechanism of thrust generation is a central jet imparting momentum to the fluid with associated vortex rings near the free edge of the fin. The high-fidelity simulations also reveal secondary vortex rings potentially useful in rapid sideways maneuvers. We obtained the scaling of thrust with respect to the traveling wave kinematic parameters. Using a fin-plate model for a fish, we also discuss improvements to Lighthill's inviscid theory for gymnotiform and balistiform modes in terms of thrust magnitude, viscous drag on the body, and momentum enhancement.

  16. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...

  17. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
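    The nonlinear inter-slice step can be sketched per pixel. The paper fits a natural cubic spline along the pullback axis; the stand-in below uses a local Catmull-Rom cubic through four consecutive frames (an assumption made so the sketch stays self-contained), which likewise passes through the measured slices while bending with the data, unlike pixel-wise linear blending.

```python
import numpy as np

def intermediate_slice(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation of an intermediary frame at
    fractional position t in (0, 1) between slices p1 and p2, using the
    neighbouring slices p0 and p3 to shape the curve per pixel."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)
```

    At t = 0 and t = 1 the interpolant reproduces the measured slices exactly, a property shared with the natural cubic spline used in the paper.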

  18. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
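    The evaluation protocol is simple to reproduce: scale an image down and back up with the same interpolator, then compare against the original. A sketch using nearest-neighbour resizing (the simplest of the surveyed methods; the helper names are ours):

```python
import numpy as np

def nn_resize(img, shape):
    """Nearest-neighbour resize by integer index mapping."""
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

def round_trip_mse(img, factor=2):
    """Down-scale then restore with the same interpolator and report the
    mean squared error against the original image."""
    small = nn_resize(img, (img.shape[0] // factor, img.shape[1] // factor))
    restored = nn_resize(small, img.shape)
    return float(np.mean((img - restored) ** 2))
```

    In the survey this round trip is repeated for each of the nine interpolation methods, alongside timing and downstream quantification performance.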

  19. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
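    The "iteratively updated Hessian" ingredient can be illustrated with the standard BFGS update (one common choice; the paper's specific update scheme may differ):

```python
import numpy as np

def bfgs_hessian_update(H, s, y):
    """BFGS update of an approximate Hessian H from a geometry step s and
    the corresponding gradient change y.  The result satisfies the secant
    condition H_new @ s == y, so curvature information accumulates from
    already-computed gradients at no extra electronic-structure cost."""
    Hs = H @ s
    return H + np.outer(y, y) / (y @ s) - np.outer(Hs, Hs) / (s @ Hs)
```

    On a quadratic surface with true Hessian A, feeding y = A s makes the updated matrix exact along the step direction; interpolated surfaces built from such Hessians can therefore track the true surface over a broader range than a single local quadratic model.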

  20. Analysis of the numerical differentiation formulas of functions with large gradients

    NASA Astrophysics Data System (ADS)

    Tikhovskaya, S. V.

    2017-10-01

    The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or construct on a uniform mesh an interpolation formula that is exact on the boundary layer components. In this paper the numerical differentiation formulas for functions with large gradients based on the interpolation formulas on the uniform mesh, which were proposed by A.I. Zadorin, are investigated. The formulas for the first and the second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. The numerical results validating the theoretical estimates are discussed.
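    The fitted-mesh alternative mentioned in the abstract is easy to construct explicitly. A sketch of the piecewise-uniform Shishkin mesh for a boundary layer at x = 1 (the constant sigma = 2 is a typical textbook choice, not taken from this paper):

```python
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh with n intervals on [0, 1] for a
    boundary layer at x = 1: half of the intervals are condensed into the
    layer region of width tau, the rest cover [0, 1 - tau] uniformly."""
    tau = min(0.5, sigma * eps * np.log(n))
    coarse = np.linspace(0.0, 1.0 - tau, n // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, n - n // 2 + 1)
    return np.concatenate((coarse, fine[1:]))
```

    Polynomial interpolation and difference formulas built on such a mesh admit error bounds independent of the perturbation parameter eps, which is exactly the uniformity the paper seeks on a plain uniform mesh instead.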

  1. Hydrodynamic metamaterials: Microfabricated arrays to steer, refract, and focus streams of biomaterials

    PubMed Central

    Morton, Keith J.; Loutherback, Kevin; Inglis, David W.; Tsui, Ophelia K.; Sturm, James C.; Chou, Stephen Y.; Austin, Robert H.

    2008-01-01

    We show that it is possible to direct particles entrained in a fluid along trajectories much like rays of light in classical optics. A microstructured, asymmetric post array forms the core hydrodynamic element and is used as a building block to construct microfluidic metamaterials and to demonstrate refractive, focusing, and dispersive pathways for flowing beads and cells. The core element is based on the concept of deterministic lateral displacement where particles choose different paths through the asymmetric array based on their size: Particles larger than a critical size are displaced laterally at each row by a post and move along the asymmetric axis at an angle to the flow, while smaller particles move along streamline paths. We create compound elements with complex particle handling modes by tiling this core element using multiple transformation operations; we show that particle trajectories can be bent at an interface between two elements and that particles can be focused into hydrodynamic jets by using a single inlet port. Although particles propagate through these elements in a way that strongly resembles light rays propagating through optical elements, there are unique differences in the paths of our particles as compared with photons. The unusual aspects of these modular, microfluidic metamaterials form a rich design toolkit for mixing, separating, and analyzing cells and functional beads on-chip. PMID:18495920
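    The size threshold that separates the "bumped" displacement mode from the zigzag streamline mode is, in the deterministic lateral displacement literature, often estimated with Davis's empirical fit D_c ≈ 1.4 g ε^0.48 (g the post gap, ε the row-shift fraction). The formula is not stated in this abstract and is used here only to make the mode selection concrete:

```python
def dld_critical_diameter(gap, row_shift_fraction):
    """Davis's empirical critical diameter for a deterministic lateral
    displacement array: D_c ~= 1.4 * g * eps**0.48."""
    return 1.4 * gap * row_shift_fraction ** 0.48

def particle_mode(diameter, gap, row_shift_fraction):
    """Path choice through the asymmetric post array: particles above the
    critical size are displaced along the array axis at an angle to the
    flow, smaller particles follow the streamlines in a zigzag path."""
    if diameter > dld_critical_diameter(gap, row_shift_fraction):
        return "displaced"
    return "zigzag"
```

    Tiling regions with different gaps or shift fractions then yields different critical sizes per region, which is how the compound refracting and focusing elements are assembled.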

  2. Effects of Flame Structure and Hydrodynamics on Soot Particle Inception and Flame Extinction in Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Axelbaum, R. L.; Chen, R.; Sunderland, P. B.; Urban, D. L.; Liu, S.; Chao, B. H.

    2001-01-01

    This paper summarizes recent studies of the effects of stoichiometric mixture fraction (structure) and hydrodynamics on soot particle inception and flame extinction in diffusion flames. Microgravity experiments are uniquely suited for these studies because, unlike normal gravity experiments, they allow structural and hydrodynamic effects to be independently studied. As part of this recent flight definition program, microgravity studies have been performed in the 2.2 second drop tower. Normal gravity counterflow studies also have been employed and analytical and numerical models have been developed. A goal of this program is to develop sufficient understanding of the effects of flame structure that flames can be "designed" to specifications - consequently, the program name Flame Design. In other words, if a soot-free, strong, low temperature flame is required, can one produce such a flame by designing its structure? Certainly, as in any design, there will be constraints imposed by the properties of the available "materials." For hydrocarbon combustion, the base materials are fuel and air. Additives could be considered, but for this work only fuel, oxygen and nitrogen are considered. Also, the structure of these flames is "designed" by varying the stoichiometric mixture fraction. Following this line of reasoning, the studies described are aimed at developing the understanding of flame structure that is needed to allow for optimum design.

  3. A hydrodynamically active flipper-stroke in humpback whales.

    PubMed

    Segre, Paolo S; Seakamela, S Mduduzi; Meÿer, Michael A; Findlay, Ken P; Goldbogen, Jeremy A

    2017-07-10

    A central paradigm of aquatic locomotion is that cetaceans use fluke strokes to power their swimming while relying on lift and torque generated by the flippers to perform maneuvers such as rolls, pitch changes and turns [1]. Compared to other cetaceans, humpback whales (Megaptera novaeangliae) have disproportionately large flippers with added structural features to aid in hydrodynamic performance [2,3]. Humpbacks use acrobatic lunging maneuvers to attack dense aggregations of krill or small fish, and their large flippers are thought to increase their maneuverability and thus their ability to capture prey. Immediately before opening their mouths, humpbacks will often rapidly move their flippers, and it has been hypothesized that this movement is used to corral prey [4,5] or to generate an upward pitching moment to counteract the torque caused by rapid water engulfment [6]. Here, we demonstrate an additional function for the rapid flipper movement during lunge feeding: the flippers are flapped using a complex, hydrodynamically active stroke to generate lift and increase propulsive thrust. We estimate that humpback flipper-strokes are capable of producing large forward-oriented forces, which may be used to enhance lunge feeding performance. This behavior is the first observation of a lift-generating flipper-stroke used for propulsion in cetaceans and provides an additional function for the uniquely shaped humpback whale flipper. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  5. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    NASA Astrophysics Data System (ADS)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    In relation to thin slab-object inspection, we propose a digital tomosynthesis reconstruction that uses fewer measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through the object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in an interpolated projection are the weighted sum of pixel values of the measured projections, considering their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
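    Under the stated rigid-object and path-length assumptions, the virtual-projection construction reduces to an angle-weighted blend of neighbouring measured projections. A minimal sketch (linear weighting between the two nearest angles is our simplification of the "weighted sum" described in the abstract):

```python
import numpy as np

def virtual_projection(proj_a, proj_b, angle_a, angle_b, angle):
    """Pixel-wise weighted sum of two measured projections, giving a
    virtual projection at an intermediate angle between angle_a and angle_b."""
    w = (angle - angle_a) / (angle_b - angle_a)
    return (1.0 - w) * proj_a + w * proj_b
```

    The augmented projection set (measured plus virtual) is then fed to the usual tomosynthesis backprojection, which is where the few-view artifact reduction comes from.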

  6. Application of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.

    1990-01-01

    A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
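    Transfinite interpolation with linear Lagrangian blending has a compact closed form: interior points blend the four boundary curves, minus a bilinear corner term that would otherwise be counted twice. A sketch for a single 2-D block (the array conventions are ours):

```python
import numpy as np

def transfinite_grid(bottom, top, left, right):
    """Transfinite interpolation of a 2-D grid from four boundary curves.

    bottom/top are (ni, 2) arrays of (x, y) points, left/right are (nj, 2);
    the shared corner points must agree.  Linear Lagrange polynomials serve
    as the blending functions.  Returns an (ni, nj, 2) grid.
    """
    ni, nj = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]
    return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1]))
```

    The interpolant reproduces all four boundary curves exactly; higher-order Lagrangian blending functions, as in the paper, add interior control sections in the same tensor-product pattern.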

  7. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  8. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    NASA Astrophysics Data System (ADS)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention, and it is important to determine suitable ways of treating them so that the quality of the analyzed data remains high. As such, this paper discusses an alternative method of treating outliers via linear interpolation. Treating an outlier as a missing value in the dataset allows the interpolation method to fill it in, enabling a comparison of forecast accuracy on the series before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to generate the interpolated series. The results indicated that the linear interpolation method, which yielded an improved time series, gave better forecasting results than the original time series for both Box-Jenkins and neural network approaches.
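    The treat-outliers-as-missing idea is a two-step recipe: flag anomalous points, then fill them by linear interpolation over time. A sketch (z-score flagging is our assumption; the abstract does not specify the detection rule):

```python
import numpy as np

def interpolate_outliers(series, z_thresh=3.0):
    """Replace points whose z-score exceeds z_thresh with values linearly
    interpolated from the remaining (inlier) observations."""
    x = np.asarray(series, dtype=float)
    z = np.abs(x - x.mean()) / x.std()
    bad = z > z_thresh
    idx = np.arange(len(x))
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x
```

    The cleaned series can then be fed to the same Box-Jenkins or neural network forecaster as the raw series, making before/after forecast accuracy directly comparable.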

  9. Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.

    PubMed

    Prochazka, Ivan; Panek, Petr

    2009-07-01

    A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present the experiments and the results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the timing of the measured events can be expressed as a sparse Fourier series and thus usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.

  10. A pressure relaxation closure model for one-dimensional, two-material Lagrangian hydrodynamics based on the Riemann problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R; Shashkov, Mikhail J

    2009-01-01

    Despite decades of development, Lagrangian hydrodynamics of strength-free materials presents numerous open issues, even in one dimension. We focus on the problem of closing a system of equations for a two-material cell under the assumption of a single velocity model. There are several existing models and approaches, each possessing different levels of fidelity to the underlying physics and each exhibiting unique features in the computed solutions. We consider the case in which the change in heat in the constituent materials in the mixed cell is assumed equal. An instantaneous pressure equilibration model for a mixed cell can be cast as four equations in four unknowns, comprised of the updated values of the specific internal energy and the specific volume for each of the two materials in the mixed cell. The unique contribution of our approach is a physics-inspired, geometry-based model in which the updated values of the sub-cell, relaxing-toward-equilibrium constituent pressures are related to a local Riemann problem through an optimization principle. This approach couples the modeling problem of assigning sub-cell pressures to the physics associated with the local, dynamic evolution. We package our approach in the framework of a standard predictor-corrector time integration scheme. We evaluate our model using idealized, two-material problems using either ideal-gas or stiffened-gas equations of state and compare these results to those computed with the method of Tipton and with corresponding pure-material calculations.

  11. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

    Temperature data with high spatial resolution are essential for appropriate and qualitative local characteristics analysis. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, thus in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.

  12. Applications of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.

    1990-01-01

    A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.

  13. A 1D-2D coupled SPH-SWE model applied to open channel flow simulations in complicated geometries

    NASA Astrophysics Data System (ADS)

    Chang, Kao-Hua; Sheu, Tony Wen-Hann; Chang, Tsang-Jung

    2018-05-01

    In this study, a one- and two-dimensional (1D-2D) coupled model is developed to solve the shallow water equations (SWEs). The solutions are obtained using a Lagrangian meshless method called smoothed particle hydrodynamics (SPH) to simulate shallow water flows in converging, diverging and curved channels. A buffer zone is introduced to exchange information between the 1D and 2D SPH-SWE models. Interpolated water discharge values and water surface levels at the internal boundaries are prescribed as the inflow/outflow boundary conditions in the two SPH-SWE models. In addition, instead of using the SPH summation operator, we directly solve the continuity equation by introducing a diffusive term to suppress oscillations in the predicted water depth. The performance of the two approaches in calculating the water depth is comprehensively compared through a case study of a straight channel. Additionally, three benchmark cases involving converging, diverging and curved channels are adopted to demonstrate the ability of the proposed 1D and 2D coupled SPH-SWE model through comparisons with measured data and predicted mesh-based numerical results. The proposed model provides satisfactory accuracy and guaranteed convergence.

  14. Object Interpolation in Three Dimensions

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.

    2005-01-01

    Perception of objects in ordinary scenes requires interpolation processes connecting visible areas across spatial gaps. Most research has focused on 2-D displays, and models have been based on 2-D, orientation-sensitive units. The authors present a view of interpolation processes as intrinsically 3-D and producing representations of contours and…

  15. Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.

    PubMed

    Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik

    2007-01-01

    In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.

  16. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose minimized-Laplacian residual interpolation (MLRI) as an alternative to color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient based threshold free (GBTF) algorithm, one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that the proposed demosaicking algorithm outperforms the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.

  17. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for arbitrary-factor interpolation CIC filters and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, 12.5% of the additions and 29.2% of the multiplications were eliminated while computation remains fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop-band attenuation is greater. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is feasible. Owing to its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
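
The basic (non-parallel) interpolation CIC structure that this record starts from can be sketched for a configurable number of stages (illustrative only; the paper's 8× parallel decomposition and compensation filter are not reproduced here):

```python
import numpy as np

def cic_interpolate(x, r=8, n_stages=1):
    """CIC interpolator: comb stages at the low rate, zero-stuffing
    by factor r, then integrator stages at the high rate."""
    y = np.asarray(x, dtype=float)
    for _ in range(n_stages):                 # comb: y[n] = x[n] - x[n-1]
        y = y - np.concatenate(([0.0], y[:-1]))
    up = np.zeros(len(y) * r)
    up[::r] = y                               # insert r-1 zeros between samples
    for _ in range(n_stages):                 # integrator: running sum
        up = np.cumsum(up)
    return up

x = np.array([1.0, 2.0, 3.0])
out = cic_interpolate(x, r=8, n_stages=1)     # one stage acts as a zero-order hold
```

With a single stage the CIC interpolator degenerates to a zero-order hold; adding stages sharpens the low-pass response at the cost of pass-band droop, which is what the compensation filter in the paper corrects.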

  18. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
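
Kernel interpolation of a spatial profile from (depth, feature-activity) samples can be sketched as a Gaussian-kernel weighted average (a generic Nadaraya-Watson smoother under assumed data, not the authors' exact implementation; the kernel width `sigma` is illustrative):

```python
import numpy as np

def kernel_profile(depths, activity, query_depths, sigma=0.25):
    """Estimate feature activity vs. depth with a Gaussian kernel."""
    d = query_depths[:, None] - depths[None, :]
    w = np.exp(-0.5 * (d / sigma) ** 2)       # Gaussian kernel weights
    return (w * activity).sum(axis=1) / w.sum(axis=1)

depths = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # mm along the trajectory
activity = np.array([1.0, 1.2, 3.0, 3.1, 1.1])   # hypothetical feature values
query = np.linspace(0.0, 2.0, 9)
profile = kernel_profile(depths, activity, query)
```

The trade-off the abstract describes is visible here: a larger `sigma` smooths the profile more but blurs sharp transitions such as nucleus boundaries, while a smaller `sigma` preserves resolution at the cost of noise.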

  19. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for arbitrary-factor interpolation CIC filters and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, 12.5% of the additions and 29.2% of the multiplications were eliminated while computation remains fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop-band attenuation is greater. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is feasible. Owing to its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  20. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor of the reliability of satellite optical communications. However, location precision is generally limited by the resolution of CCD. How to improve the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called a "gradient segmentation interpolation approach," or simply, a GSI (gradient segmentation interpolation) algorithm. To take full advantage of the pixels of the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by the simulation experiment. Finally, an experiment is established to verify GSI and SSW algorithms. The results indicate that GSI and SSW algorithms can improve locating accuracy over that calculated by a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
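
The traditional gray centroid baseline that the GSI and SSW algorithms improve on is simply an intensity-weighted average of pixel coordinates (a minimal sketch on a synthetic beacon spot; the spot parameters are made up):

```python
import numpy as np

def gray_centroid(img):
    """Intensity-weighted centroid (row, col) of a beacon image."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic Gaussian spot centered at (12.3, 20.7) on a 32x40 CCD
r, c = np.indices((32, 40))
img = np.exp(-((r - 12.3) ** 2 + (c - 20.7) ** 2) / (2 * 3.0 ** 2))
cy, cx = gray_centroid(img)
```

The sub-pixel estimate comes from the weighting, not the pixel grid; the record's point is that interpolating the spot before taking this centroid (GSI) or re-weighting the central pixels (SSW) pushes the precision beyond what the raw CCD resolution allows.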

  1. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct missing data in the frequency domain using mathematical transforms. An alternative class of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and wave-front curvatures. These attributes encode structural information about subsurface features, such as the dip and strike of a reflector. The wave-front attributes operate in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. Two potential problems remained unsolved in past work on 3-D partial stacks. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work to address these two problems, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is also given. The comparison reveals significant advantages of the 5-D WABI method for steeply dipping events compared with the rank-reduction-based 5-D interpolation technique. Diffraction tails substantially benefit from the improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.

  2. Impacts of historic morphology and sea level rise on tidal hydrodynamics in a microtidal estuary (Grand Bay, Mississippi)

    NASA Astrophysics Data System (ADS)

    Passeri, Davina L.; Hagen, Scott C.; Medeiros, Stephen C.; Bilskie, Matthew V.

    2015-12-01

    This study evaluates the geophysical influence of the combined effects of historic sea level rise (SLR) and morphology on tidal hydrodynamics in the Grand Bay estuary, located in the Mississippi Sound. Since 1848, the landscape of the Mississippi Sound has been significantly altered as a result of natural and anthropogenic factors including the migration of the offshore Mississippi-Alabama (MSAL) barrier islands and the construction of navigational channels. As a result, the Grand Bay estuary has undergone extensive erosion resulting in the submergence of its protective barrier island, Grand Batture. A large-domain hydrodynamic model was used to simulate present (circa 2005) and past conditions (circa 1848, 1917, and 1960) with unique sea levels, bathymetry, topography and shorelines representative of each time period. Additionally, a hypothetical scenario was performed in which Grand Batture Island exists under 2005 conditions in order to observe the influence of the island on tidal hydrodynamics within the Grand Bay estuary. Changes in tidal amplitudes from the historic conditions varied. Within the Sound, tidal amplitudes were unaltered due to the open exposed shoreline; however, in semi-enclosed embayments outside of the Sound, tidal amplitudes increased. In addition, harmonic constituent phases were slower historically. The position of the MSAL barrier island inlets influenced tidal currents within the Sound; the westward migration of Petit Bois Island allowed stronger tidal velocities to be centered on the Grand Batture Island. Maximum tidal velocities within the Grand Bay estuary were 5 cm/s faster historically, and reversed from being flood dominant in 1848 to ebb dominant in 2005. If the Grand Batture Island was reconstructed under 2005 conditions, tidal amplitudes and phases would not be altered, indicating that the offshore MSAL barrier islands and SLR have a greater influence on these tidal parameters within the estuary. 
However, maximum tidal velocities would increase by as much as 5 cm/s (63%) and currents would become more ebb dominant. Results of this study illustrate the hydrodynamic response of the system to SLR and the changing landscape, and provide insight into potential future changes under SLR and barrier island evolution.

  3. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many such studies have adopted interpolation procedures including kriging, moving average, inverse distance weighting (IDW) and nearest point without the necessary recourse to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and at a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). The data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill and range) using PAleontological STatistics (PAST3), before the mean values were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity interpolated with kriging varied from 120.1 to 219.5 µScm-1, it varied from 105.6 to 220.0 µScm-1 and from 135.0 to 173.9 µScm-1 with nearest point and moving average interpolation, respectively (Figure 2). It also showed that whereas the computed variogram produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that the choice of interpolation procedure may affect decisions and conclusions drawn from modelling inferences.

  4. Constructing polyatomic potential energy surfaces by interpolating diabatic Hamiltonian matrices with demonstration on green fluorescent protein chromophore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology

    2014-04-28

    Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited state dynamics is targeted with multiple electronic states. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings, ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surroundings. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with the intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives at each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. We conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable to studying nonadiabatic behaviors of sizeable molecules.

  5. Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Campbell, L.; Purviance, J.

    1992-01-01

    A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
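
Interpolating among measured S-parameters can be sketched as linear interpolation on the real and imaginary parts over a swept variable such as frequency (hypothetical values; this illustrates only the interpolation step, not the paper's statistically valid sampling of the data base):

```python
import numpy as np

def interp_sparam(freqs, s_meas, f_query):
    """Linearly interpolate a complex S-parameter vs. frequency."""
    re = np.interp(f_query, freqs, s_meas.real)
    im = np.interp(f_query, freqs, s_meas.imag)
    return re + 1j * im

freqs = np.array([1e9, 2e9, 3e9])                      # Hz
s21 = np.array([0.9 - 0.1j, 0.7 - 0.3j, 0.5 - 0.4j])   # hypothetical measurements
s_mid = interp_sparam(freqs, s21, 1.5e9)               # midpoint estimate
```

Interpolating real and imaginary parts separately keeps the result consistent when the phase wraps, which interpolating magnitude and angle directly does not guarantee.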

  6. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…

  7. High-Fidelity Real-Time Trajectory Optimization for Reusable Launch Vehicles

    DTIC Science & Technology

    2006-12-01

    (The indexed abstract field contains only garbled figure titles from the report's table of contents, including "Max DR Yawing Moment History," "Propagation using 'ode45' (Euler Angles)," "Interpolated Elevon Controls using Various MATLAB Schemes" and "Interpolated Flap Controls using Various MATLAB Schemes.")

  8. Visualizing and Understanding the Components of Lagrange and Newton Interpolation

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…

  9. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have improved over recent decades. Mutual information is a basic concept of information theory that indicates the dependency of two random variables (or two images). Evaluating the mutual information of two images requires their joint probability distribution. Several interpolation methods, such as Partial Volume (PV) and bilinear interpolation, are used to estimate the joint probability distribution. Both methods produce artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
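
The Partial Volume scheme updates the joint histogram with the four bilinear weights of each sample instead of interpolating an intensity value (a minimal sketch on hypothetical images with a fixed fractional shift; bin counts and image sizes are illustrative):

```python
import numpy as np

def pv_joint_hist(ref, flt, dy, dx, n_bins=8):
    """Joint histogram of ref vs. flt shifted by a fractional (dy, dx),
    updated with Partial Volume (bilinear) weights.

    ref, flt: 2-D integer images with values in [0, n_bins)
    dy, dx:   fractional offsets in [0, 1)
    """
    h = np.zeros((n_bins, n_bins))
    H, W = ref.shape
    for y in range(H - 1):
        for x in range(W - 1):
            fy, fx = y + dy, x + dx
            y0, x0 = int(fy), int(fx)
            wy, wx = fy - y0, fx - x0
            # distribute ONE count over the 4 neighbouring bins of flt
            for yy, xx, w in ((y0, x0, (1 - wy) * (1 - wx)),
                              (y0, x0 + 1, (1 - wy) * wx),
                              (y0 + 1, x0, wy * (1 - wx)),
                              (y0 + 1, x0 + 1, wy * wx)):
                h[ref[y, x], flt[yy, xx]] += w
    return h

rng = np.random.default_rng(0)
ref = rng.integers(0, 8, size=(10, 12))
flt = rng.integers(0, 8, size=(10, 12))
h = pv_joint_hist(ref, flt, 0.3, 0.6)
```

Because the four weights sum to one, each sample contributes exactly one count; the artifacts the paper discusses arise because those weights vary smoothly with the sub-pixel shift, modulating the histogram (and hence the mutual information) periodically with the transformation.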

  10. A novel interpolation approach for the generation of 3D-geometric digital bone models from image stacks

    PubMed Central

    Mittag, U.; Kriechbaumer, A.; Rittweger, J.

    2017-01-01

    The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415

  11. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods greatly affects machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. To solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved efficiently by analytic methods in real time. Theoretically, the proposed method totally eliminates feedrate fluctuation for any 2nd-degree NURBS curve and interpolates 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion, considering multiple constraints and scheduling errors via an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.

  12. INTERPOL's Surveillance Network in Curbing Transnational Terrorism

    PubMed Central

    Gardeazabal, Javier; Sandler, Todd

    2015-01-01

    Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.

  13. Hermetic turbine generator

    DOEpatents

    Meacher, John S.; Ruscitto, David E.

    1982-01-01

    A Rankine cycle turbine drives an electric generator and a feed pump, all on a single shaft, and all enclosed within a hermetically sealed case. The shaft is vertically oriented with the turbine exhaust directed downward and the shaft is supported on hydrodynamic fluid film bearings using the process fluid as lubricant and coolant. The selection of process fluid, type of turbine, operating speed, system power rating, and cycle state points are uniquely coordinated to achieve high turbine efficiency at the temperature levels imposed by the recovery of waste heat from the more prevalent industrial processes.

  14. The fabrication and test of a dual spin gas bearing reaction wheel

    NASA Technical Reports Server (NTRS)

    Opper, R. L.; Owen, W. J.

    1973-01-01

    The design and fabrication of a dual spin gas bearing reaction wheel are discussed. Numerical analyses, data, and conclusions from performance tests are reported. The unique feature of the reaction wheel is the dual gas bearing concept in which two sets of self-acting hydrodynamic bearing are used to obtain stictionless operation and low noise around zero speed and to accommodate the momentum range from plus 6.8 N-m-s to minus 6.8 N-m-s with the potential for long life inherent in gas bearings.

  15. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906

  16. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation.

  17. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  18. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that contain the motion-path constraint, we obtain natural, artifact-free frame interpolation. We tested different types of old film sequences and compared with other methods; the results show that our method performs as desired, without hole or ghost effects.

  19. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.

  20. Shape Control in Multivariate Barycentric Rational Interpolation

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa Thang; Cuyt, Annie; Celis, Oliver Salazar

    2010-09-01

    The most stable formula for a rational interpolant for use on a finite interval is the barycentric form [1, 2]. A simple choice of the barycentric weights ensures the absence of (unwanted) poles on the real line [3]. In [4] we indicate that a more refined choice of the weights in barycentric rational interpolation can guarantee comonotonicity and coconvexity of the rational interpolant in addition to a pole-free region of interest. In this presentation we generalize the above to the multivariate case. We use a product-like form of univariate barycentric rational interpolants and indicate how the location of the poles and the shape of the function can be controlled. This functionality is of importance in the construction of mathematical models that need to express a certain trend, such as in probability distributions, economics, population dynamics, tumor growth models, etc.
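
    As a concrete univariate starting point, the following is a minimal sketch of a barycentric rational interpolant using Berrut's weights w_i = (-1)^i, one simple weight choice known to give an interpolant with no real poles; the shape-preserving weights discussed in the abstract refine this choice. The test function is an illustrative Runge-type example, not from the source.

```python
def berrut_interpolant(xs, ys):
    # Berrut's sign-alternating weights guarantee a pole-free interpolant.
    ws = [(-1) ** i for i in range(len(xs))]

    def p(x):
        num = den = 0.0
        for xi, yi, wi in zip(xs, ys, ws):
            if x == xi:              # exactly on a node: return the data value
                return yi
            t = wi / (x - xi)
            num += t * yi
            den += t
        return num / den

    return p

xs = [i / 4 for i in range(5)]                  # nodes on [0, 1]
ys = [1.0 / (1.0 + 25 * x * x) for x in xs]     # Runge-type test function
p = berrut_interpolant(xs, ys)
```

The interpolant reproduces the data exactly at the nodes and stays well behaved between them, which is the stability property the abstract refers to.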

  1. Pre-inverted SESAME data table construction enhancements to correct unexpected inverse interpolation pathologies in EOSPAC 6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pimentel, David A.; Sheppard, Daniel G.

    It was recently demonstrated that EOSPAC 6 continued to incorrectly create and interpolate pre-inverted SESAME data tables after the release of version 6.3.2beta.2. Significant interpolation pathologies were discovered to occur when EOSPAC 6's host software enabled pre-inversion with the EOS_INVERT_AT_SETUP option. This document describes a solution that uses data transformations found in EOSPAC 5 and its predecessors. The numerical results and performance characteristics of both the default and pre-inverted interpolation modes in both EOSPAC 6.3.2beta.2 and the fixed logic of EOSPAC 6.4.0beta.1 are presented herein, and the latter software release is shown to produce significantly improved numerical results for the pre-inverted interpolation mode.

  2. Illumination estimation via thin-plate spline interpolation.

    PubMed

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation.
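
    A minimal 2-D thin-plate spline interpolator of the kind described above can be sketched as follows; the actual method trains on image thumbnails and illumination chromaticities, which are replaced here by a handful of hypothetical (chromaticity-like) training points.

```python
import numpy as np

def tps_fit(pts, vals):
    # Fit a 2-D thin-plate spline: radial part with kernel U(r) = r^2 log r
    # plus an affine part, solved as one linear system.
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), pts])          # affine columns [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]

    def f(x, y):
        r = np.linalg.norm(pts - np.array([x, y]), axis=1)
        u = np.where(r > 0, r**2 * np.log(np.where(r > 0, r, 1.0)), 0.0)
        return float(u @ w + a[0] + a[1] * x + a[2] * y)

    return f

# Hypothetical training points in a 2-D feature space with associated
# illuminant chromaticity values.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.1, 0.4, 0.3, 0.6, 0.25])
f = tps_fit(pts, vals)
```

The fitted surface passes exactly through the training values, which is the property that lets a new image's illuminant be read off the interpolated function.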

  3. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the data base indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical (ordinary kriging) interpolation methods were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive models, support vector machines and neural networks, as single and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines; for 850 hPa it is followed by the different types of support vector machines and ordinary kriging. Overall, explanatory variables improve the interpolation results.

  4. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, where a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. 
A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas but climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra Nevada Mountains.

  5. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s⁻¹ for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. 
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
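
    A toy version of this propagation scheme at a single grid cell can be sketched as follows: perturb the kriged inputs with the kriging SDs and push each draw through a PET-like function. The PET formula here is a hypothetical stand-in, and the correlations between input errors used in the study are omitted.

```python
import random
import statistics

def pet_estimate(temp_c, rel_hum, wind_ms):
    # Stand-in model: PET increases with temperature and wind and
    # decreases with humidity (illustrative only, not the study's model).
    return max(0.0, 0.05 * temp_c * (1.0 - rel_hum / 100.0) * (1.0 + 0.5 * wind_ms))

def propagate(temp, sd_t, hum, sd_h, wind, sd_w, n=100, seed=42):
    # Monte Carlo propagation: draw perturbed inputs from the kriging SDs,
    # evaluate the model, and summarize the spread of the outputs.
    rng = random.Random(seed)
    draws = [
        pet_estimate(rng.gauss(temp, sd_t), rng.gauss(hum, sd_h), rng.gauss(wind, sd_w))
        for _ in range(n)
    ]
    mean = sum(draws) / len(draws)
    cv = statistics.stdev(draws) / mean     # coefficient of variation
    return mean, cv

# Kriging SDs from the abstract's spring-season example; the mean input
# values are hypothetical.
mean_pet, cv = propagate(15.0, 2.6, 60.0, 8.7, 3.0, 0.38)
```

Repeating this at every grid cell yields the maps of PET means and CVs described above.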

  6. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management

    NASA Astrophysics Data System (ADS)

    Yuval; Rimon, Y.; Graber, E. R.; Furman, A.

    2013-07-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. That inherent trade-off between interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. 
The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli shoreline.
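
    The local inverse-distance-weighting scheme with a circular inclusion zone can be sketched as follows: only observations within the radius contribute, and a grid point with no observations in its zone is left unassigned, which illustrates the accuracy/coverage trade-off described above. The coordinates and concentrations are hypothetical.

```python
import math

def idw_local(obs, x, y, radius, power=2.0):
    # Inverse distance weighting restricted to a circular inclusion zone.
    num = den = 0.0
    for ox, oy, c in obs:
        d = math.hypot(x - ox, y - oy)
        if d > radius:
            continue                 # outside the inclusion zone
        if d == 0.0:
            return c                 # query point coincides with a well
        w = 1.0 / d ** power
        num += w * c
        den += w
    return num / den if den > 0 else None   # None: no coverage here

# (x, y, concentration); zeros mark the clean part of the aquifer.
obs = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 120.0), (1, 1, 80.0)]
inside = idw_local(obs, 0.5, 0.5, radius=1.0)     # all four wells contribute
outside = idw_local(obs, 5.0, 5.0, radius=1.0)    # no wells in zone -> None
```

An elliptical zone would replace the distance test with an anisotropic one, but the trade-off works the same way.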

  7. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management.

    PubMed

    Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex

    2014-08-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineates the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. 
The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli shoreline. The implications for aquifer management are discussed.

  8. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    NASA Astrophysics Data System (ADS)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients ( R 2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R 2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. 
The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
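
    The kind of per-site validation computed above can be sketched in a few lines: the coefficient of determination (R²) and the root mean square error between interpolated and observed daily values at one validation site. The numbers below are hypothetical.

```python
import math

def r2_and_rmse(obs, est):
    # R^2 = 1 - SS_res / SS_tot; RMSE = sqrt(mean squared residual).
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

obs = [5.1, 7.3, 6.8, 9.0, 11.2, 10.5]   # observed daily mean temperature
est = [5.0, 7.6, 6.5, 9.4, 10.9, 10.8]   # interpolated values at the site
r2, rmse = r2_and_rmse(obs, est)
```

Aggregating these statistics over all validation sites and seasons gives the R² range (0.65 to 0.90) reported in the abstract.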

  9. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction, as well as the ability to calculate the spatial distribution of uncertainty, allowing the error of an interpolation to be estimated at any particular point. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files. This is due to the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
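
    The structural advantage mentioned above, that on a regularly spaced grid the bounding neighbours of a query point are found by index arithmetic in O(1) with no search, can be illustrated with plain bilinear interpolation (one of the standard baselines kriging is compared against). The grid values are hypothetical.

```python
def bilinear(grid, x0, y0, dx, dy, x, y):
    # Locate the cell containing (x, y) directly from the grid spacing:
    # no neighbor search is needed on a regular grid.
    i = min(int((x - x0) / dx), len(grid[0]) - 2)
    j = min(int((y - y0) / dy), len(grid) - 2)
    u = (x - (x0 + i * dx)) / dx        # fractional position in the cell
    v = (y - (y0 + j * dy)) / dy
    return ((1 - u) * (1 - v) * grid[j][i] + u * (1 - v) * grid[j][i + 1]
            + (1 - u) * v * grid[j + 1][i] + u * v * grid[j + 1][i + 1])

# 3x3 grid sampling the plane f(x, y) = 2x + 3y on unit spacing;
# bilinear interpolation reproduces a linear field exactly.
grid = [[2 * x + 3 * y for x in range(3)] for y in range(3)]
val = bilinear(grid, 0.0, 0.0, 1.0, 1.0, 1.25, 0.5)
```

A gridded kriging variant exploits the same index arithmetic, but computes the weights from a variogram model instead of the cell fractions.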

  10. [Improvement of Digital Capsule Endoscopy System and Image Interpolation].

    PubMed

    Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai

    2016-01-01

    Traditional capsule endoscopes collect and transmit analog images, with weak anti-interference ability, low frame rate and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images, with a frame rate of up to 30 frames/s and a pixel resolution of 400 x 400. The image is compressed in the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm is proposed, based on the relationship between the image planes, to obtain higher-quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation

  11. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  12. Is Interpolation Cognitively Encapsulated? Measuring the Effects of Belief on Kanizsa Shape Discrimination and Illusory Contour Formation

    ERIC Educational Resources Information Center

    Keane, Brian P.; Lu, Hongjing; Papathomas, Thomas V.; Silverstein, Steven M.; Kellman, Philip J.

    2012-01-01

    Contour interpolation is a perceptual process that fills-in missing edges on the basis of how surrounding edges (inducers) are spatiotemporally related. Cognitive encapsulation refers to the degree to which perceptual mechanisms act in isolation from beliefs, expectations, and utilities (Pylyshyn, 1999). Is interpolation encapsulated from belief?…

  13. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved matching percentages higher by up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
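
    The upsampling step can be sketched with a cubic Hermite resampler using central-difference slopes, a simplified stand-in for the PCHIP and SPLINE interpolants used in the study (in practice scipy's PchipInterpolator or CubicSpline would be used). The waveform below is a hypothetical stand-in for an ECG segment sampled at a low rate.

```python
def hermite_upsample(samples, factor):
    # Cubic Hermite interpolation on uniformly spaced samples, with
    # central-difference slopes (one-sided at the ends).
    n = len(samples)
    m = [0.0] * n
    m[0] = samples[1] - samples[0]
    m[-1] = samples[-1] - samples[-2]
    for i in range(1, n - 1):
        m[i] = 0.5 * (samples[i + 1] - samples[i - 1])
    out = []
    for i in range(n - 1):
        for k in range(factor):
            t = k / factor
            # Standard cubic Hermite basis functions.
            h00 = 2 * t**3 - 3 * t**2 + 1
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out.append(h00 * samples[i] + h10 * m[i]
                       + h01 * samples[i + 1] + h11 * m[i + 1])
    out.append(samples[-1])
    return out

low_rate = [0.0, 0.1, 1.2, 0.3, 0.05, 0.0]   # crude ECG-beat-like shape
high_rate = hermite_upsample(low_rate, 4)     # 4x the sampling frequency
```

The resampled signal passes exactly through the original samples while smoothing the segments between them, which is what allows the matching stage to compare recordings across sampling frequencies.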

  14. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

    The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational cost, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.

  15. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.

  16. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  17. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  18. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

    NASA Technical Reports Server (NTRS)

    Allen, P. A.; Wells, D. N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
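
    The report interpolates in four variables; as a hedged sketch of the building block involved, the following shows bilinear interpolation on a nonuniform rectangular grid over two hypothetical axes (say, a/c and E/ys). The axis and table values are illustrative only, not data from the report.

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation on a rectangular (possibly nonuniform) grid.
    xs, ys are sorted axis values; table[i][j] is the value at (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])
```

    Nesting this step once per additional parameter extends the scheme to the full four-variable table lookup.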

  19. Generation of signature databases with fast codes

    NASA Astrophysics Data System (ADS)

    Bradford, Robert A.; Woodling, Arthur E.; Brazzell, James S.

    1990-09-01

    Using the FASTSIG signature code to generate optical signature databases for the Ground-based Surveillance and Tracking System (GSTS) Program has improved the efficiency of the database generation process. The goal of the current GSTS database is to provide standardized, threat-representative target signatures that can easily be used for acquisition and track studies, discrimination algorithm development, and system simulations. Large databases, with as many as eight interpolation parameters, are required to maintain the fidelity demands of discrimination and to generalize their application to other strategic systems. As the need increases for quick availability of long wave infrared (LWIR) target signatures for an evolving design-to-threat, FASTSIG has become a database generation alternative to the industry standard Optical Signatures Code (OSC). FASTSIG, developed in 1985 to meet the unique strategic systems demands imposed by the discrimination function, has the significant advantage of running faster than the OSC, typically requiring two percent of the CPU time. It uses analytical approximations to model axisymmetric targets with the fidelity required for discrimination analysis. Access to the signature database is accomplished through the waveband integration and interpolation software, INTEG and SIGNAT. This paper details this procedure, presents sample interpolated signatures, and covers verification by comparison to the OSC in order to establish the fidelity of the FASTSIG-generated database.

  20. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our CNC (Computer Numerical Control) system under development. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. From the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolating cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, along with how the interpolator is integrated into our CNC system. The servo-controlling structure of the CNC system is also introduced. An illustrative example indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
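
    The feedrate-driven parameter update underlying such interpolators can be sketched in minimal form, reduced to a single 2D parametric curve and a first-order Taylor step; the paper's 5-axis dual-NURBS formulation is more involved, and the function names here are illustrative.

```python
import math

def next_parameter(u, feedrate, ts, d_curve):
    """First-order Taylor step: advance the curve parameter so the tool tip moves
    approximately feedrate * ts of arc length in one interpolation cycle.
    d_curve(u) returns the derivative (dx/du, dy/du) of the path."""
    dx, dy = d_curve(u)
    return u + feedrate * ts / math.hypot(dx, dy)
```

    On a unit circle, for example, one cycle at feedrate 10 mm/s with a 1 ms cycle time advances the tool tip by very nearly 0.01 mm of arc, as intended.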

  1. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania’s South Esk Hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
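
    As a minimal sketch of the reduction approach evaluated here, the following interpolates spatially with conventional IDW inside each of the two bracketing time slices and then linearly in time. The data layout is illustrative, not the paper's implementation.

```python
def idw(samples, q, p=2.0):
    """Inverse distance weighting at 2D query point q from (point, value) pairs."""
    num = den = 0.0
    for (x, y), v in samples:
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v                      # query coincides with a sample
        w = d2 ** (-p / 2.0)
        num += w * v
        den += w
    return num / den

def sti_reduction(slices, t, q, p=2.0):
    """Reduction approach: IDW in space within the two bracketing time slices,
    then linear interpolation in time. slices: sorted list of (time, samples)."""
    for (t0, s0), (t1, s1) in zip(slices, slices[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (1 - a) * idw(s0, q, p) + a * idw(s1, q, p)
    raise ValueError("t outside the covered time span")
```

    Treating time by this one-dimensional reduction, rather than folding it into the distance metric, is exactly what distinguishes the reduction approach from the extension approach compared in the paper.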

  2. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

    Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes of magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, using 27 years of monthly average precipitation data obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides more accuracy than the non-transformed hierarchical Bayesian method.
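
    The Box-Cox transform applied to the skewed precipitation response can be sketched in its standard one-parameter form; the paper's hierarchical Bayesian machinery is not reproduced here.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a positive observation y with parameter lam."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse transform, used to map interpolated values back to data scale."""
    return math.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)
```

    Interpolation is carried out on the transformed (more nearly Gaussian) scale, and predictions are mapped back through the inverse transform.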

  3. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  4. VizieR Online Data Catalog: New atmospheric parameters of MILES cool stars (Sharma+, 2016)

    NASA Astrophysics Data System (ADS)

    Sharma, K.; Prugniel, P.; Singh, H. P.

    2015-11-01

    MILES V2 spectral interpolator. The FITS file is an improved version of the MILES interpolator previously presented in PVK. It contains the coefficients of the interpolator, which allow one to compute an interpolated spectrum given an effective temperature, surface gravity, and metallicity (Teff, logg, and [Fe/H]). The file consists of three extensions containing the three temperature regimes described in the paper: extension 0, warm (Teff 4000-9000 K); extension 1, hot (Teff > 7000 K); and extension 2, cold (Teff < 4550 K). The three functions are linearly interpolated in the overlapping Teff regions. Each extension contains a 2D image-type array whose first axis is the wavelength, described by a WCS (air wavelength, starting at 3536 Å, step = 0.9 Å). This FITS file can be used by ULySS v1.3 or higher. (5 data files).
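
    The linear blending between temperature regimes can be sketched as follows, assuming per-regime interpolator functions. The cross-fade form is a plausible reading of "linearly interpolated in the Teff overlapping regions", not code from the ULySS distribution.

```python
def blended_spectrum(teff, warm, hot, cold):
    """Linear cross-fade between per-regime interpolators in the overlap zones
    (warm 4000-9000 K, hot > 7000 K, cold < 4550 K)."""
    if teff >= 9000:
        return hot(teff)
    if teff > 7000:                       # warm/hot overlap: 7000-9000 K
        w = (teff - 7000.0) / 2000.0
        return (1 - w) * warm(teff) + w * hot(teff)
    if teff >= 4550:
        return warm(teff)
    if teff >= 4000:                      # warm/cold overlap: 4000-4550 K
        w = (teff - 4000.0) / 550.0
        return (1 - w) * cold(teff) + w * warm(teff)
    return cold(teff)
```

    The cross-fade guarantees the blended spectrum is continuous in Teff across regime boundaries, which a hard switch between extensions would not.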

  5. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.

  6. Studying large jellyfish swimming hydrodynamics using a biomimetic robot named Cyro 2

    NASA Astrophysics Data System (ADS)

    Stewart, Colin; Krummel, Gregory; Villanueva, Alex; Marut, Kenneth; Priya, Shashank

    2015-11-01

    Some species of jellyfish can grow to great sizes, such as the lion's mane jellyfish (Cyanea capillata), which can span 2 m in diameter with tentacles 30 m long, roughly the same length as a blue whale. This is an impressive feat for an animal that begins its mobile life three orders of magnitude smaller. Such growth can require a large energy budget, suggesting that Cyanea may be a uniquely efficient swimmer, successful predator, or both. Either accolade would stem from a high level of hydrodynamic mastery, as oblate jellyfish like Cyanea rely on the flow currents generated by bell pulsation for both propulsive thrust and prey encounter. However, further investigation has been hindered by the lack of reported quantitative flow measurements, perhaps due to the logistic challenges inherent to studying large specimens in vivo. Here, we used a 50 cm diameter biomimetic Cyanea robot named Cyro 2 as a proxy to study the hydrodynamics of large jellyfish. The effect of different trailing structure morphologies (e.g. oral arms and tentacles), swimming gaits, and kinematics on flow patterns were measured using PIV. Baseline swimming performance using biomimetic settings (but no trailing structures) was characterized by a cycle average velocity of 6.58 cm s-1, thrust of 1.9 N, and power input of 5.7 W, yielding a vehicle efficiency of 2.2% and a cost of transport of 15.4 J kg-1 m-1.

  7. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial confinement energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles, depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
    Program summary
    Program title: MULTI2D
    Catalogue identifier: AECV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 151 098
    No. of bytes in distributed program, including test data, etc.: 889 622
    Distribution format: tar.gz
    Programming language: C
    Computer: PC (32-bit architecture)
    Operating system: Linux/Unix
    RAM: 2 Mbytes
    Word size: 32 bits
    Classification: 19.7
    External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h) are required.
    Nature of problem: In inertial confinement fusion and related experiments with lasers and particle beams, energy transport by thermal radiation becomes important. Under these conditions, the radiation field strongly interacts with the hydrodynamic motion through emission and absorption processes.
    Solution method: The equations of radiation transfer coupled with Lagrangian hydrodynamics, heat diffusion and beam tracing (laser or ions) are solved in two-dimensional axially symmetric geometry (R-Z coordinates) using a fractional step scheme. Radiation transfer is solved with angular resolution. Matter properties are either interpolated from tables (equations of state and opacities) or computed by user routines (conductivities and beam attenuation).
    Restrictions: The code has been designed for typical conditions prevailing in inertial confinement fusion (ns time scale, matter states close to local thermodynamic equilibrium, negligible radiation pressure, …). Although a wider range of situations can be treated, extrapolations to regions beyond this design range need special care.
    Unusual features: A special computer language, called r94, is used at top levels of the code. These parts have to be converted to standard C by a translation program (supplied as part of the package). Due to the complexity of the code (hydro-code, grid generation, user interface, graphic post-processor, translator program, installation scripts), extensive manuals are supplied as part of the package.
    Running time: 567 seconds for the example supplied.

  8. Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis

    NASA Astrophysics Data System (ADS)

    Li, Yue; Li, Chang-Tsun

    The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU), a unique stochastic fingerprint of image sensors, to various types of digital forensic investigations such as source device identification and integrity verification. In this work we propose a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array to interpolate artificial components from physical ones. Experimental results presented in this work show the superiority of the proposed DPRNU over the commonly used version. We also propose a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.

  9. The cosmic child: The artwork of Joseph Cornell and a type of unusual sensibility, or thinking inside the box: the mind that channels infinity.

    PubMed

    Scheftel, Susan

    2009-01-01

    This paper explores the unique mind of the twentieth-century American artist Joseph Cornell, known for his boxes and collages made with "found" materials. The author interpolates reflections upon Cornell with vignettes from the treatment of a young child, speculating that certain individuals may possess a constellation of vulnerabilities/sensitivities that constitute what is referred to as a "cosmic" sensibility. It is suggested that such an orientation can lead variously to anxieties and separation problems, as well as (or in addition to) intellectual and/or artistic giftedness. The outcome of such dynamics would depend on a complex interplay of temperament, circumstance, and relational attunement.

  10. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
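
    The zero-fill technique being compared against can be sketched directly: pad the spectrum with zeros (splitting the Nyquist bin to keep it Hermitian) and inverse-transform, which yields trigonometric interpolation of the samples. A naive DFT is used below for clarity where an FFT would be used in practice; even input length is assumed.

```python
import cmath
import math

def dft(x):
    # naive O(N^2) DFT, for clarity only
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def zero_fill_interp(x, M):
    """Trigonometric interpolation of real samples x (even length) onto M points."""
    N = len(x)
    X = dft(x)
    Y = [0j] * M
    h = N // 2
    for k in range(h):
        Y[k] = X[k]                  # positive frequencies
    Y[h] = X[h] / 2                  # split the Nyquist bin ...
    Y[M - h] = X[h] / 2              # ... so the padded spectrum stays Hermitian
    for k in range(h + 1, N):
        Y[M - N + k] = X[k]          # negative frequencies
    return [(M / N) * v.real for v in idft(Y)]
```

    For a band-limited input the interpolation is exact, but the cost of the larger inverse transform is precisely the overhead that repetitive convolution avoids.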

  11. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.

  12. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as "physically reasonable" and "visually pleasing" is examined.

  13. Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.

    PubMed

    de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2008-01-01

    The robust algorithm OPED for the reconstruction of images from Radon data has been recently developed. It reconstructs an image from parallel data within a special scanning geometry that does not need rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most recommendable method. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved to be advantageous only for small sinograms (below 50 fan views).

  14. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
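
    A condensed sketch of the two stages (kNN search, then adaptive weighted interpolation): brute-force search stands in for the paper's even-grid GPU search, and the distance-to-power mapping is purely illustrative, not the paper's formula.

```python
import heapq
import math

def aidw(samples, q, k=4):
    """Adaptive IDW sketch: pick the k nearest samples, then raise the IDW power
    when the neighborhood is dense and lower it when sparse."""
    near = heapq.nsmallest(
        k, (((x - q[0]) ** 2 + (y - q[1]) ** 2, v) for (x, y), v in samples))
    if near[0][0] == 0.0:
        return near[0][1]                      # exact at a data point
    mean_d = sum(math.sqrt(d2) for d2, _ in near) / k
    p = min(4.0, max(1.0, 2.0 / mean_d))       # illustrative adaptive power
    num = sum(v * d2 ** (-p / 2) for d2, v in near)
    den = sum(d2 ** (-p / 2) for d2, _ in near)
    return num / den
```

    On the GPU, the brute-force kNN step above is what the even-grid structure replaces: points are bucketed into cells so each query only inspects nearby cells.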

  15. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  16. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted.
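
    Tikhonov regularization, one of the three methods compared, can be sketched via the normal equations on a small dense system; this toy version omits the NUFFT context entirely and just shows the regularized solve.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via (A^T A + lam*I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    return solve(AtA, Atb)
```

    Increasing lam trades fidelity to the data for a better-conditioned system, which is exactly the trade-off the study evaluates for large interpolation kernels.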

  17. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare to the best known CPU implementations as well as to a well known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators, using SSE vector operations, that are faster than any previously published solution.
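
    The core 3D lookup can be sketched as follows: sort the fractional coordinates to select one of six tetrahedra in the unit cell and form barycentric weights from the sorted differences. The data layout is illustrative (the paper's GPU version reorders the table for memory coalescing); the scheme reproduces any linear function exactly.

```python
def tetra_interp(corner, frac):
    """Tetrahedral interpolation in a unit cell.
    corner maps (i, j, k) with i, j, k in {0, 1} to a value; frac is (dx, dy, dz)."""
    order = sorted(range(3), key=lambda axis: -frac[axis])  # axes by falling frac
    d = [frac[axis] for axis in order]
    # tetrahedron vertex chain from (0,0,0) to (1,1,1), following `order`
    verts = [(0, 0, 0)]
    v = [0, 0, 0]
    for axis in order:
        v[axis] = 1
        verts.append(tuple(v))
    weights = [1 - d[0], d[0] - d[1], d[1] - d[2], d[2]]
    return sum(w * corner[vt] for w, vt in zip(weights, verts))
```

    Only four table entries are touched per sample (versus eight for trilinear), which is one reason the tetrahedral form maps well to memory-bound hardware.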

  18. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value would be reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition FLEXPART (http://flexpart.eu) which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval with linear interpolation to be applied in FLEXPART later between them. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. 
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
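
    The conservation problem described above can be illustrated numerically. The following sketch (with invented interval values; this is not the FLEXPART algorithm itself) treats interval-mean precipitation as a point value at each interval centre, linearly interpolates, and then checks the cell-wise integrals:

```python
import numpy as np

# Invented interval-mean precipitation for three consecutive intervals.
edges = np.array([0.0, 1.0, 2.0, 3.0])   # interval boundaries (h)
means = np.array([4.0, 0.0, 2.0])        # interval-mean rates (mm/h)

# Naive scheme: treat each mean as a point value at the interval centre.
centres = 0.5 * (edges[:-1] + edges[1:])
t = np.linspace(edges[0], edges[-1], 30001)
rate = np.interp(t, centres, means)      # np.interp clamps at both ends

# Integrate the interpolated field back over each original interval.
cell_integrals = [
    np.trapz(rate[(t >= a) & (t <= b)], t[(t >= a) & (t <= b)])
    for a, b in zip(edges[:-1], edges[1:])
]
# Targets are means * width = [4.0, 0.0, 2.0]; the interpolated field
# instead yields roughly [3.5, 0.75, 1.75] -- the total (6.0) is
# conserved, but the cell-wise integrals are not.
```

    The remapping algorithm of the abstract restores the per-interval integrals by inserting supporting subgrid points inside each interval while keeping the series continuous and non-negative.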

  19. Hydrodynamic Modeling of the Deep Impact Mission into Comet Tempel 1

    NASA Astrophysics Data System (ADS)

    Sorli, Kya; Remington, Tané; Bruck Syal, Megan

    2018-01-01

    Kinetic impact is one of the primary strategies to deflect hazardous objects from an Earth-impacting trajectory. The only such test on a small body to date is the 2005 Deep Impact mission into comet Tempel 1, in which a 366-kg impactor struck the comet at ~10 km/s, liberating an enormous amount of vapor and ejecta. Code comparisons with observations of the event represent an important source of new information about the initial conditions of small bodies and an extraordinary opportunity to test our simulation capabilities on a rare, full-scale experiment. Using the Adaptive Smoothed Particle Hydrodynamics (ASPH) code, Spheral, we explore how variations in target material properties such as strength, composition, porosity, and layering affect impact results, in order to best match the observed crater size and ejecta evolution. Benchmarking against this unique small-body experiment provides an enhanced understanding of our ability to simulate asteroid or comet response to future deflection missions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-739336-DRAFT.

  20. Measurement of Hydrodynamic Growth near Peak Velocity in an Inertial Confinement Fusion Capsule Implosion using a Self-Radiography Technique

    NASA Astrophysics Data System (ADS)

    Pickworth, L. A.; Hammel, B. A.; Smalyuk, V. A.; MacPhee, A. G.; Scott, H. A.; Robey, H. F.; Landen, O. L.; Barrios, M. A.; Regan, S. P.; Schneider, M. B.; Hoppe, M.; Kohut, T.; Holunga, D.; Walters, C.; Haid, B.; Dayton, M.

    2016-07-01

    First measurements of hydrodynamic growth near peak implosion velocity in an inertial confinement fusion (ICF) implosion at the National Ignition Facility were obtained using a self-radiographing technique and a preimposed Legendre mode 40, λ = 140 μm, sinusoidal perturbation. These are the first measurements of the total growth at the most unstable mode from acceleration Rayleigh-Taylor achieved in any ICF experiment to date, showing growth of the areal density perturbation of ~7000×. Measurements were made at convergences of ~5× to ~10× at both the waist and pole of the capsule, demonstrating simultaneous measurements of the growth factors from both lines of sight. The areal density growth factors are an order of magnitude larger than prior experimental measurements and differed by ~2× between the waist and the pole, showing asymmetry in the measured growth factors. These new measurements significantly advance our ability to diagnose perturbations detrimental to ICF implosions, uniquely intersecting the change from an accelerating to decelerating shell, with multiple simultaneous angular views.

  1. Highly bacterial resistant silver nanoparticles: synthesis and antibacterial activities

    NASA Astrophysics Data System (ADS)

    Chudasama, Bhupendra; Vala, Anjana K.; Andhariya, Nidhi; Mehta, R. V.; Upadhyay, R. V.

    2010-06-01

    In this article, we describe a simple one-pot rapid synthesis route to produce uniform silver nanoparticles by thermal reduction of AgNO3 using oleylamine as the reducing and capping agent. To enhance the dispersal ability of the as-synthesized hydrophobic silver nanoparticles in water, while maintaining their unique properties, a facile phase transfer mechanism has been developed using the biocompatible block co-polymer pluronic F-127. Formation of silver nanoparticles is confirmed by X-ray diffraction (XRD), transmission electron microscopy (TEM) and UV-vis spectroscopy. Hydrodynamic size and size distribution are obtained from dynamic light scattering (DLS): 8.2 ± 1.5 nm (σ = 18.3%) for as-synthesized and 31.1 ± 4.5 nm (σ = 14.5%) for phase-transferred silver nanoparticles. Antimicrobial activity of the hydrophilic silver nanoparticles is tested against two Gram-positive (Bacillus megaterium and Staphylococcus aureus) and three Gram-negative (Escherichia coli, Proteus vulgaris and Shigella sonnei) bacteria. Minimum inhibitory concentration (MIC) values obtained in the present study for the tested microorganisms are found to be much better than those reported for commercially available antibacterial agents.

  2. Measurement of hydrodynamic growth near peak velocity in an inertial confinement fusion capsule implosion using a self-radiography technique

    DOE PAGES

    Pickworth, L. A.; Hammel, B. A.; Smalyuk, V. A.; ...

    2016-07-11

    First measurements of hydrodynamic growth near peak implosion velocity in an inertial confinement fusion (ICF) implosion at the National Ignition Facility were obtained using a self-radiographing technique and a preimposed Legendre mode 40, λ = 140 μm, sinusoidal perturbation. These are the first measurements of the total growth at the most unstable mode from acceleration Rayleigh-Taylor achieved in any ICF experiment to date, showing growth of the areal density perturbation of ~7000×. Measurements were made at convergences of ~5 to ~10× at both the waist and pole of the capsule, demonstrating simultaneous measurements of the growth factors from both lines of sight. The areal density growth factors are an order of magnitude larger than prior experimental measurements and differed by ~2× between the waist and the pole, showing asymmetry in the measured growth factors. As a result, these new measurements significantly advance our ability to diagnose perturbations detrimental to ICF implosions, uniquely intersecting the change from an accelerating to decelerating shell, with multiple simultaneous angular views.

  3. Sizing protein-templated gold nanoclusters by time resolved fluorescence anisotropy decay measurements.

    PubMed

    Soleilhac, Antonin; Bertorelle, Franck; Antoine, Rodolphe

    2018-03-15

    Protein-templated gold nanoclusters (AuNCs) are very attractive due to their unique fluorescence properties. A major problem, however, may arise for any future use as in vivo probes, because the protein structure can change upon nucleation of an AuNC within the protein. In this work, we propose a simple and reliable fluorescence-based technique for measuring the hydrodynamic size of protein-templated gold nanoclusters. This technique uses the relation between the time-resolved fluorescence anisotropy decay and the hydrodynamic volume, through the rotational correlation time. We determine the molecular size of protein-directed AuNCs with protein templates of increasing size, e.g. insulin, lysozyme, and bovine serum albumin (BSA). Comparison with sizes obtained by other techniques (e.g. dynamic light scattering and small-angle X-ray scattering), for both bare proteins and proteins containing gold clusters, allows us to address the volume changes induced either by conformational changes (for BSA) or by the formation of protein dimers (for insulin and lysozyme) during cluster formation and incorporation.

  4. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771

  5. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (Tj) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
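
    The linear-combination idea can be sketched with a toy model: Gaussian kernels whose width grows with √T stand in for the true Doppler broadening kernel (the temperatures, widths, and the `kernel` helper are invented for illustration). The L2-optimal coefficients come from a linear least-squares solve:

```python
import numpy as np

# Toy stand-in for the Doppler kernel: a Gaussian whose width grows with
# sqrt(T). (Illustrative only; the real broadening kernel differs.)
def kernel(x, T):
    w = 0.01 * np.sqrt(T)
    return np.exp(-0.5 * (x / w) ** 2) / (w * np.sqrt(2.0 * np.pi))

x = np.linspace(-1.0, 1.0, 2001)
T_refs = [300.0, 1200.0, 3000.0]                 # reference temperatures (K)
A = np.stack([kernel(x, Tj) for Tj in T_refs], axis=1)

b = kernel(x, 900.0)                             # kernel at target temperature
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # L2-optimal combination

# The same coefficients then interpolate any kernel-broadened quantity,
# e.g. sigma(900 K) ~ sum_j coeffs[j] * sigma(T_refs[j]).
recon = A @ coeffs
```

    By L2 optimality, the reconstruction is at least as close to the target kernel as any single reference kernel.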

  6. Interpolated twitches in fatiguing single mouse muscle fibres: implications for the assessment of central fatigue

    PubMed Central

    Place, Nicolas; Yamada, Takashi; Bruton, Joseph D; Westerblad, Håkan

    2008-01-01

    An electrically evoked twitch during a maximal voluntary contraction (twitch interpolation) is frequently used to assess central fatigue. In this study we used intact single muscle fibres to determine if intramuscular mechanisms could affect the force increase with the twitch interpolation technique. Intact single fibres from flexor digitorum brevis of NMRI mice were dissected and mounted in a chamber equipped with a force transducer. Free myoplasmic [Ca2+] ([Ca2+]i) was measured with the fluorescent Ca2+ indicator indo-1. Seven fibres were fatigued with repeated 70 Hz tetani until 40% initial force with an interpolated pulse evoked every fifth tetanus. Results showed that the force generated by the interpolated twitch increased throughout fatigue, being 9 ± 1% of tetanic force at the start and 19 ± 1% at the end (P < 0.001). This was not due to a larger increase in [Ca2+]i induced by the interpolated twitch during fatigue but rather to the fact that the force–[Ca2+]i relationship is sigmoidal and fibres entered a steeper part of the relationship during fatigue. In another set of experiments, we observed that repeated tetani evoked at 150 Hz resulted in more rapid fatigue development than at 70 Hz and there was a decrease in force (‘sag’) during contractions, which was not observed at 70 Hz. In conclusion, the extent of central fatigue is difficult to assess and it may be overestimated when using the twitch interpolation technique. PMID:18403421
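
    The sigmoidal force-[Ca2+]i argument can be made concrete with a Hill-type curve (a sketch; `rel_force`, ca50 and n are illustrative values, not parameters fitted to the fibre data):

```python
# Hill-type force-[Ca2+] relation; ca50 and n are illustrative values,
# not parameters fitted to the mouse fibre data.
def rel_force(ca, ca50=1.0, n=4.0):
    return ca ** n / (ca50 ** n + ca ** n)

d_ca = 0.1  # the same Ca2+ increment produced by an interpolated pulse
gain_rested = rel_force(2.0 + d_ca) - rel_force(2.0)    # plateau of the curve
gain_fatigued = rel_force(1.0 + d_ca) - rel_force(1.0)  # steep part
```

    On the steep part of the curve the identical Ca2+ increment yields a several-fold larger force increase, mirroring the rise of the interpolated-twitch force from 9% to 19% of tetanic force during fatigue.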

  7. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
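
    A minimal comparison in the spirit of the article (degree 3 on [0, 1]; the choice of equally spaced nodes is an assumption):

```python
import numpy as np

# Degree-3 interpolating polynomial for exp on [0, 1] (4 equally spaced
# nodes) versus the degree-3 Taylor polynomial centred at 0.
nodes = np.linspace(0.0, 1.0, 4)
p = np.polyfit(nodes, np.exp(nodes), 3)     # passes through all 4 nodes

x = np.linspace(0.0, 1.0, 1001)
interp_err = np.max(np.abs(np.polyval(p, x) - np.exp(x)))
taylor_err = np.max(np.abs(1 + x + x**2 / 2 + x**3 / 6 - np.exp(x)))
# The interpolant spreads its error across [0, 1]; the Taylor polynomial
# is exact at 0 but its error grows toward x = 1.
```

    The interpolant's maximum error is well over an order of magnitude smaller than the Taylor polynomial's on this interval.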

  8. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
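
    The "simple distance weighting procedure" for the initial guess field could look like Shepard inverse-distance weighting; a generic sketch (the report's exact weighting scheme is not specified here, and the `idw` helper is hypothetical):

```python
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0):
    """Shepard inverse-distance weighting of scattered data to query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # guard against zero distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # station locations
vals = np.array([10.0, 20.0, 30.0])                   # observed values
grid = np.array([[0.5, 0.5], [0.0, 0.0]])             # query points
est = idw(obs, vals, grid)
```

    The estimate reproduces an observation when queried at its location and is always a convex combination of the data, so it stays within the observed range.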

  9. Novel Digital Signal Processing and Detection Techniques.

    DTIC Science & Technology

    1980-09-01

    ...decimation and interpolation [11, 12]. Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University. ...on the use of recursive filters for decimation and interpolation. ...a filter structure for realizing low-pass filters is developed [6, 7]. By employing decimation and interpolation, the filter uses only the coefficients 0, +1, and ...
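
    As a generic illustration of interpolation in multirate signal processing (a sketch only, not the recursive filter structure the report develops): upsampling by a factor of 2 via zero-stuffing followed by the triangular FIR kernel [0.5, 1, 0.5], which performs linear interpolation between samples.

```python
import numpy as np

# Upsample by 2: insert zeros, then filter with the triangular kernel
# [0.5, 1, 0.5], which linearly interpolates between the original samples.
x = np.array([1.0, 2.0, 4.0, 8.0])
up = np.zeros(2 * len(x))
up[::2] = x
y = np.convolve(up, [0.5, 1.0, 0.5])[1:-1]
# y = [1, 1.5, 2, 3, 4, 6, 8, 4]: originals at even indices, their
# midpoints at odd indices (the trailing 4 is an edge effect).
```

    Coefficient sets as simple as {0, ±1/2, 1} are what make decimation/interpolation structures attractive for cheap hardware, the theme of the report.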

  10. Dynamics of primary and secondary microbubbles created by laser-induced breakdown of an optically trapped nanoparticle

    PubMed Central

    Arita, Y.; Antkowiak, M.; Venugopalan, V.; Gunn-Moore, F. J.; Dholakia, K.

    2012-01-01

    Laser-induced breakdown of an optically trapped nanoparticle is a unique system for studying cavitation dynamics. It offers additional degrees of freedom, namely the nanoparticle material, its size, and the relative position between the laser focus and the center of the optically trapped nanoparticle. We quantify the spatial and temporal dynamics of the cavitation and secondary bubbles created in this system and use hydrodynamic modeling to quantify the observed dynamic shear stress of the expanding bubble. In the final stage of bubble collapse, we visualize the formation of multiple submicrometer secondary bubbles around the toroidal bubble on the substrate. We show that the pattern of the secondary bubbles typically has its circular symmetry broken along an axis whose unique angle rotates over time, a result of the vorticity of the jet directed towards the boundary during bubble collapse near a solid surface. PMID:22400669

  11. Oscillation of the velvet worm slime jet by passive hydrodynamic instability

    PubMed Central

    Concha, Andrés; Mellado, Paula; Morera-Brenes, Bernal; Sampaio Costa, Cristiano; Mahadevan, L; Monge-Nájera, Julián

    2015-01-01

    The rapid squirt of a proteinaceous slime jet endows velvet worms (Onychophora) with a unique mechanism for defence from predators and for capturing prey by entangling them in a disordered web that immobilizes their target. However, to date, neither qualitative nor quantitative descriptions have been provided for this unique adaptation. Here we investigate the fast oscillatory motion of the oral papillae and the exiting liquid jet that oscillates with frequencies f~30–60 Hz. Using anatomical images, high-speed videography, theoretical analysis and a physical simulacrum, we show that this fast oscillatory motion is the result of an elastohydrodynamic instability driven by the interplay between the elasticity of oral papillae and the fast unsteady flow during squirting. Our results demonstrate how passive strategies can be cleverly harnessed by organisms, while suggesting future oscillating microfluidic devices, as well as novel ways for micro and nanofibre production using bioinspired strategies. PMID:25780995

  12. Aquaporin-4 Functionality and Virchow-Robin Space Water Dynamics: Physiological Model for Neurovascular Coupling and Glymphatic Flow

    PubMed Central

    Kwee, Ingrid L.

    2017-01-01

    The unique properties of brain capillary endothelium, critical in maintaining the blood-brain barrier (BBB) and restricting water permeability across the BBB, have important consequences for fluid hydrodynamics inside the BBB that have heretofore been inadequately recognized. Recent studies indicate that the mechanisms underlying brain water dynamics are distinct from systemic tissue water dynamics. Hydrostatic pressure created by the systolic force of the heart, essential for interstitial circulation and lymphatic flow in systemic circulation, is effectively impeded from propagating into the interstitial fluid inside the BBB by the tightly sealed endothelium of brain capillaries. Instead, fluid dynamics inside the BBB is realized by aquaporin-4 (AQP-4), the water channel that connects astrocyte cytoplasm and extracellular (interstitial) fluid. Brain interstitial fluid dynamics, and therefore AQP-4, are now recognized as essential for two unique functions, namely, neurovascular coupling and glymphatic flow, the brain equivalent of systemic lymphatics. PMID:28820467

  13. Linear Water Waves

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to construct examples of non-uniqueness, usually referred to as 'trapped modes'.

  14. Aquaporin-4 Functionality and Virchow-Robin Space Water Dynamics: Physiological Model for Neurovascular Coupling and Glymphatic Flow.

    PubMed

    Nakada, Tsutomu; Kwee, Ingrid L; Igarashi, Hironaka; Suzuki, Yuji

    2017-08-18

    The unique properties of brain capillary endothelium, critical in maintaining the blood-brain barrier (BBB) and restricting water permeability across the BBB, have important consequences for fluid hydrodynamics inside the BBB that have heretofore been inadequately recognized. Recent studies indicate that the mechanisms underlying brain water dynamics are distinct from systemic tissue water dynamics. Hydrostatic pressure created by the systolic force of the heart, essential for interstitial circulation and lymphatic flow in systemic circulation, is effectively impeded from propagating into the interstitial fluid inside the BBB by the tightly sealed endothelium of brain capillaries. Instead, fluid dynamics inside the BBB is realized by aquaporin-4 (AQP-4), the water channel that connects astrocyte cytoplasm and extracellular (interstitial) fluid. Brain interstitial fluid dynamics, and therefore AQP-4, are now recognized as essential for two unique functions, namely, neurovascular coupling and glymphatic flow, the brain equivalent of systemic lymphatics.

  15. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyse a model system whose behaviour depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is addressed by building a maximum-entropy model; adding non-negativity together with first- and second-order moment constraints resolves it. The correctness and accuracy of the method were validated with a genetic algorithm implemented in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the maximum-entropy weights scale consistently with the spatial configuration and that the accuracy is superior to the latter two methods.

  16. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) The problem of interpolating data that is not sampled on a uniform grid, that is noisy, and that contains gaps is difficult. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy, but shows some artifacts in shaded-relief images. (b) Simplicial interpolator - uses the plane through the three points surrounding the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and performs a weighted least-squares surface fit.

  17. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be evaluated at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant - the Wendland kernel - which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF based measures separate Alzheimer's disease patients from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
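
    A 1-D sketch of interpolation with the compactly supported Wendland C2 kernel (the paper's setting is 3-D velocity fields; the sample function, number of centers, and support radius 0.6 are arbitrary choices for illustration):

```python
import numpy as np

def wendland_c2(r):
    # Compactly supported Wendland C2 kernel (zero for r >= 1),
    # positive definite in dimensions up to 3.
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

# Interpolate samples of sin(2*pi*x) with Wendland RBFs.
centers = np.linspace(0.0, 1.0, 8)
f = np.sin(2.0 * np.pi * centers)
G = wendland_c2(np.abs(centers[:, None] - centers[None, :]) / 0.6)
alpha = np.linalg.solve(G, f)            # kernel weights

def interp(x):
    x = np.asarray(x, dtype=float)
    return wendland_c2(np.abs(x[:, None] - centers[None, :]) / 0.6) @ alpha
```

    Positive definiteness of the kernel guarantees the Gram matrix G is invertible, so the interpolant reproduces the data at the centers exactly; compact support keeps G sparse in larger problems.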

  18. Exact finite elements for conduction and convection

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Dechaumphai, P.; Tamma, K. K.

    1981-01-01

    An approach for developing exact one dimensional conduction-convection finite elements is presented. Exact interpolation functions are derived based on solutions to the governing differential equations by employing a nodeless parameter. Exact interpolation functions are presented for combined heat transfer in several solids of different shapes, and for combined heat transfer in a flow passage. Numerical results demonstrate that exact one dimensional elements offer advantages over elements based on approximate interpolation functions.

  19. Color characterization of cine film

    NASA Astrophysics Data System (ADS)

    Noriega, Leonardo; Morovic, Jan; MacDonald, Lindsay W.; Lempp, Wolfgang

    2002-06-01

    This paper describes the characterization of cine film, by identifying the relationship between the Status A density values of positive print film and the XYZ values of conventional colorimetry. Several approaches are tried including least-squares modeling, tetrahedral interpolation, and distance weighted interpolation. The distance weighted technique has been improved by the use of the Mahalanobis distance metric in order to perform the interpolation, and this is presented as an innovation.

  20. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze its architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.

  1. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    PubMed

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  2. Interpolation by fast Wigner transform for rapid calculations of magnetic resonance spectra from powders.

    PubMed

    Stevensson, Baltzar; Edén, Mattias

    2011-03-28

    We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations, which are then exploited to determine the spectral frequencies and amplitudes of a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to using ASG alone (besides greatly extending its scope of application), and by one to two orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled (13)C systems.

  3. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
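
    The whole-grid-point shift step can be sketched with the convolution theorem (a sketch of the idea only: the dose kernel here is a generic Gaussian, not the Iodine-125 TG-43 kernel, and the third-order Lagrange filter for the remaining fractional shift is omitted):

```python
import numpy as np

# Shift a sampled kernel onto a grid point by convolving with a unit
# impulse, performed as a pointwise product in the Fourier domain.
n = 64
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)  # toy 1-D kernel

shift = 5                     # whole-grid-point seed offset
impulse = np.zeros(n)
impulse[shift] = 1.0
shifted = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(impulse)).real
# The FFT product implements circular convolution, so this is exactly
# np.roll(kernel, shift); per-seed cost is independent of seed count
# once the kernel transform is cached.
```

    In the 3-D dose-calculation setting the same product is applied per seed in the frequency domain, which is what keeps the calculation time independent of the number of seeds.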

  4. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
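The L2-optimal linear combination can be illustrated with a minimal sketch. The Gaussian stand-in kernels below are purely illustrative (the actual Doppler broadening kernels are more involved); only the least-squares structure, i.e. solving a linear system for the coefficients a_j, mirrors the method:

```python
import numpy as np

def combo_coefficients(ref_kernels, target_kernel):
    """L2-optimal coefficients a_j such that sum_j a_j * K(T_j) ~ K(T),
    obtained by solving a linear least-squares system."""
    A = np.stack(ref_kernels, axis=1)  # columns are the reference kernels
    coeffs, *_ = np.linalg.lstsq(A, target_kernel, rcond=None)
    return coeffs

# Illustrative stand-in kernels: Gaussians whose variance plays the role of T.
x = np.linspace(-5.0, 5.0, 401)

def kernel(T):
    return np.exp(-x**2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)

refs = [kernel(T) for T in (0.5, 1.0, 2.0, 4.0)]
target = kernel(1.3)
a = combo_coefficients(refs, target)
approx = np.stack(refs, axis=1) @ a
```

When the target temperature coincides with one of the reference temperatures, the reconstruction is exact, since the target kernel then lies in the span of the reference kernels.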

  5. Interpolation of diffusion weighted imaging datasets.

    PubMed

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R

    2014-12-01

Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. In clinical settings, limited scan time compromises the ability to achieve high image resolution for finer anatomical details and sufficient signal-to-noise ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation, we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid tractography and microstructural mapping of tissue compartments. Copyright © 2014. Published by Elsevier Inc.

  6. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  7. Contour interpolation: A case study in Modularity of Mind.

    PubMed

    Keane, Brian P

    2018-05-01

    In his monograph Modularity of Mind (1983), philosopher Jerry Fodor argued that mental architecture can be partly decomposed into computational organs termed modules, which were characterized as having nine co-occurring features such as automaticity, domain specificity, and informational encapsulation. Do modules exist? Debates thus far have been framed very generally with few, if any, detailed case studies. The topic is important because it has direct implications on current debates in cognitive science and because it potentially provides a viable framework from which to further understand and make hypotheses about the mind's structure and function. Here, the case is made for the modularity of contour interpolation, which is a perceptual process that represents non-visible edges on the basis of how surrounding visible edges are spatiotemporally configured. There is substantial evidence that interpolation is domain specific, mandatory, fast, and developmentally well-sequenced; that it produces representationally impoverished outputs; that it relies upon a relatively fixed neural architecture that can be selectively impaired; that it is encapsulated from belief and expectation; and that its inner workings cannot be fathomed through conscious introspection. Upon differentiating contour interpolation from a higher-order contour representational ability ("contour abstraction") and upon accommodating seemingly inconsistent experimental results, it is argued that interpolation is modular to the extent that the initiating conditions for interpolation are strong. As interpolated contours become more salient, the modularity features emerge. The empirical data, taken as a whole, show that at least certain parts of the mind are modularly organized. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Interpolating precipitation and its relation to runoff and non-point source pollution.

    PubMed

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen polygon method, the traditional inverse distance method, and a modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated by any interpolation method will be quite close to the actual precipitation. When rainfall is heavy at high elevations, the rainfall changes with elevation; in this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield the rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the correlation between the relative error of the predicted runoff and that of the predicted SS pollutant loading is high. However, pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and that of the predicted SS concentration may be unstable.
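An elevation-aware inverse distance weighting can be sketched as follows. The abstract does not give the paper's exact weighting formula, so the blend of horizontal distance and elevation difference below (and the `elev_weight` parameter) is a labeled assumption, meant only to show the general idea:

```python
import numpy as np

def modified_idw(stations, values, target, p=2.0, elev_weight=1.0):
    """Inverse-distance interpolation whose 'distance' combines horizontal
    separation with the elevation difference (hypothetical blend; the paper's
    exact formula is not reproduced here).
    stations: (n, 3) sequence of (x, y, z); target: (x, y, z)."""
    stations = np.asarray(stations, dtype=float)
    values = np.asarray(values, dtype=float)
    dx = stations[:, :2] - np.asarray(target[:2], dtype=float)
    horiz = np.hypot(dx[:, 0], dx[:, 1])
    dz = np.abs(stations[:, 2] - target[2])
    dist = np.sqrt(horiz**2 + (elev_weight * dz) ** 2)
    if np.any(dist == 0):  # target coincides with a station: return its value
        return float(values[dist == 0][0])
    w = 1.0 / dist**p
    return float(np.sum(w * values) / np.sum(w))
```

Setting `elev_weight=0` recovers plain horizontal IDW, which is one way to compare the two schemes on the same data.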

  9. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the natural neighbour concept to enforce nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, the NNRPIM has no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using Radial Point Interpolators, with some differences that modify the method's performance. No polynomial basis is required in the construction of the NNRPIM interpolation functions, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of natural and essential boundary conditions. One aim of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient forward-Euler procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
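Multiquadric RBF point interpolation without a polynomial basis, and its Kronecker delta property, can be sketched in 1D. The shape parameter `c` and the function names are illustrative; the NNRPIM itself additionally uses natural-neighbour integration cells, which are not reproduced here:

```python
import numpy as np

def rbf_interpolator(nodes, values, c=1.0):
    """Multiquadric RBF point interpolation. Solving the collocation system
    makes the interpolant pass exactly through the nodal values (the
    Kronecker delta property noted in the abstract)."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(nodes[:, None] - nodes[None, :])  # pairwise 1D distances
    A = np.sqrt(r**2 + c**2)                     # multiquadric phi(r)
    coeffs = np.linalg.solve(A, np.asarray(values, dtype=float))

    def interp(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        phi = np.sqrt((x[:, None] - nodes[None, :]) ** 2 + c**2)
        return phi @ coeffs

    return interp
```

Because the interpolant reproduces the nodal values exactly, essential boundary conditions can be imposed directly on the nodal unknowns, which is the simplification the abstract refers to.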

  10. Resonance Production in Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Knospe, Anders G.

    2018-02-01

Hadronic resonances are unique probes that allow the properties of heavy-ion collisions to be studied. Topics that can be studied include modification of spectral shapes, in-medium energy loss of partons, vector-meson spin alignment, hydrodynamic flow, recombination, strangeness production, and the properties of the hadronic phase. Measurements of resonances in p+p, p+A, and d+A collisions serve as baselines for heavy-ion studies and also permit searches for possible collective effects in these smaller systems. These proceedings present a selection of results related to these topics from experiments at RHIC, the LHC, and other facilities, as well as comparisons to theoretical models.

  11. Experimental model of the role of cracks in the mechanism of explosive eruption of St. Helens-80

    NASA Astrophysics Data System (ADS)

    Kedrinskii, V. K.; Skulkin, A. A.

    2017-07-01

A unique mini-model of explosive volcano eruption through a formed system of cracks is developed. The process of crack formation and development is simulated by the electric explosion of a conductor in a plate of optically transparent organic glass submerged in water. The explosion of a wire aligned with a through-hole in the plate generates shock-wave loading along the plate and forms cracks. The fundamental role of the high-power hydrodynamic flow of a pulsating explosion cavity in wedging open the cracks has been demonstrated.

  12. Suitability of Spatial Interpolation Techniques in Varying Aquifer Systems of a Basaltic Terrain for Monitoring Groundwater Availability

    NASA Astrophysics Data System (ADS)

    Katpatal, Y. B.; Paranjpe, S. V.; Kadu, M. S.

    2017-12-01

Geological formations act as aquifer systems, and variability in the hydrological properties of aquifers controls groundwater occurrence and dynamics. To understand groundwater availability in any terrain, spatial interpolation techniques are widely used. It has been observed that, with varying hydrogeological conditions, even in a geologically homogeneous setting, there are large variations in observed groundwater levels. Hence, the accuracy of groundwater estimation depends on the use of appropriate interpolation techniques. The study area is the Venna Basin of Maharashtra State, India, a basaltic terrain with four different types of basaltic layers laid down horizontally: weathered vesicular basalt, weathered and fractured basalt, highly weathered unclassified basalt, and hard massive basalt. The groundwater levels vary with topography, as the different types of basalt are present at varying depths. Local stratigraphic profiles were generated for the different basaltic terrains. The present study aims to interpolate the groundwater levels within the basin and to check the correlation between the estimated and the observed values. Groundwater levels for 125 observation wells situated in these basaltic terrains over 20 years (1995-2015) were used. The interpolation was carried out in a Geographical Information System (GIS) using ordinary kriging and the Inverse Distance Weighted (IDW) method. A comparative analysis of the interpolated groundwater levels was carried out against the recorded groundwater level dataset, and the results were related to the various basaltic terrains forming the aquifer systems. Mean Error (ME) and Mean Square Error (MSE) were computed and compared. It was observed that the interpolated values from the two methods do not correlate well with each other. The study concludes that in crystalline basaltic terrain, interpolation methods must be verified against changes in the geological profiles.

  13. Potentials Unbounded Below

    NASA Astrophysics Data System (ADS)

    Curtright, Thomas

    2011-04-01

    Continuous interpolates are described for classical dynamical systems defined by discrete time-steps. Functional conjugation methods play a central role in obtaining the interpolations. The interpolates correspond to particle motion in an underlying potential, V. Typically, V has no lower bound and can exhibit switchbacks wherein V changes form when turning points are encountered by the particle. The Beverton-Holt and Skellam models of population dynamics, and particular cases of the logistic map are used to illustrate these features.

  14. Quantum interpolation for high-resolution sensing

    PubMed Central

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-01-01

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy. PMID:28196889

  15. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
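A minimal 1D sketch of the idea, assuming a squared-exponential covariance (the paper's actual covariance model and its integration into registration are not reproduced): the GP posterior mean interpolates the samples, and the posterior covariance diagonal quantifies how uncertain the interpolation is at each resampled point.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query, length=1.0, noise=1e-6):
    """GP regression with a squared-exponential kernel. Returns the posterior
    mean (the interpolated values) and the posterior standard deviation
    (the interpolation uncertainty) at the query points."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_query, x_train)
    Kss = k(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The uncertainty collapses near observed samples and grows with distance from the base grid, which is exactly the location-dependent behaviour the abstract describes.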

  16. Quantum interpolation for high-resolution sensing.

    PubMed

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-02-28

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy.

  17. Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing

    NASA Technical Reports Server (NTRS)

    Schowengerdt, R.; Gray, S.; Park, S. K.

    1984-01-01

Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
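The parametric bicubic interpolator is separable, so its behaviour is captured by the 1D cubic-convolution kernel (Keys' form, which this family of interpolators follows; the function name is illustrative). `alpha=-0.5` is the radiometrically preferred setting identified in the abstract, `alpha=-1` the conventional one:

```python
import numpy as np

def cubic_kernel(s, alpha=-0.5):
    """Parametric cubic-convolution interpolation kernel.
    Piecewise cubic with support |s| < 2; alpha controls the outer lobe."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    m1 = s <= 1.0
    m2 = (s > 1.0) & (s < 2.0)
    out[m1] = (alpha + 2.0) * s[m1] ** 3 - (alpha + 3.0) * s[m1] ** 2 + 1.0
    out[m2] = alpha * (s[m2] ** 3 - 5.0 * s[m2] ** 2 + 8.0 * s[m2] - 4.0)
    return out
```

For any fractional position the four weights sum to one (constants are reproduced exactly), and the 2D bicubic interpolator is the tensor product of this kernel with itself.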

  18. DCT based interpolation filter for motion compensation in HEVC

    NASA Astrophysics Data System (ADS)

    Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin

    2012-10-01

The draft High Efficiency Video Coding (HEVC) standard has a challenging goal: to double coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during the new standard's development. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design in the draft HEVC standard. The coding efficiency improvements over the H.264/AVC interpolation filter are studied and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
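HEVC's interpolation filters are derived from the DCT basis. A minimal sketch of the underlying idea, which is an illustration rather than the standard's actual fixed-point filter taps: expand the samples in a DCT-II basis, then evaluate the reconstruction at a fractional position to obtain an interpolated value.

```python
import numpy as np

def dct_coeffs(x):
    """DCT-II coefficients X_k = sum_m x_m * cos(pi*k*(m+0.5)/n)."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return (np.cos(np.pi * k * (m + 0.5) / n) * x).sum(axis=1)

def dct_eval(x, t):
    """Evaluate the DCT-II reconstruction of samples x at a (possibly
    fractional) position t; at integer t this recovers the samples."""
    n = len(x)
    X = dct_coeffs(x)
    k = np.arange(1, n)
    return X[0] / n + (2.0 / n) * np.sum(X[1:] * np.cos(np.pi * k * (t + 0.5) / n))
```

In the standard this evaluation is folded into short fixed-point FIR filters, one per fractional sample position, rather than computed transform-by-transform as above.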

  19. Assimilation of satellite surface-height anomalies data into a Hybrid Coordinate Ocean Model (HYCOM) over the Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Tanajura, C. A. S.; Lima, L. N.; Belyaev, K. P.

    2015-09-01

Sea-height anomaly data calculated along the tracks of the Jason-1 and Jason-2 satellites are assimilated into the HYCOM hydrodynamic ocean model developed at the University of Miami, USA. We used a known data assimilation method, the ensemble optimal interpolation (EnOI) scheme. In this work, we study the influence of assimilating sea-height anomalies on other variables of the model. The behavior of the time series of the analyzed and predicted model values is compared with a reference calculation (free run), i.e., with the behavior of the model variables without assimilation but under the same initial and boundary conditions. The simulation results are also compared, using objective metrics, with independent observations from moorings of the Pilot Research Array in the Tropical Atlantic (PIRATA) and with data from ARGO floats. The investigations demonstrate that data assimilation under specific conditions results in a significant improvement of the 24-h prediction of the ocean state. The experiments also show that the assimilated fields of the ocean level contain clearly pronounced mesoscale variability; thus they differ quantitatively from the dynamics obtained in the reference experiment.

  20. Time-delayed feedback technique for suppressing instabilities in time-periodic flow

    NASA Astrophysics Data System (ADS)

    Shaabani-Ardali, Léopold; Sipp, Denis; Lesshafft, Lutz

    2017-11-01

A numerical method is presented that makes it possible to compute time-periodic flow states, even in the presence of hydrodynamic instabilities. The method is based on filtering nonharmonic components by way of delayed feedback control, as introduced by Pyragas [Phys. Lett. A 170, 421 (1992), 10.1016/0375-9601(92)90745-8]. Its use in flow problems is demonstrated here for the case of a periodically forced laminar jet, subject to a subharmonic instability that gives rise to vortex pairing. The optimal choice of the filter gain, a free parameter in the stabilization procedure, is investigated in the context of a low-dimensional model problem, and it is shown that this model predicts the filter performance in the high-dimensional flow system well. Vortex pairing in the jet is efficiently suppressed, so that the unstable periodic flow state in response to harmonic forcing is accurately retrieved. The procedure is straightforward to implement inside any standard flow solver. Memory requirements for the delayed feedback control can be significantly reduced by means of time interpolation between checkpoints. Finally, the method is extended to the treatment of periodic problems where the frequency is not known a priori. This procedure is demonstrated for a three-dimensional cubic lid-driven cavity in supercritical conditions.

  1. Lattice Boltzmann simulation of viscoelastic flow past a confined free rotating cylinder

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Zhang, Peijie; Lin, Jianzhong; Ku, Xiaoke; Nie, Deming

    2018-05-01

To study the dynamics of a rigid body immersed in viscoelastic fluid, an Oldroyd-B fluid flow past an eccentrically situated, freely rotating cylinder in a two-dimensional (2D) channel is simulated by a novel lattice Boltzmann method. Two distribution functions are employed, one to solve the Navier-Stokes equations and the other the constitutive equation. The unified interpolation bounce-back scheme is adopted to treat the moving curved boundary of the cylinder, and the novel Galilean-invariant momentum exchange method is utilized to obtain the hydrodynamic force and torque exerted on the cylinder. Results show that the center-fixed cylinder rotates in the direction opposite to that of a cylinder immersed in a Newtonian fluid, which generates a centerline-oriented lift force according to the Magnus effect. The cylinder's eccentricity, the flow inertia, and the fluid's elasticity and viscosity affect the rotation of the cylinder in different ways. The cylinder rotates more rapidly when located farther from the centerline, and slows down when it is too close to the wall. The rotation frequency decreases with increasing Reynolds number, and a larger rotation frequency corresponds to a larger Weissenberg number and a smaller viscosity ratio, indicating that fluid elasticity and low solvent viscosity accelerate the flow-induced rotation of the cylinder.

  2. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    NASA Astrophysics Data System (ADS)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to the combined effects of tides, surges and river discharges. Cork City on Ireland's southwest coast is the study case. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells, combined with a tailored adaptive interpolation technique, creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries of the nested domain, and therefore flexibility in model application. Through dynamic downscaling, the nested MSN_Flood model achieves significant improvements in the accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides the full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.

  3. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de

We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. The neutrino treatment presented here is therefore ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  4. The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Kochoska, Angela; Prša, Andrej; Horvat, Martin

    2018-01-01

Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions: plane-parallel or spherical atmosphere approximation, local thermodynamic equilibrium (LTE) or non-LTE (NLTE), etc. Nonetheless, they are applied to contact binary atmospheres by populating the surface corresponding to each component separately, neglecting any mixing that would typically occur at the contact boundary. In addition, single-star atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single-star atmosphere tables, we have developed a generalized radiative transfer code for computing the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which is then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere, and it can be used with any existing or new model of the structure of contact binaries. We present results for several test objects and future prospects for implementation in state-of-the-art binary star modeling software.

  5. Hydrodynamic trapping for rapid assembly and in situ electrical characterization of droplet interface bilayer arrays

    DOE PAGES

    Nguyen, Mary-Anne; Srijanto, Bernadeta; Collier, C. Patrick; ...

    2016-08-02

    The droplet interface bilayer (DIB) is a modular technique for assembling planar lipid membranes between water droplets in oil. The DIB method thus provides a unique capability for developing digital, droplet-based membrane platforms for rapid membrane characterization, drug screening and ion channel recordings. This paper demonstrates a new, low-volume microfluidic system that automates droplet generation, sorting, and sequential trapping in designated locations to enable the rapid assembly of arrays of DIBs. The channel layout of the device is guided by an equivalent circuit model, which predicts that a serial arrangement of hydrodynamic DIB traps enables sequential droplet placement and minimizes the hydrodynamic pressure developed across filled traps to prevent squeeze-through of trapped droplets. Furthermore, the incorporation of thin-film electrodes fabricated via evaporation metal deposition onto the glass substrate beneath the channels allows for the first time in situ, simultaneous electrical interrogation of multiple DIBs within a sealed device. Combining electrical measurements with imaging enables measurements of membrane capacitance and resistance and bilayer area, and our data show that DIBs formed in different trap locations within the device exhibit similar sizes and transport properties. Simultaneous, single channel recordings of ion channel gating in multiple membranes are obtained when alamethicin peptides are incorporated into the captured droplets, qualifying the thin-film electrodes as a means for measuring stimuli-responsive functions of membrane-bound biomolecules. Furthermore, this novel microfluidic-electrophysiology platform provides a reproducible, high throughput method for performing electrical measurements to study transmembrane proteins and biomembranes in low-volume, droplet-based membranes.
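The equivalent-circuit reasoning above can be sketched numerically by treating each channel segment as a hydraulic resistor. All geometry, the oil viscosity, and the flow rate below are hypothetical placeholders for illustration, not values from the paper:

```python
# Minimal equivalent-circuit sketch for serial hydrodynamic droplet traps.
# Every number here (viscosity, dimensions, flow rate) is an assumed value.
MU = 0.92e-3  # oil viscosity, Pa*s (assumed)

def hydraulic_resistance(length, width, height):
    """Approximate resistance of a rectangular microchannel (height < width)."""
    return 12 * MU * length / (width * height**3 * (1 - 0.63 * height / width))

# Once a trap is filled, flow is diverted through its bypass channel; the
# pressure developed across the filled trap is then roughly Q * R_bypass,
# which the layout must keep below the droplet's squeeze-through threshold.
Q = 1e-11  # volumetric flow rate through the branch, m^3/s (assumed)
R_trap = hydraulic_resistance(200e-6, 100e-6, 50e-6)
R_bypass = hydraulic_resistance(800e-6, 100e-6, 50e-6)
dP_filled = Q * R_bypass  # Pa, pressure across a filled trap
```

With these placeholder numbers the pressure across a filled trap is on the order of tens of pascals; the design goal described above is to keep this small relative to the Laplace pressure holding the droplet in place.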

  6. Hydrodynamic trapping for rapid assembly and in situ electrical characterization of droplet interface bilayer arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Mary-Anne; Srijanto, Bernadeta; Collier, C. Patrick

    The droplet interface bilayer (DIB) is a modular technique for assembling planar lipid membranes between water droplets in oil. The DIB method thus provides a unique capability for developing digital, droplet-based membrane platforms for rapid membrane characterization, drug screening and ion channel recordings. This paper demonstrates a new, low-volume microfluidic system that automates droplet generation, sorting, and sequential trapping in designated locations to enable the rapid assembly of arrays of DIBs. The channel layout of the device is guided by an equivalent circuit model, which predicts that a serial arrangement of hydrodynamic DIB traps enables sequential droplet placement and minimizes the hydrodynamic pressure developed across filled traps to prevent squeeze-through of trapped droplets. Furthermore, the incorporation of thin-film electrodes fabricated via evaporation metal deposition onto the glass substrate beneath the channels allows for the first time in situ, simultaneous electrical interrogation of multiple DIBs within a sealed device. Combining electrical measurements with imaging enables measurements of membrane capacitance and resistance and bilayer area, and our data show that DIBs formed in different trap locations within the device exhibit similar sizes and transport properties. Simultaneous, single channel recordings of ion channel gating in multiple membranes are obtained when alamethicin peptides are incorporated into the captured droplets, qualifying the thin-film electrodes as a means for measuring stimuli-responsive functions of membrane-bound biomolecules. Furthermore, this novel microfluidic-electrophysiology platform provides a reproducible, high throughput method for performing electrical measurements to study transmembrane proteins and biomembranes in low-volume, droplet-based membranes.

  7. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).
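The diameter-to-wavelength criterion quoted above is straightforward to check for a given sea state. A minimal sketch, assuming a hypothetical 6 m monopile in 30 m of water (illustrative values, not from the report) and solving the linear dispersion relation for the wavelength:

```python
import math

# Check the strip-theory validity rule of thumb: diameter / wavelength < 0.2.
# The monopile diameter, wave period, and depth below are assumed values.
g = 9.81  # m/s^2

def wavelength(period, depth):
    """Solve the linear dispersion relation w^2 = g*k*tanh(k*depth) for k."""
    omega = 2 * math.pi / period
    k = omega**2 / g               # deep-water initial guess
    for _ in range(100):           # fixed-point iteration, converges here
        k = omega**2 / (g * math.tanh(k * depth))
    return 2 * math.pi / k

diameter = 6.0                         # m, assumed monopile diameter
lam = wavelength(period=10.0, depth=30.0)
st_valid = diameter / lam < 0.2        # first-order strip theory plausible?
```

For these values the wavelength is on the order of 140 m, so the first-order criterion is comfortably satisfied; per the study above, the second-order loads require a much smaller ratio still.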

  8. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy

    2016-07-01

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  9. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1993-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion that results from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred through the geometry and variable interpolations used by previous Lagrangian methods. The present method is general and capable of treating subsonic as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing and a large time step throughout. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.

  10. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    NASA Astrophysics Data System (ADS)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increasing attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation, while NN gave the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. 
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.
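The IDW scheme compared above, together with the leave-one-out style of cross-validation used to rank the methods, can be sketched in a few lines. The gauge coordinates and rainfall values below are synthetic, purely for illustration:

```python
import numpy as np

# Inverse-distance-weighting (IDW) interpolation with leave-one-out
# cross-validation on synthetic "gauge" data (coordinates in km).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(20, 2))              # gauge locations
rain = 5 + 0.1 * xy[:, 0] + rng.normal(0, 1, 20)    # synthetic rainfall

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Weight each observation by inverse distance to the target point."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):                # exact hit: return the observed value
        return float(z_obs[np.argmin(d)])
    w = d**-power
    return float(np.sum(w * z_obs) / np.sum(w))

# Leave-one-out: predict each gauge from all the others, then score RMSE.
preds = np.array([idw(np.delete(xy, i, 0), np.delete(rain, i), xy[i])
                  for i in range(len(rain))])
rmse = float(np.sqrt(np.mean((preds - rain) ** 2)))
```

The same loop, re-run with a kriging or nearest-neighbor predictor in place of `idw`, reproduces the kind of method comparison described in the abstract.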

  11. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    NASA Astrophysics Data System (ADS)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2018-01-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increasing attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation, while NN gave the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. 
The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.

  12. On the interpolation of volumetric water content in research catchments

    NASA Astrophysics Data System (ADS)

    Dlamini, Phesheya; Chaplot, Vincent

    Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation process of data points affect the prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), the DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for estimation of θg and ρb. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). Approach 1 was found to underestimate θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and improved the prediction accuracy by only 1.3% compared to approach 1. Such a benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected, considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. 
In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 seems promising and could be further tested for DSM of other secondary variables.
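The two approaches compared above generally give different answers because interpolation and multiplication do not commute. A minimal 1-D sketch with invented transect values (linear interpolation standing in for IDW/kriging) makes the difference concrete:

```python
import numpy as np

# Approach 1: interpolate theta_v = theta_g * rho_b directly.
# Approach 2: interpolate theta_g and rho_b separately, then multiply.
# Positions and soil values below are illustrative, not catchment data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])             # sample positions
theta_g = np.array([12.0, 14.0, 11.0, 15.0, 13.0])  # gravimetric water, %
rho_b = np.array([1.10, 1.25, 1.05, 1.30, 1.15])    # bulk density, g/cm^3
theta_v = theta_g * rho_b                           # volumetric water, %

x_new = 1.5
v1 = np.interp(x_new, x, theta_v)                                # approach 1
v2 = np.interp(x_new, x, theta_g) * np.interp(x_new, x, rho_b)   # approach 2
```

Here approach 1 gives 14.525 and approach 2 gives 14.375: the product of interpolants differs from the interpolant of the product wherever θg and ρb vary together, which is exactly the bias effect the study quantifies.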

  13. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighting, and regularized and tension splines) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the splines. For days with extreme or high rainfall—spatially and quantitatively—the correlation between observed and interpolated estimates appeared to be high (r2 ~ 0.6, RMSE ~ 10 mm), although for low rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity of the observed rainfall along with its spatial extent, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis. 
Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.

  14. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information on precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic predictions of P and ET, especially if they are required at high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitation events to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes yield reasonable results at any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of the filtered data, so that any output resolution for the fluxes is sound. Since the computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
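The three schemes compared above differ only in how mass is reconstructed between the "significant" changes that survive the adaptive threshold. A sketch on a handful of invented significant points (times in minutes, masses in kg; not lysimeter data from the study):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Step, linear, and cubic-spline reconstruction between a few "significant"
# lysimeter mass changes.  Times and masses below are invented.
t_sig = np.array([0.0, 10.0, 20.0, 30.0])      # significant-change times
m_sig = np.array([100.0, 100.2, 99.9, 100.5])  # masses at those times

t = np.linspace(0, 30, 61)                     # dense 0.5 min output grid
step = m_sig[np.searchsorted(t_sig, t, side="right") - 1]  # hold last value
linear = np.interp(t, t_sig, m_sig)                        # piecewise linear
spline = CubicSpline(t_sig, m_sig)(t)                      # smooth, C2
```

Differentiating `step` gives flux only in bursts at the significant points (hence its failure at sub-daily resolution), while `linear` yields piecewise-constant rates and `spline` yields continuously differentiable rates at any output resolution, matching the paper's ranking.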

  15. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    NASA Astrophysics Data System (ADS)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
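The RBF idea above can be sketched compactly: place one Gaussian basis function per library star in normalized parameter space, solve a linear system for the weights, and evaluate the network at new parameters. The parameter grid and "spectra" below are random toy placeholders, not a real library such as MILES or ELODIE:

```python
import numpy as np

# Toy RBF-network interpolation of "spectra" over (Teff, log g).
rng = np.random.default_rng(1)
params = rng.uniform([3500, 1.0], [7500, 5.0], size=(30, 2))  # (Teff, log g)
spectra = rng.uniform(0.5, 1.5, size=(30, 200))               # toy fluxes

def rbf_fit(x, y, eps=2.0):
    """Fit Gaussian-RBF weights on normalized inputs (one center per star)."""
    xn = (x - x.mean(0)) / x.std(0)
    phi = np.exp(-eps * ((xn[:, None, :] - xn[None, :, :]) ** 2).sum(-1))
    return xn, np.linalg.solve(phi, y), x.mean(0), x.std(0)

def rbf_eval(model, x_new, eps=2.0):
    """Evaluate the fitted network at new stellar parameters."""
    xn, w, mu, sd = model
    q = (np.atleast_2d(x_new) - mu) / sd
    phi = np.exp(-eps * ((q[:, None, :] - xn[None, :, :]) ** 2).sum(-1))
    return phi @ w

model = rbf_fit(params, spectra)
interp_spec = rbf_eval(model, [5777.0, 4.44])   # Sun-like parameters
```

Because there is one center per data point, the network reproduces the library spectra at the library parameters exactly (up to round-off), which is the interpolation property the EPS application relies on.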

  16. An interpolated activity during the knowledge-of-results delay interval eliminates the learning advantages of self-controlled feedback schedules.

    PubMed

    Carter, Michael J; Ste-Marie, Diane M

    2017-03-01

    The learning advantages of self-controlled knowledge-of-results (KR) schedules compared to yoked schedules have been linked to the optimization of the informational value of the KR received for the enhancement of one's error-detection capabilities. This suggests that information-processing activities that occur after motor execution, but prior to receiving KR (i.e., during the KR-delay interval), may underlie self-controlled KR learning advantages. The present experiment investigated whether self-controlled KR learning benefits would be eliminated if an interpolated activity was performed during the KR-delay interval. Participants practiced a waveform-matching task that required two rapid elbow extension-flexion reversals in one of four groups defined by a factorial combination of choice (self-controlled, yoked) and KR-delay interval (empty, interpolated). The waveform had specific spatial and temporal constraints, and an overall movement time goal. The results indicated that the self-controlled + empty group had superior retention and transfer scores compared to all other groups. Moreover, the self-controlled + interpolated and yoked + interpolated groups did not differ significantly in retention and transfer; thus, the interpolated activity eliminated the typically found learning benefits of self-controlled KR. No significant differences were found between the two yoked groups. We suggest the interpolated activity interfered with information-processing activities specific to self-controlled KR conditions that occur during the KR-delay interval, and that these activities are vital for reaping the associated learning benefits. These findings add to the growing evidence that challenges the motivational account of self-controlled KR learning advantages and instead highlights informational factors associated with the KR-delay interval as an important variable for motor learning under self-controlled KR schedules.

  17. Exact finite elements for conduction and convection

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Dechaumphai, P.; Tamma, K. K.

    1981-01-01

    An approach for developing exact one-dimensional conduction-convection finite elements is presented. Exact interpolation functions are derived based on solutions to the governing differential equations by employing a nodeless parameter. Exact interpolation functions are presented for combined heat transfer in several solids of different shapes, and for combined heat transfer in a flow passage. Numerical results demonstrate that exact one-dimensional elements offer advantages over elements based on approximate interpolation functions. Previously announced in STAR as N81-31507.
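As a concrete illustration of exact interpolation functions of this kind: for steady one-dimensional conduction-convection the element solution space is spanned by a constant and an exponential, so the exact shape functions over an element of length L take the form below. This is the standard textbook construction, shown here as a sketch; it is not necessarily the specific nodeless-parameter form used in the report:

```latex
% Steady 1D conduction-convection on an element of length L:
%   u \, \frac{dT}{dx} = \alpha \, \frac{d^2T}{dx^2}, \qquad Pe = \frac{uL}{\alpha}
N_1(x) = \frac{e^{Pe} - e^{Pe\,x/L}}{e^{Pe} - 1}, \qquad
N_2(x) = \frac{e^{Pe\,x/L} - 1}{e^{Pe} - 1}, \qquad
T(x) \approx N_1(x)\,T_1 + N_2(x)\,T_2 .
```

These functions satisfy N1(0) = 1, N1(L) = 0 (and conversely for N2) and solve the homogeneous governing equation exactly, which is why elements built on them are nodally exact regardless of mesh size.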

  18. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or other uncertainty methods to approximate errors.
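The three-grid procedure mentioned above can be sketched directly: from solutions on coarse, medium, and fine grids with a constant refinement ratio, one estimates the observed order of convergence and an extrapolated "grid-free" value. The three solution values below are invented so that they follow f(h) = f_exact + C*h^p exactly:

```python
import math

# Richardson extrapolation from three systematically refined grids.
# f1/f2/f3 are fine/medium/coarse solutions, invented here for illustration
# (constructed with f_exact = 1.0, p = 2, refinement ratio r = 2).
f1, f2, f3 = 1.0100, 1.0400, 1.1600
r = 2.0                                             # grid refinement ratio

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
f_exact = f1 + (f1 - f2) / (r**p - 1)               # extrapolated estimate
gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1)  # grid convergence index
```

Note that the formula for `p` assumes monotone convergence and a constant ratio `r`; real CFD data rarely cooperate this cleanly, which is precisely why the choice of interpolation scheme between non-nested grids matters for the study above.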

  19. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in image edge structure preservation.
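The core ingredient described above, a locally weighted linear fit with an l2 penalty, has a simple closed form. The 1-D signal, bandwidth, and regularization weight below are illustrative stand-ins (the paper's full method additionally uses the manifold constraint, which is omitted here):

```python
import numpy as np

# Ridge-regularized, locally weighted linear regression: the closed-form
# solve (A^T W A + lam*I)^{-1} A^T W y, with MLS-style Gaussian weights.
rng = np.random.default_rng(2)
x = np.arange(0.0, 10.0)
y = np.sin(x) + rng.normal(0, 0.05, x.size)   # noisy 1-D "pixel" samples

def rllr_predict(x0, x, y, h=0.8, lam=1e-3):
    w = np.exp(-((x - x0) / h) ** 2)          # moving-least-squares weights
    A = np.stack([np.ones_like(x), x - x0], axis=1)   # local linear basis
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A + lam * np.eye(2), A.T @ W @ y)
    return beta[0]                            # intercept = estimate at x0

y_mid = rllr_predict(4.5, x, y)               # interpolate between samples
```

The ridge term `lam` is what keeps the 2x2 solve stable when few samples carry weight, which is the overfitting safeguard the abstract refers to.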

  20. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which are most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
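The statistical framing above (a GP posterior with an explicit sensor-noise term) can be shown in a toy 1-D form. This sketch uses a squared-exponential kernel on a handful of synthetic "pixels"; it does not reproduce the paper's fast O(N^(3/2)) grid algorithm, only the basic posterior formulas:

```python
import numpy as np

# Toy Gaussian-process interpolation with sensor noise in the likelihood.
rng = np.random.default_rng(3)
x = np.arange(0.0, 8.0)                      # "measured" pixel positions
y = np.cos(x) + rng.normal(0, 0.05, x.size)  # noisy pixel values

def sqexp(a, b, length=1.5):
    """Squared-exponential covariance between two sets of positions."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

noise_var = 0.05 ** 2                        # sensor-noise estimate
K = sqexp(x, x) + noise_var * np.eye(x.size)
x_star = np.array([3.5])                     # missing pixel to interpolate
k_star = sqexp(x_star, x)
mean = k_star @ np.linalg.solve(K, y)                 # posterior mean
var = sqexp(x_star, x_star) \
      - k_star @ np.linalg.solve(K, k_star.T)         # posterior variance
```

Folding `noise_var` into `K` is what makes this interpolation also a denoiser: the posterior mean no longer passes exactly through noisy measurements, and `var` quantifies the remaining uncertainty at the interpolated pixel.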

  1. The Effect of Elevation Bias in Interpolated Air Temperature Data Sets on Surface Warming in China During 1951-2015

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin

    2018-02-01

    Although gridded air temperature data sets share much of the same observations, different rates of warming can be detected due to the different approaches employed for considering elevation signatures in the interpolation processes. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming hiatus period in China during 1951-2015. A suspicious cooling trend is found in the raw interpolated air temperature time series in the 1950s, 91% of which can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference and elevation difference as the bulk lapse rate of -5.6°C/km, which tends to be higher (-8.7°C/km) in dry regions but lower (-2.4°C/km) in wet regions. Compared to independent experimental observations, we find that the estimated monthly bulk lapse rates capture the elevation bias well. Significant improvement can be achieved by adjusting the interpolated original temperature time series using the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature in the interpolation of site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
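The bulk-lapse-rate definition above (the regression slope relating temperature differences to elevation differences) is easy to sketch on synthetic stations. The station network below is invented with a true slope of -5.6 C/km, matching the headline value only because we built it in:

```python
import numpy as np

# Estimate a "bulk lapse rate" from temperature vs. elevation differences
# between all station pairs, then use it to correct an elevation mismatch.
rng = np.random.default_rng(4)
elev = rng.uniform(0.0, 4.0, 40)                   # station elevations, km
temp = 20.0 - 5.6 * elev + rng.normal(0, 0.5, 40)  # mean temperatures, C

i, j = np.triu_indices(elev.size, k=1)             # all station pairs
dT, dz = temp[i] - temp[j], elev[i] - elev[j]
bulk_lapse = float(np.polyfit(dz, dT, 1)[0])       # regression slope, C/km

# Adjusting an interpolated temperature for an elevation bias dz_bias (km):
def t_adj(t_interp, dz_bias):
    return t_interp + bulk_lapse * dz_bias
```

The correction step is the essence of the paper's fix: when interpolation implicitly moves a grid cell's effective elevation by `dz_bias`, adding `bulk_lapse * dz_bias` removes the artificial trend that elevation change would otherwise imprint on the series.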

  2. Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record

    USGS Publications Warehouse

    Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.

    2000-01-01

    Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution as input to regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, which use the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduction in frequency of low-pressure centres during their eastward movement across southern Alaska in summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent the low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.
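
    Thin-plate splines of the kind used here are typically built from the radial basis U(r) = r^2 log r plus an affine term; the study uses the smoothing variant, but the exact interpolating form conveys the structure. A minimal 2D sketch in NumPy with illustrative data:

```python
import numpy as np

def tps_kernel(a, b):
    """Thin-plate spline radial basis U(r) = r^2 log r, with U(0) = 0."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    K = np.zeros_like(d)
    nz = d > 0
    K[nz] = d[nz] ** 2 * np.log(d[nz])
    return K

def tps_fit(xy, z):
    """Fit an exact 2D thin-plate spline z = f(x, y) through scattered points."""
    n = len(xy)
    P = np.hstack([np.ones((n, 1)), xy])            # affine part: 1, x, y
    A = np.block([[tps_kernel(xy, xy), P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef[:n], coef[n:]                       # kernel weights, affine coeffs

def tps_eval(xy_train, w, a, xy_new):
    return tps_kernel(xy_new, xy_train) @ w + a[0] + xy_new @ a[1:]

# Stations sampling a planar "climate surface" z = 2x + 3y + 1.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (30, 2))
vals = 2 * pts[:, 0] + 3 * pts[:, 1] + 1
w, a = tps_fit(pts, vals)
```

    In practice the elevation model enters as an extra spline coordinate or covariate; here only the two horizontal coordinates are used, as an assumption for brevity.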

  3. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing segments in the monitoring data will affect data analysis and safety assessment of the structure. Based on long-term monitoring data from the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at different measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the 3 months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The missing-data rate should not exceed 30% for this method. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
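
    The correlation-based restoration described above can be sketched as a simple linear regression from a strongly correlated reference point onto the point with missing data (all signals and numbers below are synthetic illustrations, not the Hangzhou monitoring data):

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(500)
# Simulated stress histories (MPa) at two strongly correlated points,
# both driven by the same daily thermal cycle.
driver = 10.0 * np.sin(2 * np.pi * hours / 24)
p_ref = 50.0 + driver + rng.normal(0.0, 0.5, hours.size)
p_miss = 80.0 + 0.7 * driver + rng.normal(0.0, 0.5, hours.size)

# Knock out a block of readings at the monitored point.
gap = slice(200, 260)
observed = np.ones(hours.size, bool)
observed[gap] = False

# Only interpolate from a strongly correlated point (r >= 0.9).
r = np.corrcoef(p_ref[observed], p_miss[observed])[0, 1]

slope, intercept = np.polyfit(p_ref[observed], p_miss[observed], 1)
restored = slope * p_ref[gap] + intercept
err = np.abs(restored - p_miss[gap]) / np.abs(p_miss[gap])
```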

  4. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  5. Reconstruction of Missing Pixels in Satellite Images Using the Data Interpolating Empirical Orthogonal Function (DINEOF)

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, M.

    2016-02-01

    For coastal and inland waters, spatially complete and frequent satellite measurements are important for monitoring and understanding coastal biological and ecological processes and phenomena, such as diurnal variations. High-frequency images of the water diffuse attenuation coefficient at 490 nm (Kd(490)) derived from the Korean Geostationary Ocean Color Imager (GOCI) provide a unique opportunity to study diurnal variation of water turbidity in coastal regions of the Bohai Sea, Yellow Sea, and East China Sea. However, many pixels are missing in the original GOCI-derived Kd(490) images due to clouds and various other reasons. The Data Interpolating Empirical Orthogonal Function (DINEOF) is a method to reconstruct missing data in geophysical datasets based on Empirical Orthogonal Functions (EOFs). In this study, DINEOF is applied to GOCI-derived Kd(490) data in the Yangtze River mouth and Yellow River mouth regions; the DINEOF-reconstructed Kd(490) data are used to fill in the missing pixels, and the spatial patterns and temporal functions of the first three EOF modes are also used to investigate the sub-diurnal variation due to tidal forcing. In addition, the DINEOF method is applied to data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP) satellite to reconstruct missing pixels in the daily Kd(490) and chlorophyll-a concentration images, and some application examples in the Chesapeake Bay and the Gulf of Mexico will be presented.

  6. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.

  7. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    PubMed

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.
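
    For context, the traditional baseline that such adapted interpolation functions are measured against is the parabolic fit through the matching cost at the integer minimum and its two neighbours; a minimal sketch with a synthetic cost curve:

```python
import numpy as np

def subpixel_parabola(cost, d):
    """Refine integer disparity d by fitting a parabola through the matching
    cost at d-1, d, d+1 (the classic block-matching refinement)."""
    c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
    return d + 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)

d_axis = np.arange(15)
cost = (d_axis - 7.3) ** 2         # cost curve with true minimum at 7.3
d_int = int(np.argmin(cost))
d_sub = subpixel_parabola(cost, d_int)
```

    For a truly quadratic cost the refinement is exact; the paper's point is that real stereo algorithms produce cost curves with other shapes, so the interpolation function should be fit to the algorithm's data instead.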

  8. A New Method for Computed Tomography Angiography (CTA) Imaging via Wavelet Decomposition-Dependented Edge Matching Interpolation.

    PubMed

    Li, Zeyu; Chen, Yimin; Zhao, Yan; Zhu, Lifeng; Lv, Shengqing; Lu, Jiahui

    2016-08-01

    The interpolation technique for computed tomography angiography (CTA) images enables 3D reconstruction and reduces both the examination cost and the radiation dose. However, most image interpolation algorithms cannot achieve both automation and accuracy. This study provides a new edge matching interpolation algorithm based on wavelet decomposition of CTA; it comprises mark, scale and calculation (MSC) steps. Using real clinical image data, the study introduces how to search for the proportional factor and how to use the root-mean-square operator to find a mean value. Furthermore, we re-synthesize the high-frequency and low-frequency parts of the processed image by the inverse wavelet operation to obtain the final interpolated image. MSC can make up for the shortcomings of conventional Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) examinations. The radiation absorbed and the examination time are significantly reduced with the proposed synthesized images. In clinical application, this can help doctors find hidden lesions in time, while patients bear a lower economic burden and absorb less radiation.
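
    As a generic illustration of the decompose/process/re-synthesize step (using a Haar wavelet, not the specific wavelet or MSC operators of the paper), one level of analysis and exact inverse synthesis looks like this:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: low/high frequency subbands."""
    x = np.asarray(x, float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

def haar_idwt(low, high):
    """Inverse transform: re-synthesize the signal from both subbands."""
    x = np.empty(2 * low.size)
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
low, high = haar_dwt(signal)
rebuilt = haar_idwt(low, high)     # perfect reconstruction
```

    In the MSC pipeline the subbands of adjacent slices would be matched and interpolated before the inverse step; here the subbands are passed through unchanged to show the perfect-reconstruction property.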

  9. Fast image interpolation via random forests.

    PubMed

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic-interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
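
    The core FIRF idea, partitioning the patch space and learning one linear LR-to-HR regressor per partition, can be sketched with a toy two-class partition standing in for random-forest leaves (all data and the classifier are invented for illustration; patches are 1D for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the FIRF idea: partition the patch space, then learn one
# linear LR->HR regressor per partition. Here a 2-way split on gradient sign
# replaces the random-forest leaves.
def patch_class(p):
    return int(p[-1] - p[0] > 0)               # rising vs falling patch

A_true = [rng.normal(size=(2, 4)), rng.normal(size=(2, 4))]  # per-class maps
lr = rng.normal(size=(500, 4))                               # LR patches
classes = np.array([patch_class(p) for p in lr])
hr = np.stack([A_true[c] @ p for c, p in zip(classes, lr)])  # HR targets

A_fit = []
for c in (0, 1):
    X, Y = lr[classes == c], hr[classes == c]
    A_c, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares per class
    A_fit.append(A_c.T)

pred = np.stack([A_fit[patch_class(p)] @ p for p in lr])
```

    A random forest simply supplies a much finer, learned partition than this hand-made split, which is what makes the per-leaf linear maps accurate on natural images.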

  10. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

    The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. Then the original ECG is fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline curve fitting is then applied to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient of the presented algorithm reached 0.972.
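
    The baseline-estimation step can be sketched as fitting a natural cubic spline through fiducial points between beats and subtracting it. In the sketch below the fiducial amplitudes are taken from the known synthetic drift for simplicity; in the paper they come from the difference between the original and high-pass-filtered ECG:

```python
import numpy as np

def natural_cubic_spline(xk, yk, x):
    """Evaluate the natural cubic spline through knots (xk, yk) at points x."""
    n = len(xk)
    h = np.diff(xk)
    # Tridiagonal system for the knot second derivatives M (natural ends: M=0).
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((yk[i + 1] - yk[i]) / h[i] - (yk[i] - yk[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)
    i = np.clip(np.searchsorted(xk, x) - 1, 0, n - 2)
    t = x - xk[i]
    b = (yk[i + 1] - yk[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
    return yk[i] + b * t + M[i] * t**2 / 2 + (M[i + 1] - M[i]) * t**3 / (6 * h[i])

# ECG-like signal: slow baseline drift plus sharp beats (all synthetic).
t = np.linspace(0, 10, 2000)
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)
ecg = drift + (np.abs(((t * 1.2) % 1) - 0.5) < 0.02)   # crude beat spikes
fiducial_t = np.arange(0.4, 10, 0.83)                   # between-beat points
fiducial_y = np.interp(fiducial_t, t, drift)            # amplitudes at fiducials
baseline = natural_cubic_spline(fiducial_t, fiducial_y, t)
clean = ecg - baseline
```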

  11. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.

  12. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then bidirectional motion compensation is applied by blending the forward- and backward-compensated predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the frames interpolated using the proposed method is better when compared with the MCFRUC techniques.
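
    Bidirectional motion compensation of the kind described can be sketched in one dimension with a single global integer motion vector (the paper uses dense per-block fields and a more careful blend):

```python
import numpy as np

def mc_interpolate(prev, nxt, mv):
    """Build the in-between frame: shift prev forward and nxt backward by half
    the motion vector each, then average the two predictions."""
    half = mv // 2
    fwd = np.roll(prev, half)
    bwd = np.roll(nxt, -(mv - half))
    return 0.5 * (fwd + bwd)

scene = np.zeros(32)
scene[8:12] = 1.0                  # an object 4 pixels wide
prev = scene
nxt = np.roll(scene, 4)            # object moves 4 pixels per frame
mid = mc_interpolate(prev, nxt, 4)
true_mid = np.roll(scene, 2)       # where the object really is halfway
```

    When the estimated MV matches the true object motion, both predictions agree and the interpolated frame is exact; a wrong MV makes them disagree, which is why TME (tracking true motion) matters more here than redundancy-minimizing MVs.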

  13. Application of Gaussian Elimination to Determine Field Components within Unmeasured Regions in the UCN τ Trap

    NASA Astrophysics Data System (ADS)

    Felkins, Joseph; Holley, Adam

    2017-09-01

    Determining the average lifetime of the neutron gives information about the fundamental parameters of interactions resulting from the charged weak current. It is also an input for calculations of the abundance of light elements in the early cosmos, which are also directly measured. Experimentalists have devised two major approaches to measuring the lifetime of the neutron: the beam experiment and the bottle experiment. For the bottle experiment, I have designed a computational algorithm, based on a numerical technique, that interpolates magnetic field values between measured points. This algorithm produces interpolated fields that satisfy the Maxwell-Heaviside equations, for use in a simulation that will investigate the rate of depolarization in magnetic traps used for bottle experiments, such as the UCN τ experiment at Los Alamos National Lab. I will present how UCN depolarization can cause a systematic error in experiments like UCN τ. I will then describe the technique that I use for the interpolation, and will discuss how the interpolation accuracy changes with the number of measured points and the volume of the interpolated region. Supported by NSF Grant 1553861.
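
    One common way to interpolate field values that satisfy a physical constraint inside an unmeasured region is to require the discrete Laplacian to vanish there and solve the resulting linear system directly, i.e. by Gaussian elimination. A small 2D sketch of that general idea (an illustration, not the UCN τ code):

```python
import numpy as np

def laplace_fill(grid, known):
    """Fill unknown cells so the discrete Laplacian vanishes there, holding
    measured cells fixed; the dense direct solve is Gaussian elimination."""
    unknown = np.argwhere(~known)
    idx = -np.ones(grid.shape, int)
    idx[tuple(unknown.T)] = np.arange(len(unknown))
    A = np.zeros((len(unknown), len(unknown)))
    b = np.zeros(len(unknown))
    for k, (i, j) in enumerate(unknown):
        A[k, k] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if known[i + di, j + dj]:
                b[k] += grid[i + di, j + dj]       # measured neighbour
            else:
                A[k, idx[i + di, j + dj]] = -1.0   # coupled unknown
    out = grid.copy()
    out[tuple(unknown.T)] = np.linalg.solve(A, b)
    return out

# Boundary of a 6x6 region sampled from a field component B = x + 2y
# (harmonic, as a source-free static field component should be).
y, x = np.mgrid[0:6, 0:6].astype(float)
field = x + 2 * y
known = np.ones(field.shape, bool)
known[1:-1, 1:-1] = False                  # interior is unmeasured
grid = np.where(known, field, 0.0)
filled = laplace_fill(grid, known)
```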

  14. Adaptive color demosaicing and false color removal

    NASA Astrophysics Data System (ADS)

    Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria

    2010-04-01

    Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high-quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides the interpolation technique to apply to each pixel, according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are then identified and low-pass filtered to eliminate residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.

  15. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
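
    The bilinear branch used for non-edge pixels can be sketched on the green channel of a Bayer mosaic, averaging the available green neighbours of each missing pixel (the pattern and scene below are illustrative assumptions):

```python
import numpy as np

def interp_green_bilinear(raw, green_mask):
    """Bilinearly interpolate missing green pixels in a Bayer mosaic: each
    missing value is the average of its available 4-connected green
    neighbours (borders simply use whichever neighbours exist)."""
    padded = np.pad(np.where(green_mask, raw, 0.0), 1)
    counts = np.pad(green_mask.astype(float), 1)
    s = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
         padded[1:-1, :-2] + padded[1:-1, 2:])
    n = (counts[:-2, 1:-1] + counts[2:, 1:-1] +
         counts[1:-1, :-2] + counts[1:-1, 2:])
    est = np.divide(s, n, out=np.zeros_like(s), where=n > 0)
    return np.where(green_mask, raw, est)

# Green sits on the checkerboard positions of the Bayer pattern.
h = w = 8
yy, xx = np.mgrid[0:h, 0:w]
green_mask = (yy + xx) % 2 == 0
scene_green = 0.2 + 0.05 * xx              # smooth horizontal ramp
raw = np.where(green_mask, scene_green, 0.0)
green_full = interp_green_bilinear(raw, green_mask)
```

    On a smooth region such as this ramp, plain averaging is exact away from the borders, which is why the edge-oriented filter is only needed where gradients are strong.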

  16. Fluorescent carbon nanoparticles derived from natural materials of mango fruit for bio-imaging probes

    NASA Astrophysics Data System (ADS)

    Jeong, Chan Jin; Roy, Arup Kumer; Kim, Sung Han; Lee, Jung-Eun; Jeong, Ji Hoon; Insik; Park, Sung Young

    2014-11-01

    Water soluble fluorescent carbon nanoparticles (FCP) obtained from a single natural source, mango fruit, were developed as unique materials for non-toxic bio-imaging with different colors and particle sizes. The prepared FCPs showed blue (FCP-B), green (FCP-G) and yellow (FCP-Y) fluorescence, derived by the controlled carbonization method. The FCPs demonstrated hydrodynamic diameters of 5-15 nm, holding great promise for clinical applications. The biocompatible FCPs demonstrated great potential in biological fields through the results of in vitro imaging and in vivo biodistribution. Using intravenously administered FCPs with different colored particles, we precisely defined the clearance and biodistribution, showing rapid and efficient urinary excretion for safe elimination from the body. These findings therefore suggest the promising possibility of using natural sources for producing fluorescent materials.

  17. The response of water quality variation in Poyang Lake (Jiangxi, People's Republic of China) to hydrological changes using historical data and DOM fluorescence.

    PubMed

    Yao, Xin; Wang, Shengrui; Ni, Zhaokui; Jiao, Lixin

    2015-02-01

    Poyang Lake is a unique wetland system that has evolved in response to natural seasonal fluctuations in water levels. To better characterize the response of water quality to hydrological variation, historical data were analyzed in combination with dissolved organic matter (DOM) fluorescence samplings conducted in situ. Historical data showed that long-term changes in water quality are mainly controlled by sewage inputs to Poyang Lake. Monthly changes in water quality recorded between 2008 and 2012 suggest that water level may be the most important factor for water quality within a hydrological year. DOM fluorescence samples were resolved into three humic-like components (C1, C2, and C3) and a protein-like component (C4). The observed compositional changes in DOM fluorescence were considered to be related to hydrodynamic differences controlled by the water regime. Principal component analysis (PCA) showed higher C1 and C2 signals during the normal season than the wet season, whereas C3 was lower, and C4 was higher in the dry season than in the wet or normal seasons. From the open lake to the Yangtze River mouth, increased C3 carried by backflows of the Yangtze River into the lake resulted in the unique variations of PCA factor 2 scores during September. DOM fluorescence could be a proxy for capturing rapid changes in water quality and thereby provide an early warning signal for the quality of the water supply.

  18. Interpolation schemes for peptide rearrangements.

    PubMed

    Bauer, Marianne S; Strodel, Birgit; Fejer, Szilard N; Koslover, Elena F; Wales, David J

    2010-02-07

    A variety of methods (in total seven) comprising different combinations of internal and Cartesian coordinates are tested for interpolation and alignment in connection attempts for polypeptide rearrangements. We consider Cartesian coordinates, the internal coordinates used in CHARMM, and natural internal coordinates, each of which has been interfaced to the OPTIM code and compared with the corresponding results for united-atom force fields. We show that aligning the methylene hydrogens to preserve the sign of a local dihedral angle, rather than minimizing a distance metric, provides significant improvements with respect to connection times and failures. We also demonstrate the superiority of natural coordinate methods in conjunction with internal alignment. Checking the potential energy of the interpolated structures can act as a criterion for the choice of the interpolation coordinate system, which reduces failures and connection times significantly.

  19. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where the data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of the data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK), kriging with external drift (KED), where the spatial pattern of surface velocity is used as a proxy for that of dH/dt, and their spatiotemporal equivalents (ST-OK and ST-KED).

  20. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  1. A hierarchical transition state search algorithm

    NASA Astrophysics Data System (ADS)

    del Campo, Jorge M.; Köster, Andreas M.

    2008-07-01

    A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and in the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes an automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.

  2. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, using a GIS to implement simulations of complex, physically based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every computed cell. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of the flow dynamics, which requires elevation and texture, usually from a remote sensor image. The data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data.
Of several existing methods for estimating slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid-neighbor values. The method was implemented, simulation results were compared to derivatives estimated by a simplified version of the method (which uses only 4 neighbor cells), and the full method proved to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions by applying bilinear interpolation when elevation, slope, and curvature are required at a resolution higher (smaller cell size) than the original, and by using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity nature of the visualization also allows the nearest-neighbor method to be used for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its accuracy is only slightly inferior to that of more computationally expensive methods such as bicubic interpolation and kriging.
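
    The bilinear resampling described in the last sentences can be sketched directly; the cell-center convention and raster layout below are illustrative assumptions, not the GRASS API:

```python
import numpy as np

def bilinear(grid, x, y, cell=1.0):
    """Sample a raster at (x, y) as the distance-weighted average of the four
    nearest cell centers; centers assumed at (col + 0.5, row + 0.5) * cell."""
    gx = x / cell - 0.5
    gy = y / cell - 0.5
    j0, i0 = int(np.floor(gx)), int(np.floor(gy))
    tx, ty = gx - j0, gy - i0
    return ((1 - ty) * ((1 - tx) * grid[i0, j0] + tx * grid[i0, j0 + 1]) +
            ty * ((1 - tx) * grid[i0 + 1, j0] + tx * grid[i0 + 1, j0 + 1]))

# Elevation raster holding z = 2x + 3y at the cell centers.
n = 8
jj, ii = np.meshgrid(np.arange(n), np.arange(n))
elev = 2 * (jj + 0.5) + 3 * (ii + 0.5)
z = bilinear(elev, 3.2, 4.7)   # a linear surface is reproduced exactly
```

    For categorical rasters (base material) the same lookup degenerates to picking the single nearest cell, since averaging category codes has no meaning.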

  3. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Kordy, Michal Adam

The motivation for this work is the forward and inverse problem of magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which a nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-conditioned at low frequencies. In the second paper, we consider hexahedral finite element approximation of the electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. This is done by solving a Poisson equation via a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion uses the 2-norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and apply it to real data collected in the Cascade Mountains. The last paper considers cross-frequency interpolation of the forward response as well as of the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively to minimize the maximum error of interpolation, and two error indicator functions are compared.
We prove a theorem of almost-always-lucky failure for the case of a right-hand side that depends analytically on frequency. The operator's null space is treated by decomposing the solution into the part lying in the null space and the part orthogonal to it.
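The null-space correction in the second paper can be summarized schematically. The splitting below is a generic Helmholtz-type decomposition; the symbols σ (conductivity) and φ_h (correction potential), and the exact form of the Poisson problem, are illustrative assumptions rather than the dissertation's precise discrete operators:

```latex
% Schematic Helmholtz-type splitting of the discrete electric field:
\mathbf{E}_h \;=\; \mathbf{E}_\perp \;+\; \nabla\varphi_h ,
\qquad \nabla\times(\nabla\varphi_h) = 0 .
% The curl-free (null-space) part is pinned down by a Poisson problem:
\nabla\cdot\bigl(\sigma\,\nabla\varphi_h\bigr)
 \;=\; \nabla\cdot\bigl(\sigma\,\mathbf{E}_h\bigr).
```

Because ∇φ_h spans exactly the null space of the curl, subtracting the part determined by the Poisson solve removes the unstable low-frequency component without altering the physically meaningful remainder.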

  4. Interpolation algorithm for asynchronous ADC-data

    NASA Astrophysics Data System (ADS)

    Bramburger, Stefan; Zinke, Benny; Killat, Dirk

    2017-09-01

This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data with constant and linear sections, together with a weighted overlap of signals transformed step by step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data is fed into synchronous digital signal processing.
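The ACT algorithm itself is beyond an abstract-sized sketch, but the underlying task, placing nonuniformly timed ADC samples onto a uniform output grid, can be illustrated with plain linear interpolation. The function name and interface below are illustrative, not from the paper:

```python
def resample_uniform(times, values, grid):
    """Linearly interpolate nonuniformly sampled data onto a uniform grid.

    times  : strictly increasing sample instants of the asynchronous ADC
    values : sample values at those instants
    grid   : the uniform output instants (points outside [times[0], times[-1]]
             are linearly extrapolated from the nearest segment)
    """
    out = []
    j = 0  # index of the segment [times[j], times[j+1]] bracketing t
    for t in grid:
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        t0, t1 = times[j], times[min(j + 1, len(times) - 1)]
        v0, v1 = values[j], values[min(j + 1, len(values) - 1)]
        w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        out.append(v0 + w * (v1 - v0))
    return out
```

A real asynchronous-to-synchronous front end would replace the linear rule with the ACT reconstruction, but the grid-walking structure is the same.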

  5. Objective Interpolation of Scatterometer Winds

    NASA Technical Reports Server (NTRS)

    Tang, Wenquing; Liu, W. Timothy

    1996-01-01

Global wind fields are produced by successive corrections that use measurements from the European Remote Sensing Satellite (ERS-1) scatterometer. The methodology is described. The wind fields at 10-meter height provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) are used to initialize the interpolation process. The interpolated wind field product (ERSI) is evaluated in terms of its improvement over the initial guess field (ECMWF) and the bin-averaged ERS-1 wind field (ERSB). Spatial and temporal differences between ERSI, ECMWF, and ERSB are presented and discussed.
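The successive-corrections methodology referenced above can be sketched in a Cressman-style form: start from the background (first-guess) field and repeatedly nudge each grid value toward nearby observations. The weight function, the pass count, and the simplification of evaluating innovations at grid points are assumptions made for illustration, not details from the paper:

```python
def successive_correction(grid_pts, background, obs_pts, obs_vals,
                          radius, passes=2):
    """Cressman-style successive corrections on a 2-D point set.

    grid_pts   : (x, y) locations of the analysis grid
    background : first-guess values at grid_pts (e.g. an ECMWF field)
    obs_pts    : (x, y) locations of the observations (e.g. scatterometer)
    obs_vals   : observed values
    radius     : influence radius of each observation
    """
    analysis = list(background)
    r2 = radius * radius
    for _ in range(passes):
        for i, (gx, gy) in enumerate(grid_pts):
            num = den = 0.0
            for (ox, oy), ov in zip(obs_pts, obs_vals):
                d2 = (gx - ox) ** 2 + (gy - oy) ** 2
                if d2 < r2:
                    w = (r2 - d2) / (r2 + d2)  # classic Cressman weight
                    # innovation approximated by obs minus current grid value
                    num += w * (ov - analysis[i])
                    den += w
            if den > 0.0:
                analysis[i] += num / den
    return analysis
```

Operationally the influence radius is shrunk between passes so early passes capture large scales and later passes refine detail.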

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, John H.; Belcourt, Kenneth Noel

Completion of the CASL L3 milestone THM.CFD.P6.03 provides a tabular material properties capability to the Hydra code. A tabular interpolation package used in Sandia codes was modified to support the needs of multi-phase solvers in Hydra. Use of the interface is described. The package was released to Hydra under a government use license. A dummy physics was created in Hydra to prototype use of the interpolation routines. Finally, a test using the dummy physics verifies the correct behavior of the interpolation for a test water table.
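Tabular property lookup of the kind described typically reduces to bilinear interpolation on a rectangular grid (for example, a water property tabulated against density and temperature). The sketch below is a generic illustration, not the Sandia package's actual interface:

```python
import bisect

def bilerp(xs, ys, table, x, y):
    """Bilinear lookup in a rectangular property table.

    xs, ys : ascending grid coordinates along each axis
    table  : table[i][j] = property value at (xs[i], ys[j])
    Queries outside the grid are clamped to the edge cells (extrapolated).
    """
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f00, f01 = table[i][j], table[i][j + 1]
    f10, f11 = table[i + 1][j], table[i + 1][j + 1]
    return ((1 - tx) * (1 - ty) * f00 + (1 - tx) * ty * f01
            + tx * (1 - ty) * f10 + tx * ty * f11)
```

Multi-phase solvers additionally need consistent derivatives and phase-boundary handling, which is where a dedicated package earns its keep over this minimal kernel.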

  7. Learning receptor positions from imperfectly known motions

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1990-01-01

    An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.

  8. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

This study employs meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on a real estate index. The method combines radial and polynomial basis functions, which guarantees that the interpolation scheme has the Kronecker delta property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm, and Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of the method.

  9. A robust interpolation procedure for producing tidal current ellipse inputs for regional and coastal ocean numerical models

    NASA Astrophysics Data System (ADS)

    Byun, Do-Seong; Hart, Deirdre E.

    2017-04-01

Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if they are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence, and to ensure the avoidance, of tidal ellipse inclination and phase interpolation errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable.
We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
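The wraparound problem and its Cartesian-coordinate fix can be made concrete: interpolating angles componentwise avoids the spurious jump at the 359°→0° "boundary", whereas a naive average of 350° and 10° gives 180°, pointing the wrong way entirely. This is a minimal sketch of the general idea, not the authors' full ellipse-parameter workflow:

```python
import math

def interp_angle_deg(a0, a1, t):
    """Interpolate between two angles (degrees) via Cartesian components.

    Converts each angle to a unit vector, linearly blends the vectors,
    and recovers the angle with atan2, so the 359->0 wraparound is a
    smooth transition rather than a 359-degree jump.
    """
    x = (1 - t) * math.cos(math.radians(a0)) + t * math.cos(math.radians(a1))
    y = (1 - t) * math.sin(math.radians(a0)) + t * math.sin(math.radians(a1))
    return math.degrees(math.atan2(y, x)) % 360.0
```

For ellipse inclinations, which are only defined modulo 180°, the same trick applies after doubling the angle before interpolation and halving it afterward.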

  10. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods; however, few have applied these methods in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its characteristic smoothness. A background-correction simulation experiment indicated that the spline interpolation method achieved a larger signal-to-background ratio (SBR) than polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods achieve larger SBR values than before correction (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776,
whereas the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
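The background-detection idea, fit a smooth curve through the baseline and subtract it, can be sketched as follows. For brevity this sketch interpolates linearly through local minima rather than fitting an actual cubic spline, so it is a simplified stand-in for the paper's method; the window size and anchor-selection rule are illustrative assumptions:

```python
def estimate_background(spectrum, window=5):
    """Estimate a smooth continuous background under a spectrum.

    Anchor points are local minima over a sliding window (peaks never win),
    plus both endpoints; the background is drawn through the anchors.
    A cubic-spline fit through the same anchors recovers the paper's setup.
    """
    n = len(spectrum)
    anchors = [0]
    for i in range(1, n - 1):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        if spectrum[i] == min(spectrum[lo:hi]):
            anchors.append(i)
    if anchors[-1] != n - 1:
        anchors.append(n - 1)
    background = [0.0] * n
    for i0, i1 in zip(anchors, anchors[1:]):
        for k in range(i0, i1 + 1):
            w = 0.0 if i1 == i0 else (k - i0) / (i1 - i0)
            background[k] = spectrum[i0] + w * (spectrum[i1] - spectrum[i0])
    return background
```

The corrected spectrum is then `[s - b for s, b in zip(spectrum, background)]`, and the SBR comparisons in the abstract are computed on such corrected spectra.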

  11. Hydrodynamic & Transport Properties of Dirac Materials in the Quantum Limit

    NASA Astrophysics Data System (ADS)

    Gochan, Matthew; Bedell, Kevin

Dirac materials are a versatile class of materials in which an abundance of unique physical phenomena can be observed. Such materials are found in all dimensions, with the shared property that their low-energy fermionic excitations behave as massless Dirac fermions and are therefore governed by the Dirac equation. The most popular Dirac material, graphene (the two-dimensional member of the class), is the focus of this work. We seek a deeper understanding of the interactions in the quantum limit within graphene. Specifically, we derive hydrodynamic and transport properties, such as the conductivity, viscosity, and spin diffusion, in the low-temperature regime where electron-electron scattering is dominant. To conclude, we examine the universal lower bound conjectured via the anti-de Sitter/conformal field theory (AdS/CFT) correspondence for the ratio of shear viscosity to entropy density. The bound, η/s ≥ ℏ/(4πk_B), is supposedly obeyed by all quantum fluids. This leads us to ask whether graphene can be considered a quantum fluid, perhaps even a "nearly perfect fluid" (NPF), and, if so, whether a violation of this bound can be found at low temperatures.

  12. View-interpolation of sparsely sampled sinogram using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Lee, Jongha; Cho, Suengryong

    2017-02-01

Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, where advanced iterative image reconstructions yield varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to find missing projection data and compared its performance with other interpolation techniques.
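As a baseline for the CNN approach, missing views can be filled by straightforward angular interpolation between the measured projections. This sketch (a sinogram as a list of per-view detector rows, with an integer upsampling factor) is illustrative rather than the authors' implementation:

```python
def interpolate_views(sinogram, factor=2):
    """Fill missing projection views by linear interpolation along angle.

    sinogram : list of measured views, each a list of detector samples
    factor   : angular upsampling factor; factor-1 synthetic views are
               inserted between each pair of measured views
    """
    full = []
    for i in range(len(sinogram) - 1):
        a, b = sinogram[i], sinogram[i + 1]
        full.append(a)
        for k in range(1, factor):
            t = k / factor
            # blend neighbouring views detector-sample by detector-sample
            full.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    full.append(sinogram[-1])
    return full
```

The completed sinogram then feeds an analytic reconstruction (e.g. filtered back-projection); the CNN replaces the linear blend with a learned mapping.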

  13. The analysis of decimation and interpolation in the linear canonical transform domain.

    PubMed

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, and can advance filter bank theory in the LCT domain.
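In the classical Fourier setting that the LCT generalizes, the two building blocks are simply sample dropping and zero insertion, each normally paired with a filter (anti-alias before decimation, low-pass after expansion). A minimal sketch of the rate-changing halves:

```python
def decimate(x, M):
    """Keep every M-th sample (an anti-alias filter would normally
    precede this to avoid spectral folding)."""
    return x[::M]

def expand(x, L):
    """Insert L-1 zeros between samples (the expander half of an
    interpolator; a low-pass filter then fills in the zeros)."""
    y = [0] * (len(x) * L)
    for i, v in enumerate(x):
        y[i * L] = v
    return y
```

The paper's contribution is working out what the paired filters and their polyphase decompositions become when the Fourier transform is replaced by an LCT.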

  14. Fitting Curves by Fractal Interpolation: AN Application to the Quantification of Cognitive Brain Processes

    NASA Astrophysics Data System (ADS)

    Navascues, M. A.; Sebastian, M. V.

    Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.
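Barnsley's affine fractal interpolation function is the attractor of an iterated function system (IFS) whose maps are fixed by the data points and by freely chosen vertical scale factors, which control the complexity of the resulting curve. A compact sketch (the data and scale factors below are illustrative):

```python
def affine_fif_maps(xs, ys, d):
    """Build the affine IFS maps w_n(x, y) = (a*x + e, c*x + d_n*y + f)
    of a Barnsley fractal interpolation function through (xs[i], ys[i]).

    d : vertical scale factors, one per interval, with |d[n]| < 1.
    Each map sends the whole graph onto the piece over one interval,
    so the attractor passes through every data point.
    """
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    maps = []
    for n in range(1, len(xs)):
        a = (xs[n] - xs[n - 1]) / (xN - x0)
        e = xs[n - 1] - a * x0
        c = (ys[n] - ys[n - 1] - d[n - 1] * (yN - y0)) / (xN - x0)
        f = ys[n - 1] - c * x0 - d[n - 1] * y0
        maps.append(lambda x, y, a=a, c=c, dn=d[n - 1], e=e, f=f:
                    (a * x + e, c * x + dn * y + f))
    return maps

def chaos_game(maps, iters=20000, seed=1):
    """Sample the FIF graph (the IFS attractor) by random iteration."""
    import random
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for k in range(iters):
        x, y = rng.choice(maps)(x, y)
        if k > 100:  # discard the transient before the orbit settles
            pts.append((x, y))
    return pts
```

Setting all d[n] = 0 recovers ordinary piecewise-linear interpolation; larger |d[n]| raises the fractal dimension of the curve, which is what makes FIFs useful for fitting rough signals such as EEG traces.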

  15. Dynamic graphs, community detection, and Riemannian geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations, such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g., the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
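The entry-wise baseline that the Riemannian methods are compared against is just elementwise linear interpolation of the snapshots' adjacency (or weight) matrices; a minimal sketch:

```python
def lerp_adjacency(A0, A1, t):
    """Entry-wise linear interpolation between two graph snapshots'
    (weighted) adjacency matrices, for t in [0, 1]."""
    return [[(1 - t) * a + t * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(A0, A1)]
```

The shortcoming this exposes is that intermediate matrices need not look like plausible graphs (edge weights drift through arbitrary values), which is the motivation for interpolating along geodesics of a suitable Riemannian manifold instead.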

  16. Definition and verification of a complex aircraft for aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1986-01-01

    Techniques are reviewed which are of value in CAD/CAM CFD studies of the geometries of new fighter aircraft. In order to refine the computations of the flows to take advantage of the computing power available from supercomputers, it is often necessary to interpolate the geometry of the mesh selected for the numerical analysis of the aircraft shape. Interpolating the geometry permits a higher level of detail in calculations of the flow past specific regions of a design. A microprocessor-based mathematics engine is described for fast image manipulation and rotation to verify that the interpolated geometry will correspond to the design geometry in order to ensure that the flow calculations will remain valid through the interpolation. Applications of the image manipulation system to verify geometrical representations with wire-frame and shaded-surface images are described.

  17. New families of interpolating type IIB backgrounds

    NASA Astrophysics Data System (ADS)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T^2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology-changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N=2 or N=1 supersymmetry. In the N=2 case it can be shown that the solutions are regular.

  18. NTS radiological assessment project: comparison of delta-surface interpolation with kriging for the Frenchman Lake region of area 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, T.A. Jr.

The primary objective of this report is to compare the results of delta-surface interpolation with kriging on four large sets of radiological data sampled in the Frenchman Lake region at the Nevada Test Site. The results of kriging, described in Barnes, Giacomini, Reiman, and Elliott, are very similar to those using the delta-surface interpolant. The other topic studied is reducing the number of sample points while obtaining results similar to those using all of the data. The positive results here suggest that great savings of time and money can be made. Furthermore, the delta-surface interpolant is viewed as a contour map and as a three-dimensional surface. These graphical representations help in the analysis of the large sets of radiological data.

  19. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e., nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, which is the most usual case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
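The even-length linear-phase kernel idea can be illustrated with the standard Keys "bicubic" kernel: sampling it at half-integer offsets yields the classical 4-tap half-pixel filter (-1, 9, 9, -1)/16, which interpolates exactly midway between samples and so realigns grids offset by half a pixel. This is a generic illustration, not the authors' exact filter design:

```python
def keys_kernel(t, a=-0.5):
    """Keys cubic convolution kernel (the usual 'bicubic' kernel)."""
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

# Sampling the kernel at half-integer offsets gives an even-length,
# linear-phase 4-tap filter: applied along rows and columns it expands
# the MS image while absorbing the half-pixel shift relative to Pan.
half_pel = [keys_kernel(t) for t in (-1.5, -0.5, 0.5, 1.5)]
```

Because the four taps are symmetric, the filter has exactly linear phase, and their unit sum means flat regions pass through unchanged.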

  20. Spatial Interpolation of Fine Particulate Matter Concentrations Using the Shortest Wind-Field Path Distance

    PubMed Central

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
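The paper's key move, replacing the Euclidean metric inside IDW with the shortest wind-field path distance, fits naturally into an IDW routine with a pluggable distance function. The sketch below is generic; computing the actual wind-field path distance (via the Gaussian-dispersion cost surface) is the part the paper contributes and is not reproduced here:

```python
import math

def idw(sample_pts, sample_vals, query, dist=None, power=2):
    """Inverse Distance Weighting with a pluggable distance function.

    dist : callable (p, q) -> distance; defaults to Euclidean. Passing a
    shortest wind-field path distance here is the substitution the paper
    proposes for pollutant concentration surfaces.
    """
    dist = dist or (lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1]))
    num = den = 0.0
    for p, v in zip(sample_pts, sample_vals):
        d = dist(p, query)
        if d == 0:
            return v          # query coincides with a monitoring station
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

Any metric satisfying d ≥ 0 works here, so the same routine serves both the Euclidean baseline and the wind-field variant, making like-for-like comparison straightforward.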

  1. Spatial interpolation of fine particulate matter concentrations using the shortest wind-field path distance.

    PubMed

    Li, Longxiang; Gong, Jianhua; Zhou, Jieping

    2014-01-01

    Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.

  2. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    PubMed

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

One important method to obtain continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium content at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium content at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods, including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
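The trend-plus-residual decomposition at the heart of EL-SP can be sketched with a class-mean trend driven by categorical environmental covariates (soil type, land use, etc.); the residuals are then what the ensemble learner interpolates. The function and data layout are illustrative assumptions, not the paper's code:

```python
def detrend_by_class(class_ids, values):
    """Split observations into a class-mean trend plus residuals.

    class_ids : categorical covariate per sample (e.g. soil-type label)
    values    : measured soil property per sample
    Returns (class_means, residuals); the residuals are the part left
    for a spatial/ensemble model, and prediction at a new site adds the
    class mean back to the interpolated residual.
    """
    groups = {}
    for c, v in zip(class_ids, values):
        groups.setdefault(c, []).append(v)
    means = {c: sum(vs) / len(vs) for c, vs in groups.items()}
    residuals = [v - means[c] for c, v in zip(class_ids, values)]
    return means, residuals
```

Because the trend follows the categorical covariates, the reconstructed surface can jump at class boundaries, which is how EL-SP reproduces the abrupt boundary information the abstract highlights.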

  3. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN.

    PubMed

    Sen, Alper; Gümüsay, M Umit; Kavas, Aktül; Bulucu, Umut

    2008-09-25

Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracy of Artificial Neural Networks (ANN) and geostatistical Kriging was compared by means of adjustment procedures. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than those of the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage, the number, position, and transmitter power of access points, and the electromagnetic radiation level. A pollution analysis of the given propagation environment was performed, demonstrating that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution owing to the low power levels used. Example interpolated electromagnetic field values for the WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.

  4. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN

    PubMed Central

    Şen, Alper; Gümüşay, M. Ümit; Kavas, Aktül; Bulucu, Umut

    2008-01-01

Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracy of Artificial Neural Networks (ANN) and geostatistical Kriging was compared by means of adjustment procedures. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than those of the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage, the number, position, and transmitter power of access points, and the electromagnetic radiation level. A pollution analysis of the given propagation environment was performed, demonstrating that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution owing to the low power levels used. Example interpolated electromagnetic field values for the WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN. PMID:27873854

  5. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    PubMed

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City, and the geo-accumulation index was selected as the pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSEs for As and Cd are 0.0661 and 0.1743, respectively. However, those interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSEs for As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.

  6. Absolute wind velocities in the lower thermosphere of Venus using infrared heterodyne spectroscopy

    NASA Technical Reports Server (NTRS)

    Goldstein, Jeffrey J.; Mumma, Michael J.; Kostiuk, Theodor; Deming, Drake; Espenak, Fred; Zipoy, David

    1991-01-01

    NASA's IR Telescope Facility and the McMath Solar Telescope have yielded absolute wind velocities in the Venus thermosphere for December 1985 to March 1987 with sufficient spatial resolution for circulation model discrimination. A qualitative analysis of beam-integrated winds indicates subsolar-to-antisolar circulation in the lower thermosphere; horizontal wind velocity was derived from a two-parameter model wind field of subsolar-antisolar and zonal components. A unique model fit common to all observing periods possessed 120 m/sec subsolar-antisolar and 25 m/sec zonal retrograde components, consistent with the Bougher et al. (1986, 1988) hydrodynamical models for 110 km.

  7. Tracer attenuation in groundwater

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Vladimir

    2011-12-01

    The self-purifying capacity of aquifers strongly depends on the attenuation of waterborne contaminants, i.e., irreversible loss of contaminant mass on a given scale as a result of coupled transport and transformation processes. A general formulation of tracer attenuation in groundwater is presented. Basic sensitivities of attenuation to macrodispersion and retention are illustrated for a few typical retention mechanisms. Tracer recovery is suggested as an experimental proxy for attenuation. Unique experimental data of tracer recovery in crystalline rock compare favorably with the theoretical model that is based on diffusion-controlled retention. Non-Fickian hydrodynamic transport has potentially a large impact on field-scale attenuation of dissolved contaminants.

  8. Technology requirements and readiness for very large vehicles

    NASA Technical Reports Server (NTRS)

    Conner, D. W.

    1979-01-01

    Common concerns of very large vehicles in the areas of economics, transportation system interfaces and operational problems were reviewed regarding their influence on vehicle configurations and technology. Fifty-four technology requirements were identified which are judged to be unique, or particularly critical, to very large vehicles. The requirements were about equally divided among the four general areas of aero/hydrodynamics, propulsion and acoustics, structures, and vehicle systems and operations. The state of technology readiness was judged to be poor to fair for slightly more than one half of the requirements. In the classic disciplinary areas, the state of technology readiness appears to be more advanced than for vehicle systems and operations.

  9. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of the effect of interpolation on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolation included more accurate curvature values for simulated data and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolation all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  10. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    PubMed

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
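The multidimensional tensor-product interpolation named above factorizes into nested one-dimensional interpolations. Below is a minimal sketch with a toy 2-D scalar field standing in for the parameterized needle images; node positions and values are made up for illustration.

```python
def lerp1d(xs, vs, x):
    """Piecewise-linear interpolation of samples vs at sorted nodes xs."""
    if x <= xs[0]:
        return vs[0]
    if x >= xs[-1]:
        return vs[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * vs[i] + t * vs[i + 1]

def tensor_interp(xs, ys, grid, x, y):
    """Tensor-product interpolation: interpolate along y for each x-node,
    then interpolate those results along x."""
    cols = [lerp1d(ys, row, y) for row in grid]  # grid[i][j] = f(xs[i], ys[j])
    return lerp1d(xs, cols, x)

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0]
grid = [[x + 10 * y for y in ys] for x in xs]   # f(x, y) = x + 10y is bilinear
val = tensor_interp(xs, ys, grid, 1.5, 0.25)    # = 1.5 + 2.5 = 4.0
```

The same nesting extends to more pose dimensions by adding one loop level per axis, which is what makes the tensor-product scheme attractive for real-time lookup.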

  11. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC suffer either greater bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and High Efficiency Video Coding (HEVC) is proposed based on rate-distortion optimization, so that interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both coding bitrate consumption and visual quality are taken into account. Because the original frames are absent, the distortion model for interpolated frames is established from the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework achieves a 21%-42% reduction in BDBR compared with the traditional approach of FRUC cascaded with coding.

  12. Strength of visual interpolation depends on the ratio of physically specified to total edge length.

    PubMed

    Shipley, T F; Kellman, P J

    1992-07-01

    We report four experiments in which the strength of edge interpolation in illusory figure displays was tested. In Experiment 1, we investigated the relative contributions of the lengths of luminance-specified edges and the gaps between them to perceived boundary clarity as measured by using a magnitude estimation procedure. The contributions of these variables were found to be best characterized by a ratio of the length of luminance-specified contour to the length of the entire edge (specified plus interpolated edge). Experiment 2 showed that this ratio predicts boundary clarity for a wide range of ratio values and display sizes. There was no evidence that illusory figure boundaries are clearer in displays with small gaps than they are in displays with larger gaps and equivalent ratios. In Experiment 3, using a more sensitive pairwise comparison paradigm, we again found no such effect. Implications for boundary interpolation in general, including perception of partially occluded objects, are discussed. The dependence of interpolation on the ratio of physically specified edges to total edge length has the desirable ecological consequence that unit formation will not change with variations in viewing distance.
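The support ratio that the last sentence describes is a one-line computation; the toy edge lengths below (hypothetical values) illustrate the scale invariance the authors note: scaling the whole display leaves the ratio, and hence predicted boundary clarity, unchanged.

```python
def support_ratio(specified_len, interpolated_len):
    """Ratio of physically specified edge length to total edge length."""
    total = specified_len + interpolated_len
    return specified_len / total

small = support_ratio(2.0, 3.0)     # a small display
large = support_ratio(20.0, 30.0)   # the same display viewed at 10x scale
```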

  13. Traffic volume estimation using network interpolation techniques.

    DOT National Transportation Integrated Search

    2013-12-01

    Kriging method is a frequently used interpolation methodology in geography, which enables estimations of unknown values at : certain places with the considerations of distances among locations. When it is used in transportation field, network distanc...

  14. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on FPGA, to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed design achieves real-time display of 1280 x 720@60Hz HD video and that, using the X-Rite ColorChecker as the color standard, the average color difference was reduced by about 30% compared with that before color correction.
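The paper's bilinear interpolation is realized in FPGA hardware; as a software illustration only (not the authors' implementation), a minimal Python equivalent for upscaling a grayscale image might look like this:

```python
def bilinear_resize(img, out_h, out_w):
    """Upscale a 2-D grayscale image (list of rows) with bilinear interpolation."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for r in range(out_h):
        for c in range(out_w):
            # Map each output pixel back to fractional input coordinates.
            y = r * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = c * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # Blend the four neighboring pixels.
            top = (1 - dx) * img[y0][x0] + dx * img[y0][x1]
            bot = (1 - dx) * img[y1][x0] + dx * img[y1][x1]
            out[r][c] = (1 - dy) * top + dy * bot
    return out

big = bilinear_resize([[0.0, 100.0], [100.0, 200.0]], 3, 3)
```

In hardware the same arithmetic reduces to two multiply-accumulate stages per pixel, which is why bilinear interpolation suits real-time FPGA pipelines.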

  15. Oversampling of digitized images. [effects on interpolation in signal processing]

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
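Interpolation via the Sampling Theorem, as the abstract recommends, is Whittaker-Shannon sinc reconstruction. A minimal sketch follows; the test signal and sampling rate are arbitrary choices for illustration, and the finite sum is only an approximation to the infinite series.

```python
import math

def sinc(x):
    """Normalized sinc function."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def shannon_interp(samples, dt, t):
    """Whittaker-Shannon reconstruction from uniform samples x[n] = x(n*dt).
    Truncated to the available samples, so accuracy degrades near the ends."""
    return sum(x_n * sinc(t / dt - n) for n, x_n in enumerate(samples))

# Band-limited test signal sampled well above its Nyquist rate.
f, dt = 1.0, 0.05                       # 20 samples per period
samples = [math.sin(2 * math.pi * f * n * dt) for n in range(400)]
mid = 200 * dt + dt / 2                 # halfway between two samples
est = shannon_interp(samples, dt, mid)
true = math.sin(2 * math.pi * f * mid)
```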

  16. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or other uncertainty methods to approximate errors.
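Richardson extrapolation from three systematically refined grids can be sketched as follows. The synthetic f(h) that mimics a second-order scheme is an assumption for illustration, not data from the paper.

```python
import math

def richardson(f1, f2, f3, r):
    """Observed order of convergence and extrapolated exact value from
    solutions on fine (f1), medium (f2), and coarse (f3) grids with a
    constant refinement ratio r."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r ** p - 1)             # extrapolated value
    return p, f_exact

# Synthetic grid study: f(h) = 3.7 + 0.5*h**2 mimics a 2nd-order scheme.
exact, C, r = 3.7, 0.5, 2.0
h = 0.1
f1, f2, f3 = (exact + C * (h * r ** k) ** 2 for k in range(3))
p, est = richardson(f1, f2, f3, r)
```

With clean second-order data the recovered order is 2 and the extrapolated value matches the exact solution; on real unstructured-grid CFD data the quality of `f1..f3` depends on the interpolation scheme the paper is searching for.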

  17. Edge directed image interpolation with Bamberger pyramids

    NASA Astrophysics Data System (ADS)

    Rosiles, Jose Gerardo

    2005-08-01

    Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm that takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.

  18. A proxy for variance in dense matching over homogeneous terrain

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. 
Although the beach had a very low variance in intensity, the topography was reconstructed entirely, indicating that interpolation was applied to a large extent. To assess this amount of interpolation, processing is done with imagery that is gradually downgraded. Linking these products with the variance indicator (SNR) yields a quantitative relation between contrast and the influence of interpolation on the topography estimation. Our proposed method is capable of providing a clear indication of variance in reconstructions from UAV photogrammetry. This indicator has a practical advantage, as it can be computed before the computationally intensive matching phase, so an acquired dataset can be tested in the field. If an area with too little contrast is identified, camera settings can be adjusted for a new flight, or additional measurements can be made through traditional means.
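The SNR-style indicator described here can be sketched per image patch: local intensity variation is compared against an assumed noise level, and low-SNR patches flag regions whose topography is likely interpolated. The patch values, noise level, and threshold below are all made up for illustration.

```python
import math

def patch_snr(patch, noise_sigma=2.0):
    """Signal-to-noise ratio of an image patch: standard deviation of the
    intensities over an assumed sensor noise level (placeholder value)."""
    n = len(patch)
    mean = sum(patch) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in patch) / n)
    return std / noise_sigma

textured = [10, 80, 35, 90, 15, 70, 25, 95]   # high-contrast patch
flat = [50, 51, 49, 50, 50, 52, 49, 50]       # low-contrast patch (e.g. bare sand)

matchable = patch_snr(textured) > 1.0 > patch_snr(flat)
```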

  19. Evaluation on the Presence of Nano Silver Particle in Improving a Conventional Water-based Drilling Fluid

    NASA Astrophysics Data System (ADS)

    Husin, H.; Ahmad, N.; Jamil, N.; Chyuan, O. H.; Roslan, A.

    2018-05-01

    Worldwide demand for oil and gas energy has been driving many oil and gas companies to explore new resource fields in ultra-deep-water environments. As deeper wells are drilled, more problems and challenges are expected. The success of a drilling operation is highly dependent on the properties of the drilling fluid. As a way to operate drilling in challenging and extreme surroundings, nanotechnology, with its unique properties, is employed. Owing to their unique physicochemical, electrical, thermal, and hydrodynamic properties and exceptional interaction potential, nanoparticles are considered the most promising materials for smart fluid design in oil and gas field applications. In this paper, the effect of nano silver particles on improving a conventional water-based drilling fluid was evaluated. Results showed that nano silver gave a significant improvement to the conventional water-based drilling fluid in terms of its rheological properties and filtration performance.

  20. Review of Combustion-acoustic Instabilities

    NASA Technical Reports Server (NTRS)

    Oyediran, Ayo; Darling, Douglas; Radhakrishnan, Krishnan

    1995-01-01

    Combustion-acoustic instabilities occur when the acoustic energy increase due to the unsteady heat release of the flame is greater than the losses of acoustic energy from the system. The problem of combustion-acoustic instability is a concern in many devices for various reasons, as each device may have a unique mechanism causing unsteady heat release rates and many have unique boundary conditions. To accurately predict and quantify combustion-acoustic instabilities, the unsteady heat release rate and boundary conditions need to be accurately determined. The present review brings together work performed on a variety of practical combustion devices. Many theoretical and experimental investigations of the unsteady heat release rate have been performed, some based on perturbations in the fuel delivery system, particularly for rocket instabilities, while others are based on hydrodynamic processes, as in ramjet dump combustors. The boundary conditions for rocket engines have been analyzed and measured extensively; however, less work has been done to measure acoustic boundary conditions in many other combustion systems.

  1. Implementation of higher-order vertical finite elements in ISSM v4.13 for improved ice sheet flow modeling over paleoclimate timescales

    NASA Astrophysics Data System (ADS)

    Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene

    2018-05-01

    Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over Paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications on the modeled ice viscosity, which can feedback onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since stress balance and thermal equations are generally performed on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method that can be implemented within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials, rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation on an idealized case and a realistic Greenland configuration. A transient experiment for the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy using the higher-order vertical interpolation compared to models using the linear vertical interpolation, despite having fewer degrees of freedom. 
This method is also shown to improve the model's ability to capture sharp thermal gradients in an ice sheet, particularly close to the bed, when compared with models using linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers while dramatically reducing model runtime for transient simulations. Results indicate that runtimes for a transient ice sheet relaxation are upwards of 5 to 7 times faster with a higher-order vertical interpolation than with a model that uses linear vertical interpolation, since the latter requires a higher number of vertical layers to achieve a similar result in simulated ice volume, basal temperature, and ice divide thickness. The findings suggest that this method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost that would otherwise be needed for a model using linear vertical interpolation.
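The accuracy gain from higher-order vertical interpolation can be illustrated outside ISSM with a toy near-bed temperature profile: piecewise-quadratic (3-node Lagrange) elements beat piecewise-linear ones at equal node count. The profile shape and node spacing below are made up for illustration and are not the paper's configuration.

```python
import math

def linear_interp(zs, vs, z):
    """Piecewise-linear interpolation of vs at sorted nodes zs."""
    for i in range(len(zs) - 1):
        if zs[i] <= z <= zs[i + 1]:
            t = (z - zs[i]) / (zs[i + 1] - zs[i])
            return (1 - t) * vs[i] + t * vs[i + 1]

def quadratic_interp(zs, vs, z):
    """Piecewise-quadratic (3-node Lagrange) interpolation; needs an odd
    number of nodes so the elements pair up."""
    for i in range(0, len(zs) - 2, 2):
        if zs[i] <= z <= zs[i + 2]:
            z0, z1, z2 = zs[i], zs[i + 1], zs[i + 2]
            v0, v1, v2 = vs[i], vs[i + 1], vs[i + 2]
            return (v0 * (z - z1) * (z - z2) / ((z0 - z1) * (z0 - z2))
                    + v1 * (z - z0) * (z - z2) / ((z1 - z0) * (z1 - z2))
                    + v2 * (z - z0) * (z - z1) / ((z2 - z0) * (z2 - z1)))

def profile(z):
    # Toy near-bed thermal boundary layer (assumed shape, for illustration).
    return math.exp(-3.0 * z)

zs = [k / 8 for k in range(9)]          # 9 vertical nodes on [0, 1]
vs = [profile(z) for z in zs]
pts = [k / 200 for k in range(201)]
err_lin = max(abs(linear_interp(zs, vs, z) - profile(z)) for z in pts)
err_quad = max(abs(quadratic_interp(zs, vs, z) - profile(z)) for z in pts)
```

With the same nine nodes, the quadratic elements track the sharp gradient with a noticeably smaller maximum error, which is the effect that lets the higher-order model use fewer vertical layers.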

  2. A Framework for the Ecogeomorphological Modelling of the Macquarie Marshes, Australia

    NASA Astrophysics Data System (ADS)

    Rodriguez, J. F.; Seoane Salazar, M.; Sandi Rojas, S.; Saco, P. M.; Riccardi, G.; Saintilan, N.; Wen, L.

    2014-12-01

    The Macquarie Marshes is a system of permanent and semi-permanent marshes, swamps and lagoons interconnected by braided channels. The Marshes are located in the semi-arid region of north-western NSW, Australia, and constitute part of the northern Murray-Darling Basin. The wetland complex serves as a nesting place and habitat for many species of water birds, fish, frogs and crustaceans, and portions of the Marshes were listed as internationally important under the Ramsar Convention. Over the last four decades, some of the wetlands have undergone degradation, which has been attributed to flow abstraction and regulation at Burrendong Dam upstream of the marshes. Among the many characteristics that make this wetland system unique is the occurrence of channel breakdown and channel avulsion, which are associated with the decline of river flow in the downstream direction typical of dryland streams. A decrease in river flow can lead to sediment deposition, decrease in channel capacity, vegetative invasion of the channel, overbank flows, and can ultimately result in channel breakdown and changes in marsh formation. A similar process on established marshes may also lead to channel avulsion and marsh abandonment. All of these geomorphological evolution processes have an effect on the established ecosystem, which in turn produces feedbacks on the hydrodynamics of the system and thereby affects the geomorphology. In order to simulate the complex dynamics of the marshes we have developed an ecogeomorphological framework that combines hydrodynamic, vegetation and channel evolution modules. The hydrodynamic simulation provides spatially distributed values of inundation extent, duration, depth and recurrence to drive a vegetation model based on species preference for hydraulic conditions. It also provides velocities and shear stresses to assess geomorphological changes. 
Regular updates of stream network, floodplain surface elevations and vegetation coverage provide feedbacks to the hydrodynamic model. We perform preliminary tests by running continuous simulation over several years and compare the results to existing hydrological, vegetation and geomorphological data to assess the model capabilities and limitations. We also analyse the effects of the implementation of a number of water management strategies.

  3. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    NASA Astrophysics Data System (ADS)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines including soil sciences for analysing, predicting and mapping distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is Tuz Lake Basin in Turkey bearing ecological and economic importance. Fertile soil plays a significant role in agricultural activities, which is one of the main industries having great impact on economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin lead to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated its condition regarding agricultural land development. This study aims to compare capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorous and boron. Soil samples collected in the field were used for interpolation analysis in which approximately 80% of data was used for interpolation modelling whereas the remaining for validation of the predicted results. Relationship between validation points and their corresponding estimated values in the same location is examined by conducting linear regression analysis. 
Eight prediction maps generated by the two interpolation methods for soil organic matter, phosphorus, lime and boron were examined based on R2 and RMSE values. The outcomes indicate that RBF performed better than LPI in predicting lime, organic matter and boron, whereas LPI showed better results for predicting phosphorus.
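An RBF interpolant of the kind compared in this study can be sketched as follows. The Gaussian basis, shape parameter, and sample values are assumptions for illustration; the study's exact RBF variant is not specified here.

```python
import numpy as np

def rbf_interpolator(points, values, eps=1.0):
    """Gaussian radial-basis-function interpolant (a sketch)."""
    pts = np.asarray(points, float)
    vals = np.asarray(values, float)
    # Pairwise distances between sample points, then the kernel matrix.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * d) ** 2), vals)

    def predict(q):
        r = np.linalg.norm(pts - np.asarray(q, float), axis=-1)
        return float(np.exp(-(eps * r) ** 2) @ weights)
    return predict

# Hypothetical soil samples: (x, y) locations -> organic matter (%).
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
om = [2.1, 2.4, 1.9, 2.2, 2.0]
f = rbf_interpolator(pts, om)
```

Because the Gaussian kernel matrix is positive definite for distinct points, the interpolant reproduces the sample values exactly, and validation against the held-out 20% of samples (as in the study) amounts to calling `predict` at their locations.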

  4. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel, based on the local motion, and then were used to combine the 3D FDK CBCT and interpolated 4D CBCT to generate the final 4D image. MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) comparing the steepness of an extracted profile from the boundary of the region-of-interest (ROI), (2) contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square-error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE were reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by a factor 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both the static and respiratory moving regions compared to 4D FDK and MKB methods.
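The final weighting step of MWR can be sketched per voxel: static voxels take the low-noise 3D FDK value and strongly moving voxels take the interpolated 4D value. The linear ramp weight function and the toy values below are assumptions for illustration, not the paper's exact weighting.

```python
def mwr_blend(img3d, img4d, motion, m_max=5.0):
    """Per-voxel motion-weighted combination of a 3D FDK volume and an
    interpolated 4D volume (flattened voxel lists, for illustration)."""
    out = []
    for v3, v4, m in zip(img3d, img4d, motion):
        w = min(m / m_max, 1.0)        # assumed linear ramp of motion weight
        out.append((1.0 - w) * v3 + w * v4)
    return out

img3d = [100.0, 100.0, 100.0]          # sharp but motion-blurred where moving
img4d = [90.0, 110.0, 120.0]           # streaky but motion-resolved
motion = [0.0, 2.5, 10.0]              # local motion magnitude per voxel (mm)
blended = mwr_blend(img3d, img4d, motion)
```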

  5. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction.

    PubMed

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-21

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel, based on the local motion, and then were used to combine the 3D FDK CBCT and interpolated 4D CBCT to generate the final 4D image. MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) comparing the steepness of an extracted profile from the boundary of the region-of-interest (ROI), (2) contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square-error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE were reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by a factor 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both the static and respiratory moving regions compared to 4D FDK and MKB methods.

  6. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed, both from theoretical domains such as computational geometry and from application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is ordinary kriging. It is popular because it is a best linear unbiased estimator, but the price for its statistical optimality is that the estimator is computationally very expensive: the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering, which achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. 
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
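The dense ordinary-kriging system that covariance tapering sparsifies can be sketched directly for a tiny example. The exponential covariance model and the sample values below are assumptions for illustration only.

```python
import numpy as np

def ordinary_kriging(points, values, q, sill=1.0, rng=2.0):
    """Ordinary kriging prediction at query q with an assumed exponential
    covariance C(h) = sill * exp(-h/rng). Solves the dense OK system
    [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]."""
    pts = np.asarray(points, float)
    vals = np.asarray(values, float)
    n = len(pts)
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = sill * np.exp(-h / rng)
    c0 = sill * np.exp(-np.linalg.norm(pts - np.asarray(q, float), axis=-1) / rng)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = A[n, :n] = 1.0      # unbiasedness: weights must sum to 1
    b = np.append(c0, 1.0)
    sol = np.linalg.solve(A, b)    # the dense solve that tapering sparsifies
    w = sol[:n]
    return float(w @ vals), w

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [1.0, 2.0, 1.5, 2.5]
pred, w = ordinary_kriging(pts, vals, (0.0, 0.0))
```

Covariance tapering multiplies `C` by a compactly supported taper so most entries become exactly zero, letting sparse iterative solvers replace the dense `solve` while keeping the unbiasedness row intact.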

  7. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their use in monitoring weather hazards such as flash floods is still not optimal. To increase their benefit for meteorological applications, GPS receivers should be collocated with meteorological sensors so that precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate and is essential for improving quantitative precipitation and flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receivers and meteorological sensors installed in the targeted area. Because of cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E; latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis used mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods interpolated all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques can handle limited data sources with high accuracy and, in turn, can be used to predict future floods.
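As a minimal sketch of the evaluation step, the three scores used to compare the tps and Krig surfaces (MAE, RMSE, and R2) can be computed directly from paired measured/estimated values; the helper below is illustrative, not the authors' code:

```python
import numpy as np

def error_metrics(observed, estimated):
    """MAE, RMSE and coefficient of determination (R^2) between
    measured values and interpolator estimates."""
    obs = np.asarray(observed, dtype=float)
    est = np.asarray(estimated, dtype=float)
    resid = obs - est
    mae = float(np.mean(np.abs(resid)))
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(resid.dot(resid))
    ss_tot = float(((obs - obs.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2
```

A perfect interpolator gives MAE = RMSE = 0 and R2 = 1; R2 falls toward 0 as the residual variance approaches the variance of the observations.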

  8. DARHT Axis-I Diode Simulations II: Geometrical Scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl A. Jr.

    2012-06-14

Flash radiography of large hydrodynamic experiments driven by high explosives is a venerable diagnostic technique in use at many laboratories. Many of the largest hydrodynamic experiments study mockups of nuclear weapons and are often called hydrotests for short. The dual-axis radiography for hydrodynamic testing (DARHT) facility uses two electron linear-induction accelerators (LIAs) to produce the radiographic source spots for perpendicular views of a hydrotest. The first of these LIAs produces a single pulse with a fixed ~60-ns pulsewidth. The second-axis LIA produces as many as four pulses within 1.6 µs, with variable pulsewidths and separations. There is a wide variety of hydrotest geometries, each with a unique radiographic requirement, so there is a need to adjust the radiographic dose for the best images. This can be accomplished on the second axis by simply adjusting the pulsewidths, but it is more problematic on the first axis. Changing the beam energy or introducing radiation attenuation also changes the spectrum, which is undesirable. Moreover, using radiation attenuation introduces significant blur, increasing the effective spot size. The dose can also be adjusted by changing the beam kinetic energy. This is a very sensitive method, because the dose scales as the ~2.8 power of the energy, but it would require retuning the accelerator. This leaves manipulating the beam current as the best means for adjusting the dose, and one way to do this is to change the size of the cathode. This method has been proposed and is being tested. This article describes simulations undertaken to develop scaling laws for use as design tools in changing the Axis-1 beam current by changing the cathode size.
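The quoted sensitivity can be made concrete as a one-line scaling relation; the exponent 2.8 is taken from the abstract, and the helper name is ours:

```python
def dose_ratio(e_new, e_old, exponent=2.8):
    """Radiographic dose scales roughly as beam kinetic energy to the
    ~2.8 power, so a small energy change gives a large dose change."""
    return (e_new / e_old) ** exponent
```

For example, a 10% energy increase raises the dose by roughly 31% (1.1**2.8 ≈ 1.31), which is why energy tuning is described as a very sensitive adjustment.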

  9. A model study of the coupled water quality and hydrodynamics in YuQiao Reservoir of Haihe River Basin, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Liu, X.; Liu, J.; Peng, W.; Wang, Y.

    2007-05-01

In recent years, eutrophication has become one of the most serious global water pollution problems, especially in reservoirs, where it threatens the security of domestic water supplies. As the sole drinking water source for Tianjin within the Haihe River basin of Hebei Province, China, YuQiao Reservoir has been polluted and its eutrophic state is serious. To clarify the physical and chemical processes of transport and transformation in the polluted water, a model package was developed to compute the hydrodynamic field and mass transport processes, including total nitrogen (TN) and total phosphorus (TP), for YuQiao Reservoir. The hydrodynamic model was driven by observed winds and daily measured flow data to simulate the seasonal water cycle of the reservoir. The transport and transformation of TN and TP were based on the unsteady diffusion equations, driven by observed meteorological forcings and external loadings, with fluxes through the reservoir bottom, plant (algal) photosynthesis, and respiration as internal sources and sinks. These equations were solved with the finite volume method and an alternating direction implicit (ADI) scheme. The model was calibrated and verified using data observed from YuQiao Reservoir in two different years. The results showed that in YuQiao Reservoir the wind-driven current is an important component of the lake circulation, while water quality decreases from east to west because of the external pollution loadings. There was good agreement between simulated and measured values. Advection is the main process driving the water quality impacts from the inflow river, while diffusion and biochemical processes dominate in the center of the reservoir. It is therefore necessary to build a pre-pond to reduce the external loadings into the reservoir.
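For illustration, one Peaceman-Rachford ADI step for a bare 2-D diffusion equation can be sketched as below. This is a minimal stand-in for the scheme class the model uses (zero-value boundaries, no advection, sources, or sinks), not the authors' code:

```python
import numpy as np

def adi_step(c, d, dt, dx):
    """One Peaceman-Rachford ADI step for dc/dt = d*(c_xx + c_yy)
    on a square grid with zero-value (Dirichlet) boundaries."""
    r = d * dt / (2.0 * dx * dx)

    def explicit(rows):
        # apply (I + r*L) along the last axis; zero outside the grid
        p = np.pad(rows, ((0, 0), (1, 1)))
        return rows + r * (p[:, 2:] - 2.0 * rows + p[:, :-2])

    def implicit(rows):
        # solve (I - r*L) x = row for every row (dense solve for clarity)
        k = rows.shape[1]
        a = (1.0 + 2.0 * r) * np.eye(k)
        i = np.arange(k - 1)
        a[i, i + 1] = -r
        a[i + 1, i] = -r
        return np.linalg.solve(a, rows.T).T

    half = implicit(explicit(c.T).T)     # explicit in y, implicit in x
    return implicit(explicit(half).T).T  # explicit in x, implicit in y
```

Splitting each time step into two one-dimensional implicit solves is what makes ADI unconditionally stable while avoiding a full 2-D linear system; a production model would replace the dense solve with a tridiagonal (Thomas) solver.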

  10. Interpolation Hermite Polynomials For Finite Element Method

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

We describe a new algorithm for the analytic calculation of high-order Hermite interpolation polynomials on the simplex and give their classification. A typical example of a triangle element, to be used in high-accuracy finite element schemes, is given.
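As a one-dimensional illustration of the Hermite interpolation idea behind such elements (the simplex/triangle construction itself is beyond a short sketch), a cubic Hermite interpolant matches both values and first derivatives at the element endpoints:

```python
def cubic_hermite(t, p0, p1, m0, m1):
    """Cubic Hermite interpolant on [0, 1] matching values p0, p1 and
    derivatives m0, m1 at the endpoints, via the four Hermite basis
    polynomials."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Matching derivatives as well as values is what distinguishes Hermite elements from Lagrange elements and yields smoother finite element approximations.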

  11. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

The growing string method is a powerful tool for the systematic study of chemical reactions with theoretical methods, allowing rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme used when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures that are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant reduction in computational cost (30-50%) is achieved.
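The Halgren-Lipscomb LST idea can be sketched as a cost function: target interatomic distances are interpolated linearly between the endpoint structures, and the new node is the geometry minimizing the weighted deviation from those targets. The 1/r^4 weighting and small Cartesian regularizer follow the usual LST form; the function below only evaluates the cost, leaving the choice of minimizer open:

```python
import numpy as np

def lst_objective(x_trial, x_r, x_p, f, w_cart=1e-6):
    """LST cost of a trial geometry x_trial at fraction f along the path
    from reactant coordinates x_r to product coordinates x_p (each an
    (n_atoms, 3) array). Distances are weighted by 1/r^4; a tiny Cartesian
    term removes translational/rotational ambiguity."""
    def dists(x):
        diff = x[:, None, :] - x[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    n = len(x_r)
    iu = np.triu_indices(n, k=1)                    # unique atom pairs
    target = (1 - f) * dists(x_r)[iu] + f * dists(x_p)[iu]
    trial = dists(x_trial)[iu]
    cost = np.sum((target - trial) ** 2 / target ** 4)
    x_lin = (1 - f) * x_r + f * x_p                 # Cartesian regularizer
    return cost + w_cart * np.sum((x_trial - x_lin) ** 2)
```

For a collinear two-atom system the linear Cartesian midpoint already reproduces the interpolated distance, so the cost there is essentially zero; for real molecules the minimizer must trade distance matching against geometric feasibility.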

  12. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

    PubMed

    Pansky, Ainat; Tenenboim, Einat

    2011-01-01

    In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

  13. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by an on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Applying the presented ideas resulted in a digitizer measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory allowed us to obtain a maximum measurement rate of five million measurements per second.

  14. Rapidity distribution of photons from an anisotropic quark-gluon plasma

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Lusaka; Roy, Pradip

    2010-05-01

We calculate the rapidity distribution of photons from Compton and annihilation processes in a quark-gluon plasma with pre-equilibrium momentum-space anisotropy. We also include contributions from hadronic matter with late-stage transverse expansion. A phenomenological model has been used for the time evolution of the hard momentum scale, phard(τ), and the anisotropy parameter, ξ(τ). As a result of the pre-equilibrium momentum-space anisotropy, we find significant modification of the photon rapidity distribution. For example, with the fixed initial condition (FIC) free-streaming (δ=2) interpolating model we observe significant enhancement of the photon rapidity distribution at fixed pT, whereas for the FIC collisionally broadened (δ=2/3) interpolating model the yield increases up to y~1, beyond which suppression is observed. With the fixed final multiplicity (FFM) free-streaming interpolating model we predict an enhancement of the photon yield that is smaller than in the FIC case. Suppression is always observed for the FFM collisionally broadened interpolating model.

  15. Studying the Global Bifurcation Involving Wada Boundary Metamorphosis by a Method of Generalized Cell Mapping with Sampling-Adaptive Interpolation

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng

In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations over one mapping step are replaced by third-order sampling-adaptive interpolations. An explicit formula for the interpolation error is derived, and a sampling-adaptive control uses it to switch back to integration where needed to maintain the accuracy of the GCMSAI computations. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated, with observations of boundary metamorphoses (full to partial and partial to partial) as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.

  16. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" of time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the cross-correlation peak position and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
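The zero-padding interpolation the authors favor can be sketched in a few lines: padding zeros onto one side of the windowed spatial-domain data before the inverse FFT samples the transform on a grid `factor` times finer, moving the detectable peak closer to its true position. The single-tone demonstration below is synthetic; the real system cross-correlates Rayleigh backscatter spectra:

```python
import numpy as np

def interpolated_spectrum(windowed, factor):
    """Pad zeros on one side of the windowed spatial-domain data before
    the inverse FFT; the transform is then sampled `factor` times more
    finely, sharpening peak localization without changing the window."""
    n = len(windowed)
    padded = np.concatenate([np.asarray(windowed, complex),
                             np.zeros((factor - 1) * n, complex)])
    return np.fft.ifft(padded)
```

Because only zeros are appended, the spectral content is unchanged; the finer sampling is what lets the cross-correlation peak be located to a fraction of an original bin.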

  17. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  18. Peatland Structural Controls on Spring Distribution

    NASA Astrophysics Data System (ADS)

    Hare, D. K.; Boutt, D. F.; Hackman, A. M.; Davenport, G.

    2013-12-01

The species richness of wetland ecosystems is sustained by the presence of discrete groundwater discharges, or springs. Springs provide thermal refugia and a source of fresh water inflow crucial for the survival of many wetland species. The subsurface drivers that control the spatial distribution of surficial springs throughout peatland complexes are poorly understood because peatlands pose many challenges for hydrologic characterization, such as internal heterogeneity, a soft, dynamic substrate, and the low gradient of peat drainage. This has previously made it difficult to collect the spatial data required for restoration projects that seek to support spring-obligate and thermally stressed species such as trout. Tidmarsh Farms is a 577-acre site in southeastern Massachusetts where 100+ years of cranberry farming have significantly altered the original peatland hydrodynamics and ecology. Farming practices such as the regular application of sand, straightening of the main channel, and addition of drainage ditches have strongly degraded this peatland ecosystem. Our research overlays non-invasive geophysical, thermal, and water-isotopic data from the Tidmarsh Farms peatland to provide a detailed visualization of how subsurface peat structure and spring patterns correlate. Ground penetrating radar (GPR) has proven particularly useful in characterizing internal peat structure and the mineral soil interface beneath peatlands; we interpolate the peatland basin at a large scale (1 km2) and compare this 3-D surface to the locations of springs on the peat platform. Springs, expressed as cold anomalies in summer and warm anomalies in winter, were located by combining fiber-optic and infrared thermal surveys, utilizing the numerous relic agricultural drainage ditches as a sampling advantage. Isotopic signatures of the spring locations are used to distinguish local from regional discharge, differences that can be explained in part by the peat basin structure delineated with GPR. The study expands our understanding of complex peat systems and will be used to inform wetland restoration based on hydrodynamic processes, yielding a more successful, resilient restoration and the desired ecologic function. Our research demonstrates how GPR in combination with thermal imagery and isotopic analysis can help characterize degraded peatlands, informing a process-based approach to ecological restoration with the ability to monitor changes through time.

  19. Forecasting Flood Hazard on Real Time: Implementation of a New Surrogate Model for Hydrometeorological Events in an Andean Watershed.

    NASA Astrophysics Data System (ADS)

    Contreras Vargas, M. T.; Escauriaza, C. R.; Westerink, J. J.

    2017-12-01

In recent years, flash floods and landslides produced by hydrometeorological events in Andean watersheds have had devastating consequences in urban and rural areas near the mountains. Two factors have hindered hazard forecasting in the region: 1) the spatial and temporal variability of climate conditions, which shortens the lead time over which storm features can be predicted; and 2) the complex basin morphology that characterizes the Andean region, which increases the velocity and sediment transport capacity of the flows that reach urbanized areas. Hydrodynamic models have become key tools for assessing potential flood risks. Two-dimensional (2D) models based on the shallow-water equations are widely used to determine, with high accuracy and resolution, the evolution of flow depths and velocities during floods. However, high computational requirements and long run times have encouraged research into more efficient methodologies for predicting flood propagation in real time. Our objective is to develop new surrogate models (i.e., metamodeling) to quasi-instantaneously evaluate flood propagation in the Andes foothills. By means of a small set of parameters, we define storms for a wide range of meteorological conditions. Using a 2D hydrodynamic model coupled in mass and momentum with the sediment concentration, we compute high-fidelity simulations of a set of floods. The results are used as a database for interpolation/regression, efficiently approximating flow depths and velocities at critical points during real storms. This is the first application of surrogate models to flood propagation in the Andes foothills, improving the efficiency of flood hazard prediction. The model also opens new opportunities to improve early warning systems, helping decision makers inform citizens and enhancing the resilience of cities near mountain regions.
This work has been supported by CONICYT/FONDAP grant 15110017, and by the Vice Chancellor of Research of the Pontificia Universidad Catolica de Chile, through the Research Internationalization Grant, PUC1566 funded by MINEDUC.
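A toy version of the surrogate step can be sketched as radial-basis interpolation over the storm-parameter space; the Gaussian kernel and the two made-up storm parameters are assumptions for illustration, not the authors' exact metamodel:

```python
import numpy as np

def rbf_surrogate(params_train, depths_train, eps=1.0):
    """Fit a Gaussian radial-basis surrogate mapping a small set of storm
    parameters to a flow depth computed by a high-fidelity 2D model, and
    return a fast predictor for new storms."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    k = kernel(params_train, params_train)
    # tiny ridge term keeps the solve stable
    weights = np.linalg.solve(k + 1e-10 * np.eye(len(k)), depths_train)

    def predict(params_new):
        return kernel(np.atleast_2d(params_new), params_train) @ weights

    return predict
```

At a sampled storm the surrogate reproduces the high-fidelity result almost exactly (up to the tiny ridge term), which is the interpolation property that makes quasi-instantaneous hazard evaluation possible between precomputed runs.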

  20. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network

    PubMed Central

    Zhou, Shenglu; Su, Quanlong; Yi, Haomin

    2017-01-01

Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution is of great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City, and the geo-accumulation index was selected as the pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy, with MSEs for As and Cd of 0.0661 and 0.1743, respectively; however, the interpolation results show significantly skewed distributions and strong spatial autocorrelation. With Kriging interpolation, the MSEs for As and Cd are 0.0804 and 0.2983, respectively, and the estimation results are less accurate. Combining the two methods can improve the accuracy of Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363
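The pollution index named in the abstract is the Müller geo-accumulation index, which compares a measured concentration against 1.5 times the geochemical background; a minimal implementation (the background values in the usage note are placeholders, not the study's):

```python
import numpy as np

def geo_accumulation_index(concentration, background):
    """Muller geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn)),
    where Cn is the measured topsoil concentration and Bn the
    geochemical background value for the element."""
    cn = np.asarray(concentration, dtype=float)
    bn = np.asarray(background, dtype=float)
    return np.log2(cn / (1.5 * bn))
```

Igeo = 0 marks the unpolluted/polluted boundary (Cn equal to 1.5 Bn), and each unit above that corresponds to a doubling of the enrichment.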

  1. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy for 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and remained accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error of manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.

  2. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas using the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) than with kriging Gaussian, kriging spherical, or cokriging interpolation when analyzing data from wells across the entire state of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas, or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001), regardless of interpolation method. In regression analysis, the best models were obtained when well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW were better than those with kriging in every area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels in Texas depends on both the interpolation method and the wells' geographic distribution and characteristics. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
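A compact sketch of the comparison machinery: IDW estimation plus the leave-one-out cross-validation correlation used to score each interpolator. The coordinates are synthetic and the power-2 weighting is an assumption (the study may have tuned it):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted estimate at the query points."""
    d = np.sqrt(((xy_query[:, None, :] - xy_known[None, :, :]) ** 2).sum(-1))
    d = np.maximum(d, 1e-12)              # guard against exact coincidence
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def loocv_corr(xy, values, power=2.0):
    """Leave-one-out cross-validation: estimate each well from all the
    others, then correlate measured vs. estimated values."""
    est = np.array([idw(np.delete(xy, i, axis=0), np.delete(values, i),
                        xy[i:i + 1], power)[0]
                    for i in range(len(values))])
    return np.corrcoef(values, est)[0, 1], est
```

For a smoothly varying field the LOOCV correlation is high; comparing this score across IDW, kriging variants, and cokriging is exactly the selection procedure the abstract describes.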

  3. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process generated precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulations. Multiple simulations showed noticeable differences between the input alternatives generated by three different interpolation schemes. Differences appear in overall simulation error against the observations, in the degree of explained variability, and in seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
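The two-step idea can be sketched as follows: logistic regression for wet/dry occurrence, then a separate least-squares amount model fitted on wet days only. Plain gradient descent and a single synthetic feature stand in for whatever estimation the authors actually used:

```python
import numpy as np

def fit_two_step(features, precip, iters=2000, lr=0.1):
    """Two-step daily precipitation estimator: (1) logistic regression
    for occurrence, (2) linear least squares for amount on wet days."""
    x = np.column_stack([np.ones(len(features)), features])
    wet = (precip > 0).astype(float)

    w = np.zeros(x.shape[1])
    for _ in range(iters):                       # step 1: occurrence model
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w -= lr * x.T @ (p - wet) / len(wet)

    xw = x[precip > 0]                           # step 2: amount on wet days
    b, *_ = np.linalg.lstsq(xw, precip[precip > 0], rcond=None)

    def predict(f_new):
        xn = np.column_stack([np.ones(len(f_new)), f_new])
        occ = 1.0 / (1.0 + np.exp(-xn @ w)) > 0.5
        return np.where(occ, xn @ b, 0.0)

    return predict
```

Splitting occurrence from amount is what lets the scheme reproduce the intermittency of daily precipitation, which a single regression on all days smears out.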

  4. Generation of real-time mode high-resolution water vapor fields from GPS observations

    NASA Astrophysics Data System (ADS)

    Yu, Chen; Penna, Nigel T.; Li, Zhenhong

    2017-02-01

Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps, which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation-dependent and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time-mode zenith total delays at 41 GPS stations over 1 year using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validation against the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product shows 1.7 mm RMS differences using the decoupled model, compared with 2.0 mm for the previous interpolation model. These results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, and the errors are similar over flat and mountainous terrain and for both inland and coastal areas.
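The separation the model performs can be caricatured in a few lines: fit a stratified component that decays exponentially with station height and treat the remainder as turbulent. This single least-squares pass with an assumed exponential form omits the iteration and the horizontal interpolation of the residual that give the actual model its power:

```python
import numpy as np

def decompose_ztd(heights, ztd):
    """Split zenith total delays into a height-dependent (stratified)
    part, modeled here as a*exp(-beta*h), plus a turbulent residual.
    A simplified, single-pass stand-in for the iterative decomposition."""
    y = np.log(ztd)                                   # log-linearize the fit
    A = np.column_stack([np.ones_like(heights), -heights])
    (log_a, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    stratified = np.exp(log_a - beta * heights)
    return stratified, ztd - stratified
```

The point of decoupling is that the stratified part can be extrapolated safely to unobserved elevations, while the turbulent residual is interpolated horizontally.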

  5. Accuracy of stream habitat interpolations across spatial scales

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2013-01-01

Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data, and considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To address the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 m cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales, with raster cell sizes ranging from 0.11 m2 to 16 m2. Analysis of the predictive maps showed a log-linear decay in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased but was maintained at up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. The results indicated that accuracy retention was suitable for assessment and management purposes at scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.

  6. 28 CFR 16.103 - Exemption of the INTERPOL-United States National Central Bureau (INTERPOL-USNCB) System.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accounting disclosures would place the subject of an investigation on notice that he is under investigation... OF JUSTICE PRODUCTION OR DISCLOSURE OF MATERIAL OR INFORMATION Exemption of Records Systems Under the...

  7. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension of standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in its use of a ray interpolation scheme based on approximating the wave front as a `locally spherical' surface, and of a `first arrival mode' that reduces computation times where only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolation of the new ray. Our WFC technique is illustrated using a realistic velocity model based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles, and geometrical spreading factors, all of which are interpolated onto a regular grid. We compare geometrical spreading factors calculated using two methods: from the ray Jacobian, and by taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values, which can be spread over a large area because only a few initial rays are traced in WFC.
We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.
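
    The `locally spherical' idea can be sketched as a spherical linear interpolation (slerp) of ray direction vectors: a new ray between two traced neighbours is taken along the great circle joining their directions. This is an illustrative simplification, not the authors' exact interpolation scheme.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between unit vectors a and b,
    t in [0, 1]; the result stays on the unit sphere, mimicking a
    locally spherical wave front."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-12:                      # nearly parallel rays: linear blend
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

r1 = np.array([1.0, 0.0, 0.0])             # two neighbouring ray directions
r2 = np.array([0.0, 1.0, 0.0])
new_ray = slerp(r1, r2, 0.5)               # interpolated ray, still unit length
```

    Unlike a straight linear blend, the interpolated direction needs no renormalization and no derivatives with respect to take-off angles.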

  8. GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales

    USGS Publications Warehouse

    He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.

    2000-01-01

    To reconstruct forest landscapes of the pre-European settlement period, we developed a GIS interpolation approach that converts witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better describe continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million ha landscape in northern Wisconsin, U.S.A. at different scales, and we discuss the implications of the processing results at each scale. Compared with traditional GLO mapping, which has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species, and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 x 1-section resolution preserved spatial information but derived the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 x 2-section resolution were in the order of three to four times for the least common species, two to three times for the medium to most common species, and one to two times for the most common or highly contagious species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on species importance values derived for all species. The results provide a unique basis for further study of land cover changes occurring after European settlement.
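
    The vegetation indices named above combine stem counts and basal area. A minimal sketch with hypothetical tallies (species names and numbers are illustrative only):

```python
# Hypothetical witness-tree tallies for one grid cell (species names,
# stem counts and summed basal areas are illustrative only).
tally = {
    "hemlock":      {"count": 14, "basal_area": 5.2},
    "sugar maple":  {"count": 22, "basal_area": 4.1},
    "yellow birch": {"count": 6,  "basal_area": 1.3},
}

total_count = sum(s["count"] for s in tally.values())
total_ba = sum(s["basal_area"] for s in tally.values())

importance = {}
for species, s in tally.items():
    rel_density = 100.0 * s["count"] / total_count        # % of all stems
    rel_dominance = 100.0 * s["basal_area"] / total_ba    # % of basal area
    importance[species] = (rel_density + rel_dominance) / 2.0
```

    By construction the importance values sum to 100 across the species in a cell, so they can be compared directly between resolutions.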

  9. Coupling between shear and bending in the analysis of beam problems: Planar case

    NASA Astrophysics Data System (ADS)

    Shabana, Ahmed A.; Patel, Mohil

    2018-04-01

    The interpretation of invariants, such as the curvatures that uniquely define the bending and twist of space curves and surfaces, is fundamental in the formulation of the beam and plate elastic forces. Accurate representation of curve and surface invariants, which enter into the definition of the strain energy equations, is particularly important in the case of large displacement analysis. This paper discusses this important subject in view of the fact that shear and bending are independent modes of deformation and do not have kinematic coupling; this is despite the fact that kinetic coupling may exist. The paper shows, using simple examples, that shear without bending and bending without shear at an arbitrary point and along a certain direction are scenarios that higher-order finite elements (FE) can represent with a degree of accuracy that depends on the order of interpolation and/or mesh size. The FE representation of these two kinematically uncoupled modes of deformation is evaluated in order to examine the effect of the order of the polynomial interpolation on the accuracy of representing these two independent modes. It is also shown in this paper that not all the curvature vectors contribute to bending deformation. In view of the conclusions drawn from the analysis of simple beam problems, the material curvature used in several previous investigations is evaluated both analytically and numerically. The problems associated with the material curvature matrix, obtained using the rotation of the beam cross-section, and the fundamental differences between this material curvature matrix and the Serret-Frenet curvature matrix are discussed.

  10. An approach for delineating drinking water wellhead protection areas at the Nile Delta, Egypt.

    PubMed

    Fadlelmawla, Amr A; Dawoud, Mohamed A

    2006-04-01

    In Egypt, groundwater production has a high priority. Protecting groundwater quality, especially where the water is used for drinking, by delineating protection areas around drinking water wellheads for strict land-use restrictions is therefore essential. Delineation methods are numerous; nonetheless, the unique hydrogeological, institutional and social conditions of the Nile Delta region dictate a customized approach. The analysis of the hydrological conditions and land ownership at the Nile Delta indicates the need for an accurate methodology. On the other hand, attempting to calculate the wellhead protection areas around each of the drinking wells (more than 1500) requires data, human resources, and time that exceed the capabilities of the groundwater management agency. Accordingly, a combination of two methods (simplified variable shapes and numerical modeling) was adopted. Sensitivity analyses carried out using hypothetical modeling conditions identified the pumping rate, clay thickness, hydraulic gradient, vertical conductivity of the clay, and the hydraulic conductivity as the most significant parameters in determining the dimensions of the wellhead protection areas (WHPAs). Tables of WHPA dimensions were calculated using synthetic modeling conditions representing the most common ranges of the significant parameters. Specific WHPA dimensions can then be obtained by interpolation, utilizing the produced tables along with the operational and hydrogeological conditions of the well under consideration. To simplify the interpolation of the appropriate WHPA dimensions from the calculated tables, an interactive computer program was written. The program accepts the real-time data of the significant parameters as its input and gives the appropriate WHPA dimensions as its output.
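
    The table-lookup-plus-interpolation step can be sketched as bilinear interpolation over two of the significant parameters. The table values below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical lookup table of WHPA radius (m) precomputed on a grid of
# pumping rate (m3/day) and clay thickness (m); all values illustrative.
rates = np.array([500.0, 1000.0, 2000.0])
clay = np.array([5.0, 10.0, 20.0])
radius = np.array([[120.0, 100.0, 80.0],
                   [170.0, 140.0, 110.0],
                   [240.0, 200.0, 160.0]])   # rows: rate, cols: clay

def whpa_radius(q, c):
    """Bilinear interpolation of the protection-area radius from the
    precomputed table for a well's pumping rate q and clay thickness c."""
    i = np.clip(np.searchsorted(rates, q) - 1, 0, len(rates) - 2)
    j = np.clip(np.searchsorted(clay, c) - 1, 0, len(clay) - 2)
    tq = (q - rates[i]) / (rates[i + 1] - rates[i])
    tc = (c - clay[j]) / (clay[j + 1] - clay[j])
    r0 = (1 - tc) * radius[i, j] + tc * radius[i, j + 1]
    r1 = (1 - tc) * radius[i + 1, j] + tc * radius[i + 1, j + 1]
    return float((1 - tq) * r0 + tq * r1)
```

    The full approach would interpolate over all significant parameters; two dimensions are shown to keep the sketch readable.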

  11. 3D voxel modelling of the marine subsurface: the Belgian Continental Shelf case

    NASA Astrophysics Data System (ADS)

    Hademenos, Vasileios; Kint, Lars; Missiaen, Tine; Stafleu, Jan; Van Lancker, Vera

    2017-04-01

    The need for marine space grows by the year. Dredging, wind farms, aggregate extraction and many other activities take up more space than ever before. As a result, an accurate model that describes the properties of the areas in use is a priority. To address this need, a 3D voxel model of the subsurface of the Belgian part of the North Sea has been created within the scope of the Belgian Science Policy project TILES ('Transnational and Integrated Long-term Marine Exploitation Strategies'). Since drilling boreholes in the marine environment is a costly endeavour, borehole data are relatively scarce, and seismic data have therefore been incorporated to improve the data coverage. Lithostratigraphic units have been defined and lithoclasses are attributed to the voxels using a stochastic interpolation. As a result, each voxel contains a unique value of one of 7 lithological classes (spanning in grain size from clay to gravel) in association with the geological layer it belongs to. In addition, other forms of interpolation, such as sequential indicator simulation, have allowed us to calculate the probability of occurrence of each lithoclass, providing additional information from which the uncertainty of the model can be derived. The resulting 3D voxel model gives a detailed image of the distribution of different sediment types and provides valuable insight into the different geological settings. The voxel model also allows estimation of resource volumes (e.g. the availability of particular sand classes), enabling a more targeted exploitation. The primary information of the model is related to geology, but the model can additionally host any type of information.
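
    Per-voxel lithoclass probabilities from indicator simulation can be reduced to a most-probable class plus an uncertainty measure; the probabilities below are illustrative, not taken from the TILES model:

```python
import numpy as np

# Hypothetical per-voxel probabilities of the 7 lithoclasses
# (clay ... gravel) as produced by sequential indicator simulation.
p = np.array([0.05, 0.10, 0.40, 0.25, 0.10, 0.05, 0.05])

best = int(np.argmax(p))                         # most probable lithoclass
# Normalised Shannon entropy as a per-voxel uncertainty
# (0 = class is certain, 1 = all classes equally likely)
entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(p.size))
```

    Mapping the entropy voxel by voxel gives exactly the kind of model-uncertainty layer the abstract describes alongside the lithoclass volume.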

  12. Summary and status of the Horizons ephemeris system

    NASA Astrophysics Data System (ADS)

    Giorgini, J.

    2011-10-01

    Since 1996, the Horizons system has provided searchable access to JPL ephemerides for all known solar system bodies, several dozen spacecraft, planetary system barycenters, and some libration points. Responding to 18 400 000 requests from 300 000 unique addresses, the system has recently averaged 420 000 ephemeris requests per month. Horizons is accessed and automated using three interfaces: interactive telnet, web-browser form, and e-mail command-file. Asteroid and comet ephemerides are numerically integrated from JPL's database of initial conditions. This small-body database is updated hourly by a separate process as new measurements and discoveries are reported by the Minor Planet Center and automatically incorporated into new JPL orbit solutions. Ephemerides for other objects are derived by interpolating previously developed solutions whose trajectories have been represented in a file. For asteroids and comets, such files may be dynamically created and transferred to users, effectively recording integrator output. These small-body SPK files may then be interpolated by user software to reproduce the trajectory without duplicating the numerically integrated n-body dynamical model or PPN equations of motion. Other Horizons output is numerical and in the form of plain-text observer, vector, osculating element, or close-approach tables, typically expected to be read by other software as input. About one hundred quantities can be requested in various time-scales and coordinate systems. For JPL small-body solutions, this includes statistical uncertainties derived from measurement covariance and state transition matrices. With the exception of some natural satellites, Horizons is consistent with DE405/DE406, the IAU 1976 constants, ITRF93, and IAU2009 rotational models.
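
    Interpolating a stored trajectory rather than re-integrating it can be sketched with a Chebyshev series fitted to sampled positions. The sampled "trajectory" below is synthetic, not Horizons data, and the fit is only meant to illustrate the idea of evaluating a compact polynomial representation at arbitrary epochs:

```python
import numpy as np

# Synthetic 1-D position component sampled at 33 epochs over one day
t = np.linspace(0.0, 1.0, 33)
x = np.cos(2.0 * np.pi * t) + 0.1 * t

# Fit a Chebyshev series once; afterwards any epoch can be evaluated
# cheaply without re-running the dynamical model
cheb = np.polynomial.Chebyshev.fit(t, x, deg=12)
x_interp = float(cheb(0.437))
x_true = np.cos(2.0 * np.pi * 0.437) + 0.1 * 0.437
```

    For a smooth trajectory segment, a modest-degree series already reproduces the sampled motion to high accuracy.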

  13. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results than two existing techniques: nearest-neighbour and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
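
    The general idea of estimating SNR from the autocorrelation of an image line, with the noise-free zero-lag value recovered by Savitzky-Golay smoothing plus cubic-spline extrapolation, can be sketched on a synthetic 1-D signal. All parameters here are illustrative and this is not the authors' exact pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
n = 2000
clean = np.sin(np.linspace(0.0, 8.0 * np.pi, n))     # synthetic 'image line'
noisy = clean + 0.3 * rng.standard_normal(n)

# Normalised autocorrelation; the zero-lag value carries the noise
# variance, while neighbouring lags are essentially noise-free
z = noisy - noisy.mean()
ac = np.correlate(z, z, "full")[n - 1:]
ac /= ac[0]

# Smooth lags 1..10 and extrapolate the noise-free peak back to lag 0
lags = np.arange(1, 11)
smooth = savgol_filter(ac[1:11], window_length=7, polyorder=2)
rho0 = float(CubicSpline(lags, smooth)(0.0))

snr = rho0 / (1.0 - rho0)          # variance-ratio SNR estimate
```

    With signal variance 0.5 and noise variance 0.09, the true variance-ratio SNR is about 5.6, which the extrapolated estimate should approach.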

  14. How to design a cartographic continuum to help users to navigate between two topographic styles?

    NASA Astrophysics Data System (ADS)

    Ory, Jérémie; Touya, Guillaume; Hoarau, Charlotte; Christophe, Sidonie

    2018-05-01

    Geoportals and geovisualization tools provide users with various cartographic abstractions that describe a geographical space differently. Our purpose is to design cartographic continuums, i.e. sets of in-between maps allowing users to navigate between two topographic styles. This paper addresses the problem of interpolation between two topographic abstractions with different styles. We detail our approach in two steps. First, we set up a comparison in order to identify which structural elements of a cartographic abstraction should be interpolated. Second, we propose an approach based on two design methods for map interpolation.
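
    One structural element that can be interpolated between two styles is symbol colour. A minimal sketch of a linear blend (the colours are invented examples, not taken from the paper's styles):

```python
def lerp_rgb(c0, c1, t):
    """Linear blend of two RGB colours for an in-between map style,
    t in [0, 1]; t = 0 gives the first style, t = 1 the second."""
    return tuple(round((1.0 - t) * a + t * b) for a, b in zip(c0, c1))

# Hypothetical road colours in the two topographic styles
road_style_a = (219, 68, 55)
road_style_b = (244, 180, 0)
steps = [lerp_rgb(road_style_a, road_style_b, t) for t in (0.25, 0.5, 0.75)]
```

    Evaluating a small set of t values yields the in-between maps of a continuum; real style interpolation would treat other parameters (line widths, label density, generalization) the same way.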

  15. Assessment of interaction-strength interpolation formulas for gold and silver clusters

    NASA Astrophysics Data System (ADS)

    Giarrusso, Sara; Gori-Giorgi, Paola; Della Sala, Fabio; Fabiano, Eduardo

    2018-04-01

    The performance of functionals based on the idea of interpolating between the weak- and the strong-interaction limits of the global adiabatic-connection integrand is carefully studied for the challenging case of noble-metal clusters. Different interpolation formulas are considered and various features of this approach are analyzed. It is found that these functionals, when used as a correlation correction to Hartree-Fock, are quite robust for the description of atomization energies, while performing less well for ionization potentials. Future directions that can be envisaged from this study and a previous one on main group chemistry are discussed.
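
    One commonly used interaction-strength interpolation has the Seidl-Perdew-Levy (SPL) form, which matches the value and slope of the integrand at zero coupling and its strong-coupling limit. The sketch below uses illustrative numbers, not quantities computed for a real cluster:

```python
import math

def w_spl(lam, w0, w0_prime, winf):
    """SPL-type interpolation of the adiabatic-connection integrand
    between the weak-coupling data (value w0, initial slope
    w0_prime < 0) and the strong-coupling value winf.  The functional
    form follows the Seidl-Perdew-Levy interpolant; inputs here are
    illustrative placeholders."""
    chi = w0_prime / (winf - w0)       # positive for physical inputs
    return winf + (w0 - winf) / math.sqrt(1.0 + 2.0 * chi * lam)

w_weak = w_spl(0.0, -1.0, -0.5, -2.0)      # recovers w0 at lambda = 0
w_strong = w_spl(1.0e6, -1.0, -0.5, -2.0)  # approaches winf for large lambda
```

    Integrating such an interpolant over the coupling constant from 0 to 1 yields the correlation correction the abstract evaluates.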

  16. LPV Controller Interpolation for Improved Gain-Scheduling Control Performance

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Kim, SungWan

    2002-01-01

    In this paper, a new gain-scheduling control design approach is proposed by combining LPV (linear parameter-varying) control theory with interpolation techniques. Improvement of the gain-scheduled controllers is achieved through local synthesis of Lyapunov functions and continuous construction of a global Lyapunov function by interpolation. It has been shown that this combined LPV control design scheme is capable of improving closed-loop performance by exploiting local performance improvements. The gain of the LPV controller also changes continuously across the parameter space. The advantages of the newly proposed LPV control scheme are demonstrated through a detailed AMB controller design example.
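
    The continuous-gain idea can be sketched by blending locally synthesized gains across the scheduling parameter. The gain values below are invented placeholders, not controllers from the paper:

```python
import numpy as np

# Hypothetical state-feedback gains synthesized at three frozen values
# of the scheduling parameter rho; the scheduled gain is blended
# continuously between them (values illustrative only).
rho_grid = np.array([0.0, 0.5, 1.0])
K_grid = np.array([[2.0, 1.0],
                   [2.6, 1.4],
                   [3.4, 2.0]])          # one gain row per grid point

def scheduled_gain(rho):
    """Piecewise-linear interpolation of the gain vector, giving a
    controller gain that varies continuously with rho."""
    return np.array([np.interp(rho, rho_grid, K_grid[:, j])
                     for j in range(K_grid.shape[1])])
```

    At the grid points the locally synthesized gains are recovered exactly, so the scheduled controller is continuous in the parameter, as the abstract requires.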

  17. Antenna pattern interpolation by generalized Whittaker reconstruction

    NASA Astrophysics Data System (ADS)

    Tjonneland, K.; Lindley, A.; Balling, P.

    Whittaker reconstruction is an effective tool for interpolation of band limited data. Whittaker originally introduced the interpolation formula termed the cardinal function as the function that represents a set of equispaced samples but has no periodic components of period less than twice the sample spacing. It appears that its use for reflector antennas was pioneered in France. The method is now a useful tool in the analysis and design of multiple beam reflector antenna systems. A good description of the method has been given by Bucci et al. This paper discusses some problems encountered with the method and their solution.
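
    The cardinal function can be sketched directly: for band-limited data sampled at spacing dx, the truncated sinc series below reproduces the underlying function (an illustrative implementation, not an antenna-analysis code):

```python
import numpy as np

def whittaker_interp(samples, dx, x):
    """Truncated cardinal (sinc) series: interpolates band-limited data
    sampled at spacing dx; exact at the sample points and accurate away
    from the ends of the record."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((x / dx) - n)))

dx = 0.25
n = np.arange(64)
f = np.sin(2.0 * np.pi * 0.5 * n * dx)   # 0.5 cycles/unit, below Nyquist
value = whittaker_interp(f, dx, 7.9)     # off-sample evaluation
```

    Because the series has no periodic components of period less than twice the sample spacing, it is the natural reconstruction for band-limited antenna pattern samples.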

  18. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
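
    The two-component stochastic model for an interpolated correction can be sketched as follows; the weights, variances, and the distance-dependent discrepancy model are all illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical interpolation of an atmospheric correction from three
# reference stations to a user location.
w = np.array([0.5, 0.3, 0.2])            # interpolation weights (sum to 1)
corr = np.array([0.112, 0.118, 0.109])   # station corrections (m)
sig = np.array([0.004, 0.005, 0.006])    # std. dev. of each estimate (m)
d_user = 35.0                            # distance proxy to stations (km)
sigma_disc = 1e-4 * d_user               # assumed discrepancy std (m)

# Pseudo-observation for the user: interpolated value plus a variance
# combining (i) the stations' estimation noise and (ii) the spatial
# discrepancy between stations and user
corr_user = float(w @ corr)
var_user = float(np.sum(w**2 * sig**2) + sigma_disc**2)
```

    Treating the correction as deterministic amounts to setting var_user to zero, which is exactly the unrealistic assumption the abstract argues against.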

  19. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction.

    PubMed

    Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan

    2017-04-04

    Image super-resolution using a self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images while still preserving enough structure of the image, such as the texture. This paper presents a new single image super-resolution method based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via the optimal fractional-order gradient is first constructed according to the image similarity, and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom, which helps optimize the implementation quality, because an extra free parameter, the fractional order α, is available. The proposed method is able to produce rich texture detail while still maintaining structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
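
    A fractional-order gradient is commonly discretised with the Grünwald-Letnikov difference, where the order α is the extra free parameter mentioned above. This generic 1-D stand-in illustrates the operator, not the paper's exact scheme:

```python
def gl_fractional_diff(f, alpha, h, n_terms=64):
    """Gruenwald-Letnikov fractional difference of order alpha for a
    sampled signal f with spacing h; alpha = 1 reduces to the ordinary
    backward difference."""
    # Recursive binomial-type coefficients (-1)^k * C(alpha, k)
    c = [1.0]
    for k in range(1, n_terms):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    out = []
    for i in range(len(f)):
        acc = sum(ck * f[i - k] for k, ck in enumerate(c) if i - k >= 0)
        out.append(acc / h ** alpha)
    return out

# Half-order 'gradient' of a sampled ramp of squares (illustrative)
grad_half = gl_fractional_diff([0.0, 1.0, 4.0, 9.0], 0.5, 1.0)
```

    Sweeping alpha between 0 and 2 is what gives the method its self-optimizing freedom between identity-like and second-derivative-like behaviour.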

  20. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
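
    The MSE and PSNR used for the quantitative assessment can be computed as below (a peak value of 255 is assumed for 8-bit data):

```python
import numpy as np

def mse_psnr(reference, upsampled, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio (dB) between a
    high-resolution reference and an upsampled image; peak assumes
    8-bit data unless overridden."""
    ref = np.asarray(reference, dtype=float)
    up = np.asarray(upsampled, dtype=float)
    err = float(np.mean((ref - up) ** 2))
    psnr = float("inf") if err == 0.0 else 10.0 * np.log10(peak**2 / err)
    return err, psnr
```

    Running this for each interpolation method against the original high-resolution volume gives the comparison tables the abstract describes.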

  2. Applicability of Various Interpolation Approaches for High Resolution Spatial Mapping of Climate Data in Korea

    NASA Astrophysics Data System (ADS)

    Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S.

    2018-04-01

    The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing them with the forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Of the observation points, 80 percent (478 points) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.
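
    IDW, the method found best for precipitation, admits a very compact sketch; the station coordinates and rainfall values below are invented for illustration:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting: the prediction is a weighted mean of
    the observations with weights 1/d**power; exact at observation
    points."""
    xy_obs = np.asarray(xy_obs, dtype=float)
    z_obs = np.asarray(z_obs, dtype=float)
    d = np.linalg.norm(xy_obs - np.asarray(xy_query, dtype=float), axis=1)
    if np.any(d == 0.0):                  # query coincides with a station
        return float(z_obs[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_obs) / np.sum(w))

# Hypothetical rainfall (mm) at four stations on a unit square
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([2.0, 4.0, 6.0, 8.0])
center = idw(stations, rain, np.array([0.5, 0.5]))
```

    Evaluating idw over every cell of the 30 m grid produces the interpolated surface that is then scored against the held-out validation points.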

  4. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of the bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulations. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high-resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.

  5. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.

  6. An Extended Kriging Method to Interpolate Near-Surface Soil Moisture Data Measured by Wireless Sensor Networks

    PubMed Central

    Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng

    2017-01-01

    In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method to interpolate with the aid of remote sensing images. The underlying idea is to extend the traditional Kriging by introducing spectral variables, and operating on spatial and spectral combined space. The algorithm has been applied to WSN-measured soil moisture data in the HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods are applied: Ordinary Kriging (OK), which used WSN data only, Co-kriging and KED, both of which integrated remote sensing data as covariate. Visual inspections indicate that the result from Extended Kriging shows more spatial details than that of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351
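
    The Ordinary Kriging baseline that the Extended Kriging method generalizes (by adding spectral coordinates to the space) can be sketched compactly. The exponential variogram parameters, station layout and soil-moisture values below are illustrative, not HiWATER data:

```python
import numpy as np

def variogram(h, sill=1.0, rang=50.0):
    """Exponential variogram model (parameters illustrative)."""
    return sill * (1.0 - np.exp(-3.0 * h / rang))

def ordinary_kriging(xy, z, q):
    """Ordinary kriging at query point q from observations (xy, z).
    The extra Lagrange-multiplier row enforces weights summing to 1."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - q, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [60.0, 60.0]])
vals = np.array([0.21, 0.35, 0.30, 0.28])     # e.g. volumetric soil moisture
est = ordinary_kriging(pts, vals, np.array([30.0, 30.0]))
```

    The Extended Kriging idea would augment each coordinate pair with spectral values from the imagery, so that distances, and hence the variogram, act in the combined spatial-spectral space.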

  7. Simulating hydrodynamics and ice cover in Lake Erie using an unstructured grid model

    NASA Astrophysics Data System (ADS)

    Fujisaki-Manome, A.; Wang, J.

    2016-02-01

    An unstructured grid Finite-Volume Coastal Ocean Model (FVCOM) is applied to Lake Erie to simulate seasonal ice cover. The model is coupled with an unstructured-grid, finite-volume version of the Los Alamos Sea Ice Model (UG-CICE). We replaced the original 2-time-step Euler forward scheme in the time integration by the central difference (i.e., leapfrog) scheme to ensure neutral inertial stability. The modified version of FVCOM coupled with the ice model is applied to the shallow freshwater lake in this study using unstructured grids to represent the complicated coastline in the Laurentian Great Lakes and refining the spatial resolution locally. We conducted multi-year simulations in Lake Erie from 2002 to 2013. The results were compared with the observed ice extent, water surface temperature, ice thickness, currents, and water temperature profiles. Seasonal and interannual variation of ice extent and water temperature was captured reasonably, while the modeled thermocline was somewhat diffusive. The modeled ice thickness tends to be systematically thinner than the observed values. The modeled lake currents compared well with measurements obtained from an Acoustic Doppler Current Profiler located in the deep part of the lake, whereas the simulated currents deviated from measurements near the surface, possibly due to the model's inability to reproduce the sharp thermocline during the summer and the lack of detailed representation of offshore wind fields in the interpolated meteorological forcing.
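
    The motivation for the leapfrog replacement can be seen on the inertial-oscillation equation du/dt = iωu: the central-difference scheme preserves the oscillation amplitude (neutral stability) for |ωΔt| < 1, whereas a forward Euler step amplifies it. A minimal numerical sketch:

```python
import numpy as np

# Inertial oscillation du/dt = i*omega*u stepped with the leapfrog
# (central-difference) scheme: both of its modes have amplification
# factors of unit modulus for |omega*dt| < 1, so the amplitude neither
# damps nor grows over many steps.
omega, dt, nsteps = 1.0, 0.1, 1000
u = np.zeros(nsteps + 1, dtype=complex)
u[0] = 1.0
u[1] = np.exp(1j * omega * dt)          # exact first step to start leapfrog
for k in range(1, nsteps):
    u[k + 1] = u[k - 1] + 2j * omega * dt * u[k]

amplitude_drift = abs(abs(u[-1]) - 1.0)  # remains small: neutral stability
```

    A forward Euler step with the same ωΔt would instead grow the amplitude by a factor (1 + (ωΔt)²)¹ᐟ² every step.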

  8. A Novel Approach to Visualizing Dark Matter Simulations.

    PubMed

    Kaehler, R; Hahn, O; Abel, T

    2012-12-01

    In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
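
    The phase-space density computation at the heart of the method can be sketched for a single cell: each tetrahedron carries a fixed tracer mass, and its current volume sets the density (an illustrative fragment, not the paper's GPU code):

```python
import numpy as np

def tet_density(verts, mass=1.0):
    """Density from one tetrahedral phase-space cell: the mass it
    carries divided by its current volume (|det| of edge vectors / 6).
    As gravity deforms the mesh, shrinking cells yield high densities."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in verts)
    vol = abs(np.linalg.det(np.column_stack([b - a, c - a, d - a]))) / 6.0
    return mass / vol

# A cell compressed to a tenth of its height along one axis has its
# density boosted by the same factor of ten
flat = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0.1]]
density = tet_density(flat)
```

    When several such cells overlap at one location, their densities add, which is how the method captures caustics where streams of particles cross.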

  9. Spatial interpolation quality assessments for soil sensor transect datasets

    USDA-ARS?s Scientific Manuscript database

    Near-ground geophysical soil sensors provide extremely valuable information for precision agriculture applications. Indeed, their readings can be used as proxy for many soil parameters. Typically, leave-one-out (loo) cross-validation (CV) of spatial interpolation of sensor data returns overly optimi...
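
    Leave-one-out cross-validation of a spatial interpolator predicts each sample from all the others and scores the errors; on densely sampled transects the left-out point always has a very close neighbour, which is one reason loo scores can look optimistic. A minimal sketch using inverse-distance weighting as a stand-in interpolator (the transect data and power parameter are illustrative, not from the manuscript):

```python
import math

def idw(x_obs, z_obs, x, power=2.0):
    # Inverse-distance-weighted estimate at x (a hypothetical stand-in
    # for whatever sensor-data interpolator is being cross-validated).
    num = den = 0.0
    for xi, zi in zip(x_obs, z_obs):
        d = abs(x - xi)
        if d < 1e-12:
            return zi  # exact at a data point
        w = d ** -power
        num += w * zi
        den += w
    return num / den

def loo_rmse(x_obs, z_obs):
    # Leave-one-out CV: predict each sample from all the others.
    errs = []
    for k in range(len(x_obs)):
        xs = x_obs[:k] + x_obs[k + 1:]
        zs = z_obs[:k] + z_obs[k + 1:]
        errs.append(idw(xs, zs, x_obs[k]) - z_obs[k])
    return math.sqrt(sum(e * e for e in errs) / len(errs))

x = [float(i) for i in range(10)]          # transect positions
z = [math.sin(0.5 * xi) for xi in x]       # toy sensor readings
print(loo_rmse(x, z))
```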

  10. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    NASA Astrophysics Data System (ADS)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  11. Rtop - an R package for interpolation along the stream network

    NASA Astrophysics Data System (ADS)

    Skøien, J. O.

    2009-04-01

    Geostatistical methods have been used only to a limited extent for estimation along stream networks, with a few exceptions (Gottschalk, 1993; Gottschalk et al., 2006; Sauquet et al., 2000; Skøien et al., 2006). Interpolation of runoff characteristics is more complicated than for the traditional random variables estimated by geostatistical methods, as the measurements have a more complicated support and many catchments are nested. Skøien et al. (2006) presented the Top-kriging model, which takes these effects into account for the interpolation of stream flow characteristics (exemplified by the 100-year flood). The method has here been implemented as a package in the statistical environment R (R Development Core Team, 2004). By taking advantage of R's existing methods for working with spatial objects and its extensive possibilities for visualizing results, this implementation makes it considerably easier to apply the method to new data sets than earlier implementations of the method.
    References:
    Gottschalk, L. 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281.
    Gottschalk, L., I. Krasovskaia, E. Leblois, and E. Sauquet. 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484.
    R Development Core Team. 2004. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
    Sauquet, E., L. Gottschalk, and E. Leblois. 2000. Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815.
    Skøien, J. O., R. Merz, and G. Blöschl. 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  12. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. Measuring scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we derive a projection correlation (PC) to exploit the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. A PC-VI based moving BSA method is developed, in which PC-VI is employed instead of spatial interpolation and new moving modes are designed, greatly improving the reliability and practicality of the moving BSA method. Evaluation is performed on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI effectively exploits CB redundancy and therefore has further potential in CBCT studies.
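
    The guiding principle can be demonstrated with a toy experiment: when the detector profile contains high spatial frequencies but varies slowly with view angle, interpolating a blocked pixel from the same detector position in adjacent views is far more accurate than interpolating from spatial neighbours in the same view. This sketch is not the PC-VI algorithm itself, just plain linear view interpolation on a hypothetical signal:

```python
import math

# Toy projections p(angle, detector): a high-frequency detector profile
# that drifts slowly with view angle (hypothetical test signal).
def projection(angle, n=64):
    return [math.sin(0.8 * i + 0.01 * angle) for i in range(n)]

def view_interpolate(prev_view, next_view, idx):
    # Estimate a blocked sample from the SAME detector position in the
    # two adjacent angular views.
    return 0.5 * (prev_view[idx] + next_view[idx])

def spatial_interpolate(view, idx):
    # Conventional estimate from spatial neighbours within the same view.
    return 0.5 * (view[idx - 1] + view[idx + 1])

p_prev, p_cur, p_next = projection(0.0), projection(1.0), projection(2.0)
idx = 10  # a detector pixel "blocked" by the beam stop
err_view = abs(view_interpolate(p_prev, p_next, idx) - p_cur[idx])
err_spatial = abs(spatial_interpolate(p_cur, idx) - p_cur[idx])
print(err_view, err_spatial)
```

    For this signal the view-interpolation error is several orders of magnitude smaller than the spatial-interpolation error, because the high frequencies live along the detector axis, not the angular axis.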

  13. An Immersed Boundary method with divergence-free velocity interpolation and force spreading

    NASA Astrophysics Data System (ADS)

    Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.

    2017-10-01

    The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure and to spread structural forces to the fluid. It is well known that the conventional IB method can suffer from poor volume conservation, since the interpolated Lagrangian velocity field is not generally divergence-free, which can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced when there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method achieves a substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains; its generalization to non-periodic domains is important future work.
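
    The paper's divergence-free schemes are more involved, but the conventional interpolation/spreading pair they generalize is easy to sketch in 1D with Peskin's standard 4-point regularized delta function, including the adjointness property (spreading is the adjoint of interpolation) that the new schemes preserve. Grid size, spacing, and the Lagrangian point are illustrative:

```python
import math

def phi4(r):
    # Peskin's 4-point regularized delta function (argument in grid units).
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def interp(u, h, X):
    # Interpolate grid velocity u (at x_i = i*h) to the Lagrangian point X.
    return sum(ui * phi4((i * h - X) / h) for i, ui in enumerate(u))

def spread(F, h, X, n):
    # Adjoint operation: spread the Lagrangian force F to the grid.
    return [F * phi4((i * h - X) / h) / h for i in range(n)]

n, h, X = 32, 0.1, 1.234
u = [math.sin(i * h) for i in range(n)]
f = spread(2.5, h, X, n)
# Adjointness: <spread(F), u>_grid == F * interp(u)(X)
lhs = sum(fi * ui for fi, ui in zip(f, u)) * h
rhs = 2.5 * interp(u, h, X)
```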

  14. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system consisting of three main parts: a single-beam sonar Hydrostar 4300 and GPS devices (an Ashtech Promark 500 base and a Thales Z-Max rover). A total of 12,851 points were gathered. In order to obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were: to compare the efficiency of 16 different interpolation methods, to identify the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic interpolation method was ROF multi-quadratic, and the best geostatistical method was ordinary cokriging; the mean quadratic error of both was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, along with scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research provides new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
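
    Once an interpolated depth raster exists, surface area and volume follow directly: area is the cell size times the count of wet cells, and volume is the sum of depth times cell area. A minimal sketch on a tiny hypothetical grid (the 25 m cell size and depth values are illustrative, not from the Lake Vrana survey):

```python
# Lake volume and surface area from a gridded depth model
# (depths in m, positive down; toy 4x4 grid).
cell = 25.0  # m, grid resolution (illustrative)

depth = [
    [0.0, 0.5, 0.8, 0.2],
    [0.4, 1.5, 2.0, 0.6],
    [0.3, 1.2, 1.8, 0.5],
    [0.0, 0.2, 0.4, 0.0],
]

# Wet area: cell area summed over cells with positive depth.
area = sum(cell * cell for row in depth for d in row if d > 0.0)
# Volume: depth * cell area summed over all cells.
volume = sum(d * cell * cell for row in depth for d in row)
print(area, volume)
```

    Comparing the same two sums over rasters built with different interpolators is exactly the kind of raster-model comparison the study performs.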

  15. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized with the heart rate, which results in a sparsely and irregularly sampled dataset, so a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by three different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phases (τ, range: 0-1). Four different distributions of rotated 2D images, with 600, 900, 1350, and 1800 2D input images, were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by an exhaustive search. The RMS gray-value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the ranges σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.
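
    Normalized convolution estimates a value by dividing the kernel-weighted signal (signal times per-sample certainty) by the kernel-weighted certainty, which handles irregular sampling gracefully. A minimal 1D sketch with a Gaussian kernel (the sample positions, certainties, and kernel width are illustrative; in the paper the kernel is 2D over angle θ and cardiac phase τ):

```python
import math

def gauss(r, sigma):
    return math.exp(-0.5 * (r / sigma) ** 2)

def nc_estimate(xs, vals, cert, x, sigma):
    # Normalized convolution: (signal * certainty convolved with kernel)
    # divided by (certainty convolved with kernel).
    num = sum(c * v * gauss(x - xi, sigma) for xi, v, c in zip(xs, vals, cert))
    den = sum(c * gauss(x - xi, sigma) for xi, v, c in zip(xs, vals, cert))
    return num / den

xs = [0.0, 0.7, 1.1, 2.3, 3.0]      # irregular sample positions
vals = [5.0, 5.0, 5.0, 5.0, 5.0]    # constant test signal
cert = [1.0, 0.2, 1.0, 0.5, 1.0]    # per-sample certainty
est = nc_estimate(xs, vals, cert, 1.5, sigma=0.5)
```

    A constant signal is reproduced exactly regardless of the certainty map, which is the basic sanity check for this zeroth-order form of NC.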

  16. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n^3) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes generate repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n^2) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms capable of real-time spatial interpolation with large streaming data sets.
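
    The baseline the paper's strategies accelerate is the dense ordinary Kriging solve: an (n+1)×(n+1) system with a Lagrange multiplier enforcing that the weights sum to one. A minimal 1D sketch with an exponential covariance model (the covariance model, sill, range, and data are illustrative assumptions):

```python
import numpy as np

def cov(h, sill=1.0, rng=2.0):
    # Exponential covariance model (assumed for illustration).
    return sill * np.exp(-np.abs(h) / rng)

def ordinary_kriging(x_obs, z_obs, x0):
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
    A[n, n] = 0.0                 # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = cov(x_obs - x0)
    sol = np.linalg.solve(A, b)   # the O(n^3) dense solve
    return sol[:n] @ z_obs        # weighted sum of observations

x_obs = np.array([0.0, 1.0, 2.5, 4.0])
z_obs = np.array([1.0, 2.0, 0.5, 1.5])
print(ordinary_kriging(x_obs, z_obs, 1.0))
```

    With no nugget effect, Kriging is an exact interpolator: querying at an observation location returns the observed value. The incremental and recursive strategies in the paper reuse work from this solve across stream iterations instead of repeating it from scratch.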

  17. An improved adaptive interpolation clock recovery loop based on phase splitting algorithm for coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    Traditional clock recovery schemes achieve timing adjustment by digital interpolation, thereby recovering the sampling sequence. Building on this, an improved clock recovery architecture with joint channel equalization for coherent optical communication systems is presented in this paper. The loop differs from traditional clock recovery. To reduce the interpolation error caused by distortion in the frequency domain of the interpolator, and to suppress the spectral mirroring generated by the sampling-rate change, the proposed algorithm jointly performs equalization, improves the original interpolator in the loop together with adaptive filtering, and compensates errors in the original signals according to the equalized pre-filtered signals. The signals are then adaptively interpolated through the feedback loop. Furthermore, the phase-splitting timing recovery algorithm is adopted in this paper. The timing error is calculated according to the improved algorithm when there is no transition between adjacent symbols, making the calculated timing error more accurate. Meanwhile, a carrier coarse-synchronization module is placed before timing recovery to eliminate larger frequency-offset interference, which effectively adjusts the sampling clock phase. The simulation results show that the timing error is greatly reduced after the loop is modified. Based on the phase-splitting algorithm, the BER and MSE are better than those of the unmodified architecture. In the fiber channel, using an MQAM modulation format, after 100 km transmission over single-mode fiber, and especially as the ROF (roll-off factor) tends to 0, the algorithm shows better clock performance under different ROFs. When SNR values are less than 8, the BER reaches the 10^-2 to 10^-1 range. The proposed timing recovery is thus more suitable for situations with low SNR.
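
    The digital interpolator at the heart of such a timing-recovery loop evaluates the signal between samples at a fractional offset mu supplied by the loop filter. A common choice (not necessarily the one in this paper) is a cubic Lagrange interpolator in Farrow form, sketched here:

```python
def cubic_interp(s, k, mu):
    # Cubic Lagrange interpolation between samples s[k] and s[k+1] at
    # fractional offset mu in [0, 1), written in Farrow (polynomial) form.
    sm1, s0, s1, s2 = s[k - 1], s[k], s[k + 1], s[k + 2]
    c0 = s0
    c1 = s1 - sm1 / 3.0 - s0 / 2.0 - s2 / 6.0
    c2 = (sm1 + s1) / 2.0 - s0
    c3 = (s2 - sm1) / 6.0 + (s0 - s1) / 2.0
    # Horner evaluation of c3*mu^3 + c2*mu^2 + c1*mu + c0
    return ((c3 * mu + c2) * mu + c1) * mu + c0

# The interpolator is exact for signals that are locally cubic:
s = [float(i ** 3) for i in range(6)]
val = cubic_interp(s, 2, 0.5)   # value of i^3 at i = 2.5
```

    In the loop, the phase-splitting timing error detector drives mu (and the choice of base index k) so that the interpolated sequence lands on the symbol instants.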

  18. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing, and Gaussian kernel smoothing. Interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
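
    The interpolation-versus-smoothing distinction is easy to see in code: a Gaussian kernel smoother (Nadaraya-Watson form) returns a weighted mean of all counts and does not have to pass through the observed values, so an outlier-like sample is damped rather than reproduced. A minimal 2D sketch (sampling points, counts, and bandwidth are illustrative, and this is not the R script from the paper):

```python
import math

def gaussian_smooth(pts, counts, q, sigma):
    # Gaussian kernel smoothing (Nadaraya-Watson): a kernel-weighted mean
    # of ALL counts; unlike interpolation, the surface need not pass
    # through the observed values, which damps sampling noise.
    num = den = 0.0
    for (x, y), c in zip(pts, counts):
        w = math.exp(-((q[0] - x) ** 2 + (q[1] - y) ** 2) / (2 * sigma ** 2))
        num += w * c
        den += w
    return num / den

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
counts = [100.0, 120.0, 110.0, 500.0]   # one outlier-like sample
center = gaussian_smooth(pts, counts, (0.5, 0.5), sigma=1.0)
at_outlier = gaussian_smooth(pts, counts, (1, 1), sigma=0.3)
```

    At the centre (equidistant from all samples) the estimate is the plain mean; at the outlier's own location the smoothed value falls short of 500, whereas any exact interpolator would return 500 there.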

  19. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. We find that the proposed technique gives the most accurate results and that its computational time is short. This simple method thus makes it feasible to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
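
    The no-overshoot property comes from Steffen's slope limiter: each node slope is clipped against the adjacent secant slopes before piecewise cubic Hermite evaluation. A Python sketch of the idea, based on my reading of Steffen (1990), with a simplified one-sided endpoint treatment (the original paper, and this paper's C++ code, may treat the ends differently; the log-log transform step is also omitted here):

```python
def steffen_slopes(x, y):
    # Monotonicity-limited node slopes (Steffen 1990). Endpoints use
    # plain one-sided secants here, a simplification.
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    s = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    d = [s[0]] + [0.0] * (n - 2) + [s[-1]]
    for i in range(1, n - 1):
        p = (s[i - 1] * h[i] + s[i] * h[i - 1]) / (h[i - 1] + h[i])
        sgn = ((s[i - 1] > 0) - (s[i - 1] < 0)) + ((s[i] > 0) - (s[i] < 0))
        d[i] = sgn * min(abs(s[i - 1]), abs(s[i]), 0.5 * abs(p))
    return d

def steffen_eval(x, y, xq):
    # Piecewise cubic Hermite evaluation with the limited slopes.
    d = steffen_slopes(x, y)
    i = 0
    while i < len(x) - 2 and xq > x[i + 1]:
        i += 1
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0]   # flat-step data that trips naive cubics
```

    On this flat-step data an unlimited cubic spline would oscillate below zero and above one; the limited slopes force the curve to stay inside the data range while still passing through every point.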

  20. An updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Tu, Qingsong; Li, Shaofan

    2017-11-01

    In this work, we have developed an updated Lagrangian particle hydrodynamics (ULPH) for Newtonian fluids. Unlike smoothed particle hydrodynamics, the non-local particle hydrodynamics formulation proposed here is consistent and convergent. Unlike state-based peridynamics, the discrete particle dynamics proposed here has no internal material bond between particles, and it is not formulated with respect to an initial or fixed referential configuration. Specifically, we have shown that (1) the non-local updated Lagrangian particle hydrodynamics formulation converges to the conventional local fluid mechanics formulation; (2) the non-local updated Lagrangian particle hydrodynamics can capture arbitrary flow discontinuities without any changes in the formulation; and (3) the proposed non-local particle hydrodynamics is computationally efficient and robust.
