Sample records for smooth phase interpolated

  1. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    Radial basis functions φ(r), piecewise smooth (conditionally positive definite): monomial |r|^(2m+1), thin plate spline (TPS) |r|^(2m) ln|r|; infinitely smooth: ...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29].

  2. First-Order-hold interpolation digital-to-analog converter with application to aircraft simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, W. B.

    1976-01-01

    Those who design piloted aircraft simulations must contend with the finite size and speed of the available digital computer and the requirement for simulation realism. With a fixed computational plant, the more complex the model, the more computing cycle time is required. While increasing the cycle time may not degrade the fidelity of the simulated aircraft dynamics, the larger steps in the pilot cue feedback variables (such as the visual scene cues) may be disconcerting to the pilot. The first-order-hold interpolation (FOHI) digital-to-analog converter (DAC) is presented as a device which offers smooth output regardless of cycle time. The Laplace transforms of the three conversion types considered (zero-order hold (ZOH), first-order-hold extrapolation (FOHE), and FOHI) are developed, and their frequency response characteristics and output smoothness are compared. The FOHI DAC exhibits a pure one-cycle delay. Whenever the FOHI DAC input comes from a second-order (or higher) system, a simple computer software technique can be used to compensate for the DAC phase lag. When so compensated, the FOHI DAC has (1) an output signal that is very smooth, (2) a flat frequency response in frequency ranges of interest, and (3) no phase error. When the input comes from a first-order system, software compensation may cause the FOHI DAC to perform as an FOHE DAC, which, although its output is not as smooth as that of the FOHI DAC, has a smoother output than that of the ZOH DAC.
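
    The record describes the device at the signal level; a minimal NumPy sketch of the contrast between a zero-order hold and first-order-hold interpolation (the function names and test signal are illustrative, not from the report) might look like this:

    ```python
    import numpy as np

    def zoh(samples, oversample):
        """Zero-order hold: hold each sample constant for one cycle (staircase)."""
        return np.repeat(samples, oversample)

    def fohi(samples, oversample):
        """First-order-hold interpolation: ramp linearly from the previous sample
        to the current one over the following cycle. The output is smooth but
        arrives with a pure one-cycle delay, as the abstract notes."""
        out = [np.linspace(prev, cur, oversample, endpoint=False)
               for prev, cur in zip(samples[:-1], samples[1:])]
        return np.concatenate(out)

    t = np.arange(0.0, 1.0, 0.05)      # coarse simulation cycles
    x = np.sin(2 * np.pi * 2 * t)      # a pilot-cue variable sampled once per cycle
    smooth_out = fohi(x, 32)           # piecewise-linear output, one cycle late
    stepped_out = zoh(x, 32)           # staircase output for comparison
    ```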

  3. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
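
    The paper's analyses are implemented as an R script; a rough Python analogue of the central contrast, thin plate spline interpolation versus thin plate spline smoothing, can be sketched with SciPy's RBFInterpolator on synthetic data, where smoothing=0 honours every (noisy) count exactly and larger values trade fidelity at the data points for a smoother density surface:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, size=(200, 2))           # sampling locations on the retina
    dens = np.exp(-8 * ((pts - 0.5) ** 2).sum(1))    # synthetic density field
    dens += rng.normal(0, 0.05, size=200)            # plus counting "noise"

    grid = np.mgrid[0:1:100j, 0:1:100j].reshape(2, -1).T

    tps_interp = RBFInterpolator(pts, dens, kernel='thin_plate_spline', smoothing=0.0)
    tps_smooth = RBFInterpolator(pts, dens, kernel='thin_plate_spline', smoothing=1.0)

    map_interp = tps_interp(grid).reshape(100, 100)  # honours outliers and artefacts
    map_smooth = tps_smooth(grid).reshape(100, 100)  # suppresses them
    ```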

  5. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.
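
    For reference, the continuity equation that OFI is said to obey, with ρ the conserved quantity (mass, image intensity, heat, probability density, etc.) and v the flow field:

    ```latex
    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0
    ```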

  6. Hermite-Birkhoff interpolation in the nth roots of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.

    1980-06-01

    Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Pólya conditions, which are necessary for unique interpolation, are in this setting also sufficient.

  7. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
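
    The abstract is truncated, but a classic stochastic interpolator that preserves irregularity is midpoint displacement consistent with fractional Brownian motion; the following one-dimensional sketch (Gaussian increments and a Hurst exponent H are assumptions, not details from the EPA report) illustrates the idea:

    ```python
    import numpy as np

    def fbm_midpoint_interpolate(x0, x1, levels, H=0.5, sigma=1.0, rng=None):
        """Recursively fill between two measured values by midpoint displacement:
        each midpoint gets the local average plus Gaussian noise whose variance
        shrinks by 2**(-2H) per level, so the result stays irregular (fractal)
        rather than smooth. H is the Hurst exponent controlling roughness."""
        rng = rng or np.random.default_rng()
        vals = np.array([x0, x1], dtype=float)
        scale = sigma
        for _ in range(levels):
            mids = 0.5 * (vals[:-1] + vals[1:]) + rng.normal(0, scale, len(vals) - 1)
            out = np.empty(2 * len(vals) - 1)
            out[::2], out[1::2] = vals, mids
            vals, scale = out, scale * 2 ** (-H)
        return vals

    profile = fbm_midpoint_interpolate(3.2, 7.9, levels=6, H=0.3)  # 65 rough values
    ```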

  8. Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.

    2005-01-01

    For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdős weights) and even weights which vanish strongly near ±1, for example Pollaczek-type weights.

  9. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
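
    The abstract does not spell out the algorithm; one common way to realize wavelet-smoothed filling of masked samples is to iterate between damping the detail coefficients and re-imposing the known data. A minimal sketch with PyWavelets (the wavelet, level, and damping rule are assumptions, not the reported implementation):

    ```python
    import numpy as np
    import pywt

    def wavelet_fill(data, mask, wavelet='db2', level=3, iters=50, damp=0.5):
        """Iteratively estimate masked samples: damp the wavelet detail
        coefficients (a smoothing projection), invert the transform, then
        re-impose the known samples. `mask` is True where data is defined."""
        filled = np.where(mask, data, data[mask].mean())   # crude initial fill
        for _ in range(iters):
            coeffs = pywt.wavedec2(filled, wavelet, level=level)
            coeffs = [coeffs[0]] + [tuple(damp * d for d in det) for det in coeffs[1:]]
            smooth = pywt.waverec2(coeffs, wavelet)[:data.shape[0], :data.shape[1]]
            filled = np.where(mask, data, smooth)          # keep real samples exact
        return filled
    ```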

  10. Catmull-Rom Curve Fitting and Interpolation Equations

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2010-01-01

    Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
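
    For concreteness, the standard uniform Catmull-Rom segment has exactly the constant-coefficient cubic form the article refers to; a small Python version:

    ```python
    import numpy as np

    def catmull_rom(p0, p1, p2, p3, t):
        """Uniform Catmull-Rom segment between p1 and p2, for t in [0, 1].
        The curve passes through p1 at t=0 and p2 at t=1; p0 and p3 set
        the tangents."""
        p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
        return 0.5 * ((2 * p1)
                      + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

    pt = catmull_rom([0, 0], [1, 1], [2, 0], [3, 1], 0.5)  # midpoint of the segment
    ```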

  11. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while accounting for multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.

  12. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.

  13. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as 'physically reasonable' and 'visually pleasing' is examined.
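
    The one-dimensional prototype of such shape-preserving interpolation is the Fritsch-Carlson monotone cubic scheme, available in SciPy as PchipInterpolator; the sketch below (synthetic data) contrasts it with an ordinary cubic spline, which can develop unsupported wiggles near sharp changes:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline, PchipInterpolator

    # monotone EOS-like data: pressure rising steeply with density (synthetic)
    d = np.array([0.0, 1.0, 2.0, 2.5, 3.0, 5.0])
    p = np.array([0.0, 0.1, 0.2, 3.0, 3.1, 3.2])

    x = np.linspace(0.0, 5.0, 500)
    spline = CubicSpline(d, p)(x)        # C2-smooth but may overshoot (wiggles)
    pchip = PchipInterpolator(d, p)(x)   # C1-smooth and preserves monotonicity

    assert np.all(np.diff(pchip) >= -1e-12)   # no spurious bumps
    ```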

  14. A robust interpolation procedure for producing tidal current ellipse inputs for regional and coastal ocean numerical models

    NASA Astrophysics Data System (ADS)

    Byun, Do-Seong; Hart, Deirdre E.

    2017-04-01

    Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence of tidal ellipse interpolation and phase errors and to avoid them. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used convention can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
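
    The core of the recommended fix fits in a few lines: convert the angular parameter to Cartesian components, interpolate those, and convert back, so the 359°-to-0° wrap is the smooth transition it is in nature. A NumPy sketch (a 1-D interpolation stands in for the spatial gridding; inclinations, which wrap at 180°, would need the analogous treatment, e.g. after doubling the angle):

    ```python
    import numpy as np

    def interp_phase_deg(x, xp, phase_deg):
        """Interpolate an angular quantity via its Cartesian components, so the
        359-to-0 degree wrap is a smooth transition rather than a spurious jump."""
        rad = np.deg2rad(phase_deg)
        c = np.interp(x, xp, np.cos(rad))
        s = np.interp(x, xp, np.sin(rad))
        return np.rad2deg(np.arctan2(s, c)) % 360.0

    # naive linear interpolation midway between 350 and 10 degrees gives 180;
    # the component-wise version correctly gives ~0
    print(interp_phase_deg(0.5, [0.0, 1.0], [350.0, 10.0]))
    ```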

  15. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first is based on the spatial interpolation of pointwise extremal distributions (the so-called generalized extreme value distribution, GEV henceforth) computed at station locations. The second is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitude-dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
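
    The station-wise half of such a comparison is straightforward with SciPy: fit a GEV to a station's annual maxima and read return levels off as quantiles. A sketch on synthetic data (the paper's spatially smooth GEV model is a joint fit across stations and is not reproduced here):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    # 43 synthetic annual snow-depth maxima at one station, in cm
    annual_max = genextreme.rvs(c=-0.1, loc=100, scale=30, size=43, random_state=rng)

    # pointwise GEV fit (note: scipy's shape c is the negative of the usual xi)
    c, loc, scale = genextreme.fit(annual_max)

    T = 50
    z50 = genextreme.ppf(1 - 1 / T, c, loc, scale)  # 50-year return level
    ```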

  16. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by blending the contributions of the forward and backward MVs. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.

  17. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting-error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.

  18. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
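
    Of the five methods, IDW is simple enough to state in full; a compact NumPy sketch with the usual power parameter of 2:

    ```python
    import numpy as np

    def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
        """Inverse distance weighting: each prediction is a weighted mean of the
        observations, with weights 1 / distance**power."""
        d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * z_obs).sum(axis=1) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # station coordinates
    temps = np.array([10.0, 12.0, 9.0])                        # observed temperatures
    print(idw(stations, temps, np.array([[0.5, 0.5]])))        # interpolated value
    ```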

  19. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.

  20. Power transformations improve interpolation of grids for molecular mechanics interaction energies.

    PubMed

    Minh, David D L

    2018-02-18

    A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retaining the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
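
    A sketch of the strategy, assuming a sign-preserving power transform and SciPy's RegularGridInterpolator (trilinear by default on a 3-D grid); the grid, energies, and the exact form of the transform are illustrative, not taken from the paper:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def signed_power(e, p):
        """Sign-preserving power transform |e|**(1/p) * sign(e), which flattens
        sharp energy peaks before interpolation."""
        return np.sign(e) * np.abs(e) ** (1.0 / p)

    # hypothetical energies on a coarse 3-D grid, sharply peaked at the center
    ax = [np.linspace(0, 1, 11)] * 3
    X, Y, Z = np.meshgrid(*ax, indexing='ij')
    energy = 1.0 / (0.05 + (X - 0.5) ** 2 + (Y - 0.5) ** 2 + (Z - 0.5) ** 2)

    p = 4.0
    interp = RegularGridInterpolator(ax, signed_power(energy, p))  # trilinear

    def energy_at(points):
        t = interp(points)                   # interpolate in transformed space
        return np.sign(t) * np.abs(t) ** p   # inverse transform to energy scale

    print(energy_at(np.array([[0.43, 0.52, 0.51]])))
    ```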

  1. Meshless Modeling of Deformable Shapes and their Motion

    PubMed Central

    Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.

    2010-01-01

    We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614

  2. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing error results show that interpolation using a fourth-order polynomial provides the best fit to option prices, yielding the lowest error.

  3. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  4. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using cubic spline interpolation. To accurately obtain the interpolation nodes, a smoothing fit and a set of discriminant formulas were adopted. From these interpolation nodes, the background is estimated by the calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The results confirm that the method can properly subtract the background.
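
    Once the interpolation nodes are chosen, the subtraction step is a few lines with SciPy; the node-selection rule itself (the paper's smoothing fit and discriminant formulas) is not reproduced here:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def subtract_background(energy, counts, node_idx):
        """Estimate the continuum through pre-selected background nodes with a
        cubic spline, then subtract it from the spectrum. `node_idx` holds sorted
        indices of channels assumed to be peak-free."""
        bg = CubicSpline(energy[node_idx], counts[node_idx])(energy)
        return np.clip(counts - bg, 0, None)   # negative residuals clipped to zero
    ```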

  5. An n -material thresholding method for improving integerness of solutions in topology optimization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2016-04-10

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty (SIMP) and Rational Approximation of Material Properties (RAMP) material interpolation functions, popular methods of ensuring integerness of solutions.

  6. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, where some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. For the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on noisy lidar-derived data indicate that TPS-M has an obvious smoothing effect, which is on par with that of smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.

  7. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    PubMed

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and aids in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied to predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram-positive and Gram-negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
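
    The underlying language-model technique mixes maximum-likelihood estimates of several orders with weights that sum to one; a generic sketch over count tables (the paper's feature extraction from protein sequences is not shown):

    ```python
    from collections import Counter

    def interpolated_prob(w, h1, h2, unigrams, bigrams, trigrams, lambdas):
        """Linear interpolation smoothing: mix trigram, bigram and unigram
        maximum-likelihood estimates with weights (l1, l2, l3) summing to one,
        typically tuned on held-out data. Counters return 0 for unseen events."""
        l1, l2, l3 = lambdas
        n = sum(unigrams.values())
        p1 = unigrams[w] / n if n else 0.0
        p2 = bigrams[(h2, w)] / unigrams[h2] if unigrams[h2] else 0.0
        p3 = trigrams[(h1, h2, w)] / bigrams[(h1, h2)] if bigrams[(h1, h2)] else 0.0
        return l1 * p1 + l2 * p2 + l3 * p3

    # toy counts over a residue alphabet; in practice built from training sequences
    uni = Counter("AAGCTAG")
    bi = Counter(zip("AAGCTAG", "AGCTAGA"))
    tri = Counter(zip("AAGCTAG", "AGCTAGA", "GCTAGAA"))
    print(interpolated_prob("G", "A", "A", uni, bi, tri, (0.2, 0.3, 0.5)))
    ```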

  8. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.

  9. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
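
    Concretely, the zero-temperature GC ensemble yields the familiar piecewise-linear interpolation between integer-electron energies,

    ```latex
    E(N_0 + \omega) = (1 - \omega)\,E(N_0) + \omega\,E(N_0 + 1), \qquad 0 \le \omega \le 1,
    ```

    which is continuous but not differentiable at integer N; any smooth E vs. N curve therefore cannot arise from such an ensemble.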

  10. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results when compared with two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
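
    The two ingredients named in the title combine naturally in SciPy; a minimal sketch (the window length, polynomial order, and test scan line are assumptions, and the paper's SNR estimator built on top is omitted):

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.interpolate import CubicSpline

    def smooth_profile(row):
        """Savitzky-Golay noise reduction followed by cubic-spline resampling,
        the two components the paper combines for SEM scan lines."""
        x = np.arange(row.size)
        denoised = savgol_filter(row, window_length=9, polyorder=3)
        return CubicSpline(x, denoised)

    rng = np.random.default_rng(2)
    line = np.sin(np.linspace(0, 3, 64)) + rng.normal(0, 0.1, 64)  # noisy scan line
    fine = smooth_profile(line)(np.linspace(0, 63, 640))           # 10x resampled
    ```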

  11. Registration-based interpolation applied to cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael S.; Lyksborg, Mark; Hansen, Mads Fogtmann; Darkner, Sune; Larsen, Rasmus

    2010-03-01

    Various approaches have been proposed for segmentation of cardiac MRI. An accurate segmentation of the myocardium and ventricles is essential to determine parameters of interest for the function of the heart, such as the ejection fraction. One problem with MRI is the poor resolution in one dimension. A 3D registration algorithm will typically use trilinear interpolation of intensities to determine the intensity of a deformed template image. Due to the poor resolution across slices, such a linear approximation is highly inaccurate, since the assumption of smooth underlying intensities is violated. Registration-based interpolation is based on 2D registrations between adjacent slices and is independent of segmentations. Hence, rather than assuming smoothness in intensity, the assumption is that the anatomy is consistent across slices. The basis for the proposed approach is the set of 2D registrations between each pair of slices, both ways. The intensity of a new slice is then weighted by (i) the deformation functions and (ii) the intensities in the warped images. Unlike the approach of Penney et al. (2004), this approach takes into account deformation both ways, which gives more robustness where correspondence between slices is poor. We demonstrate the approach on a toy example and on a set of cardiac cine MRI. Qualitative inspection reveals that the proposed approach provides a more convincing transition between slices than images obtained by linear interpolation. A quantitative validation reveals significantly lower reconstruction errors than both linear interpolation and registration-based interpolation based on one-way registrations.

  12. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of the MRF approach, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  13. Soft bilateral filtering volumetric shadows using cube shadow maps

    PubMed Central

    Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang

    2017-01-01

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving the crispness of boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling in calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points in evaluating light scattering and then introducing bilateral interpolation to improve volumetric shadows, significantly removing the inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, which shows its potential for interactive applications. PMID:28632740

  14. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique using an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely smoothing splines and fourth-order polynomials, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimation of the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, with the fourth-order polynomial giving the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.

  15. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  16. Center for Space Telemetering and Telecommunications Systems, New Mexico State University

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; DeLeon, Phillip; Borah, Deva; Lyman, Ray

    2002-01-01

    This viewgraph presentation gives an overview of the Center for Space Telemetering and Telecommunications Systems activities at New Mexico State University. Presentations cover the following topics: (1) small satellite communications, including nanosatellite radio and virtual satellite development; (2) modulation and detection studies, including details on smooth phase interpolated keying (SPIK) spectra and highlights of an adaptive turbo multiuser detector; (3) decoupled approaches to nonlinear ISI compensation; (4) space internet testing; (5) optical communication, including a Linux-based receiver for lightweight optical communications without a laser in space (software design, performance analysis, and the receiver algorithm); (6) carrier tracking hardware; and (7) subband transforms for adaptive direct sequence spread spectrum receivers.

  17. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent at removing motion-induced sharp spikes, baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.

  18. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(x_i, f_i) : i = 1, 2, ..., n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt example-based interpolation (EBI), an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real-image results show that the mechanism has promising performance, even with very few example images.

  19. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation in a statistically significant manner. The interpolation method presented is able to generate image sequences with the appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method clearly has advantages over linear and shape-based interpolation.

  20. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    A smoothing thin plate spline (STPS) interpolation using the penalty function method, in accordance with optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions on the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally, from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), since the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented to demonstrate the applicability and accuracy of the present approach in comparison with traditional thin plate spline (TPS) radial basis functions.

  1. Combinations of Earth Orientation Measurements: SPACE94, COMB94, and POLE94

    NASA Technical Reports Server (NTRS)

    Gross, Richard S.

    1996-01-01

    A Kalman filter has been used to combine independent measurements of the Earth's orientation taken by the space-geodetic observing techniques of lunar laser ranging, satellite laser ranging, very long baseline interferometry, and the Global Positioning System. Prior to their combination, the data series were adjusted to have the same bias and rate, the stated uncertainties of the measurements were adjusted, and data points considered to be outliers were deleted. The resulting combination, SPACE94, consists of smoothed, interpolated polar motion and UT1-UTC values spanning October 6, 1976, to January 27, 1995, at 1-day intervals. The Kalman filter was then used to combine the space-geodetic series comprising SPACE94 with two different, independent series of Earth orientation measurements taken by the technique of optical astrometry. Prior to their combination with SPACE94, the bias, rate and annual term of the optical astrometric series were corrected, the stated uncertainties of the measurements were adjusted, and data points considered to be outliers were deleted. The adjusted optical astrometric series were then combined with SPACE94 in two steps: (1) the Bureau International de l'Heure (BIH) optical astrometric series was combined with SPACE94 to form COMB94, a combined series of smoothed, interpolated polar motion and UT1-UTC values spanning January 20, 1962, to January 27, 1995, at 5-day intervals, and (2) the International Latitude Service (ILS) optical astrometric series was combined with COMB94 to form POLE94, a combined series of smoothed, interpolated polar motion values spanning January 20, 1900, to January 21, 1995, at 30.4375-day intervals.

  2. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
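
    A generic Gaussian-kernel (Nadaraya-Watson) interpolator captures the smoothing-versus-resolution trade-off the authors describe, with the kernel width as the tuning knob; this sketch is not their exact formulation:

```python
import numpy as np

def kernel_interpolate(depth_obs, feat_obs, depth_new, width):
    """Gaussian-kernel interpolation of feature activity versus depth;
    larger width smooths more, smaller width preserves resolution."""
    w = np.exp(-0.5 * ((depth_new[:, None] - depth_obs[None, :]) / width) ** 2)
    return (w @ feat_obs) / w.sum(axis=1)
```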

  3. Software for C1 interpolation

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1977-01-01

    The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
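
    SciPy's Clough-Tocher scheme solves the same problem, C^1-continuous interpolation over a triangulation of scattered points, and can stand in for Lawson's software in modern workflows; the data below are synthetic:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, (50, 2))               # scattered (x, y) points
z = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])
# builds a Delaunay triangulation and estimates gradients internally,
# yielding a piecewise-cubic surface with C1 continuity overall
interp = CloughTocher2DInterpolator(pts, z)
print(interp(np.array([[0.5, 0.5]])))
```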

  4. Heat Flow Contours and Well Data Around the Milford FORGE Site

    DOE Data Explorer

    Joe Moore

    2016-03-09

    This submission contains a shapefile of heat flow contour lines around the FORGE site located in Milford, Utah. The heat flow model was interpolated from 66 data points in the Milford_wells shapefile using the kriging method in the Geostatistical Analyst tool of ArcGIS, and the resulting model was smoothed 100%. The well dataset contains 59 wells from various sources, with lat/long coordinates, temperature, quality, basement depth, and heat flow. These data were used to build models of the site's specific characteristics.
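
    An equivalent ordinary-kriging workflow can be run outside ArcGIS, for example with the pykrige package (assuming it is installed); the coordinates and heat flow values below are placeholders, not the Milford well data:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x = rng.uniform(-113.1, -112.7, 66)            # longitude, placeholder wells
y = rng.uniform(38.3, 38.6, 66)                # latitude
heatflow = rng.uniform(60.0, 200.0, 66)        # mW/m^2, placeholder values
gx = np.linspace(-113.1, -112.7, 100)
gy = np.linspace(38.3, 38.6, 100)
ok = OrdinaryKriging(x, y, heatflow, variogram_model="spherical")
grid, variance = ok.execute("grid", gx, gy)    # surface plus kriging variance
```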

  5. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  6. Simulation of asteroid motion using smoothed ephemerides DE405, DE406, DE408, DE421, DE423 and DE722 (Russian title: Прогнозирование движения астероидов с использованием сглаженных эфемерид DE405, DE406, DE408, DE421, DE423 и DE722)

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2011-07-01

    The results of smoothing the major planets' and Moon's ephemerides by cubic polynomials are presented. The ephemerides considered are DE405, DE406, DE408, DE421, DE423 and DE722. The goal of the smoothing is the elimination of discontinuous behavior of interpolated coordinates and their derivatives at the junctions of adjacent interpolation intervals when calculations are made with 34-digit decimal accuracy. The reason for such behavior is the limited 16-digit decimal accuracy of the coefficients of the interpolating Chebyshev polynomials in the ephemerides. Such discontinuity of the perturbing bodies' coordinates significantly reduces the advantages of 34-digit calculations, because the accuracy of numerical integration of the asteroids' equations of motion increases in this case by only 3 orders of magnitude compared with 16-digit calculations. It is demonstrated that the cubic-polynomial smoothing of the ephemerides eliminates the jumps of the perturbing bodies' coordinates and their derivatives, which increases the numerical integration accuracy by 7-9 orders of magnitude. All calculations in this work were made with 34-digit decimal accuracy on the computer cluster "Skif Cyberia" of Tomsk State University.

  7. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of ultrasonic phased array focusing time delay, an 8× interpolation Cascade-Integrator-Comb (CIC) filter parallel algorithm is proposed, based on an analysis of the original interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the known shortcomings of the CIC filter, we add compensation: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation higher. Finally, we verify the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
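
    A textbook (serial, non-parallel) CIC interpolator is easy to sketch in a few lines; the paper's contribution is the parallel 8× decomposition and compensation for FPGA, which this illustration does not attempt:

```python
import numpy as np

def cic_interpolate(x, R=8, N=3):
    """Serial CIC interpolator: N comb stages at the low rate,
    zero-stuffing by R, then N integrator stages at the high rate."""
    y = x.astype(float)
    for _ in range(N):                     # comb stages: y[n] - y[n-1]
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * R)
    up[::R] = y                            # zero-stuff to the high rate
    for _ in range(N):                     # integrator stages: running sums
        up = np.cumsum(up)
    return up / R ** (N - 1)               # remove the DC gain R**N / R
```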

  9. Wavelet-based group and phase velocity measurements: Method

    NASA Astrophysics Data System (ADS)

    Yang, H. Y.; Wang, W. W.; Hung, S. H.

    2016-12-01

    Measurements of group and phase velocities of surface waves are often carried out by applying a series of narrow bandpass or stationary Gaussian filters localized at specific frequencies to wave packets and estimating the corresponding arrival times from the peak envelopes and the phases of the Fourier spectra. However, it is known that seismic waves are inherently nonstationary and not well represented by a sum of sinusoids. Alternatively, a continuous wavelet transform (CWT), which decomposes a time series into a family of wavelets, translated and scaled copies of a generally fast oscillating and decaying function known as the mother wavelet, is capable of retaining localization in both the time and frequency domains and is well suited to the time-frequency analysis of nonstationary signals. Here we develop a wavelet-based method to measure frequency-dependent group and phase velocities, an essential dataset used in crust and mantle tomography. For a given time series, we employ the complex Morlet wavelet to obtain the scalogram of amplitude modulus |Wg| and phase φ on the time-frequency plane. The instantaneous frequency (IF) is then calculated by taking the derivative of phase with respect to time, i.e., (1/2π)dφ(f, t)/dt. Time windows comprising strong energy arrivals to be measured can be identified by those IFs close to the frequencies with the maximum modulus that vary smoothly and monotonically with time. The respective IFs in each selected time window are further interpolated to yield a smooth branch of ridge points, or representative IFs, at which the arrival time, t_ridge(f), and phase, φ_ridge(f), after unwrapping and correcting cycle skipping based on a priori knowledge of the possible velocity range, are determined for group and phase velocity estimation. We will demonstrate our measurement method using both ambient noise cross-correlation functions and multi-mode surface waves from earthquakes. The obtained dispersion curves will be compared with those from a conventional narrow bandpass method.
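
    The measurement chain (complex-Morlet CWT, phase derivative for the instantaneous frequency, modulus-maximum ridge) can be prototyped with PyWavelets on a toy signal; this sketch omits the unwrapping corrections and window selection the authors apply:

```python
import numpy as np
import pywt

dt = 0.05
t = np.arange(0.0, 200.0, dt)
sig = np.sin(2 * np.pi * (0.5 + 0.002 * t) * t)      # toy dispersive signal
scales = np.arange(4, 128)
coef, freqs = pywt.cwt(sig, scales, "cmor1.5-1.0", sampling_period=dt)
phase = np.unwrap(np.angle(coef), axis=1)
inst_freq = np.gradient(phase, dt, axis=1) / (2 * np.pi)  # (1/2pi) dphi/dt
ridge = np.abs(coef).argmax(axis=0)                  # modulus-maximum ridge
```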

  10. NASA Tech Briefs, April 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Wearable Environmental and Physiological Sensing Unit; Broadband Phase Retrieval for Image-Based Wavefront Sensing; Filter Function for Wavefront Sensing Over a Field of View; Iterative-Transform Phase Retrieval Using Adaptive Diversity; Wavefront Sensing With Switched Lenses for Defocus Diversity; Smooth Phase Interpolated Keying; Maintaining Stability During a Conducted-Ripple EMC Test; Photodiode Preamplifier for Laser Ranging With Weak Signals; Advanced High-Definition Video Cameras; Circuit for Full Charging of Series Lithium-Ion Cells; Analog Nonvolatile Computer Memory Circuits; JavaGenes Molecular Evolution; World Wind 3D Earth Viewing; Lithium Dinitramide as an Additive in Lithium Power Cells; Accounting for Uncertainties in Strengths of SiC MEMS Parts; Ion-Conducting Organic/Inorganic Polymers; MoO3 Cathodes for High-Temperature Lithium Thin-Film Cells; Counterrotating-Shoulder Mechanism for Friction Stir Welding; Strain Gauges Indicate Differential-CTE-Induced Failures; Antibodies Against Three Forms of Urokinase; Understanding and Counteracting Fatigue in Flight Crews; Active Correction of Aberrations of Low-Quality Telescope Optics; Dual-Beam Atom Laser Driven by Spinor Dynamics; Rugged, Tunable Extended-Cavity Diode Laser; Balloon for Long-Duration, High-Altitude Flight at Venus; and Wide-Temperature-Range Integrated Operational Amplifier.

  11. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    NASA Technical Reports Server (NTRS)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions that humans typically cannot enter, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
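
    The curve machinery itself is standard: a Bezier segment is evaluated from its control points by De Casteljau's algorithm, repeated linear interpolation between consecutive points. This generic sketch is not NASA's control-point generation method:

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

# a cubic segment between two 3-D waypoints, shaped by two interior controls
print(de_casteljau([[0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 1]], 0.5))
```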

  12. Spectral Topography Generation for Arbitrary Grids

    NASA Astrophysics Data System (ADS)

    Oh, T. J.

    2015-12-01

    A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filled by Jonathan de Ferranti) is used, and for the land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data are coarsened to an intermediate global 2-minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation are performed for isotropic control of topography smoothness. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2-minute lat-lon topography. The Gibbs phenomenon in the water region can be removed by overwriting ocean-masked target coordinate grids with interpolated values from the intermediate 2-minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested on the KIAPS Integrated Model (KIM).

  13. Effect of data gaps on correlation dimension computed from light curves of variable stars

    NASA Astrophysics Data System (ADS)

    George, Sandip V.; Ambika, G.; Misra, R.

    2015-11-01

    Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, the smoothing techniques can introduce artificial effects, especially when nonlinear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, such as the Rössler and Lorenz systems, with the frequency of occurrence and the size of the missing data drawn from two Gaussian distributions. We then study the changes in correlation dimension with changes in the distributions of the position and size of the gaps. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Our study thus introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars: R Scuti, U Monocerotis and SU Tauri. We also demonstrate how cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin; this is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the reliable range for D2 values.

  14. Kriging - a challenge in geochemical mapping

    NASA Astrophysics Data System (ADS)

    Stojdl, Jiri; Matys Grygar, Tomas; Elznicova, Jitka; Popelka, Jan; Vachova, Tatina; Hosek, Michal

    2017-04-01

    Geochemists can easily provide datasets for contamination mapping thanks to recent advances in geographical information systems (GIS) and portable chemical-analytical instrumentation. Kriging is commonly used to visualise the results of such mapping, which is understandable, as kriging is a well-established method of spatial interpolation. It was created in the 1950s for geochemical data processing, to estimate the most likely distribution of gold based on samples from a few boreholes. However, kriging is based on the assumption of a continuous spatial distribution of numeric data, which is not realistic in environmental geochemistry. The use of kriging is correct when the data density is sufficient with respect to the heterogeneity of the spatial distribution of the geochemical parameters. However, if anomalous geochemical values are concentrated in hotspots whose boundaries are sampled too sparsely, kriging can produce misleading maps in which the real contours of hotspots are blurred by data smoothing and individual (isolated) but relevant anomalous values are levelled out. Data smoothing can thus result in underestimation of geochemical extremes, which may in fact be of the greatest importance in mapping projects. In our study we characterised hotspots of contamination by uranium and zinc in the floodplain of the Ploučnice River. The first objective of our study was to compare three methods of sampling as the basis for pollution maps: random (based on stochastic generation of sampling points), systematic (square grid) and judgemental sampling (based on judgement stemming from principles of fluvial deposition). The first problem detected in producing the maps was reducing the smoothing effect of kriging by choosing an appropriate function for the empirical semivariogram and setting the variation at microscales smaller than the sampling distances (the "nugget" parameter of the semivariogram) to a minimum. Exact interpolators such as Inverse Distance Weighting (IDW) or Radial Basis Functions (RBF) provide better solutions in this respect. The second problem detected was the heterogeneous structure of the floodplain: it consists of distinct sedimentary bodies (e.g., natural levees, meander scars, point bars), which have been formed by different processes (erosion or deposition during flooding, channel shifts by meandering, channel abandonment). Interpolation across these sedimentary bodies therefore makes little sense. The solution is to identify the boundaries between sedimentary bodies and to interpolate the data with this additional information, using exact interpolators with barriers (IDW, RBF or stratified kriging) or regression kriging. Those boundaries can be identified using, e.g., a digital elevation model (DEM), dipole electromagnetic profiling (DEMP), gamma spectrometry, or the expertise of a geomorphologist.
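
    Inverse Distance Weighting, one of the exact interpolators the authors favour for hotspot mapping, fits in a few lines (a generic sketch without the barrier handling mentioned above):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """IDW interpolation: honours every observed value exactly as the
    query point approaches a sample, so hotspots are not smoothed away."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at sample points
    w = d ** -power
    return (w @ z_obs) / w.sum(axis=1)
```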

  15. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. With a little extra computational cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, unlike other migration methods; the ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variable velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes; moreover, interpolation is applied to reduce computational cost. The modeling and migration algorithms require a smooth velocity function, so I develop a velocity-smoothing technique based on damped least squares to aid in obtaining a successful migration.

  16. Musical sound analysis/synthesis using vector-quantized time-varying spectra

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Beauchamp, James W.

    2002-11-01

    A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.

  17. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
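
    The l2 complexity penalty in RLLR has the familiar ridge-regression closed form; the sketch below shows only that ingredient, omitting the MLS weighting and the manifold regularization term:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Closed-form l2-regularized least squares: (X'X + lam*I)^-1 X'y."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)
```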

  18. Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record

    USGS Publications Warehouse

    Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.

    2000-01-01

    Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution as input to regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, which use the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduced frequency of low-pressure centres in their eastward movement across southern Alaska during the summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.

  19. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by an on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to reach a maximum measurement rate of five million measurements per second.

  20. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, accurate and robust registration method and have obtained satisfactory registration results.
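
    The classical landmark-driven thin-plate spline step that this paper refines can be reproduced with SciPy; the landmark coordinates here are invented for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

src = np.array([[10., 10.], [10., 90.], [90., 10.], [90., 90.], [50., 50.]])
dst = src + np.array([[2., 1.], [-1., 2.], [0., -2.], [1., 1.], [3., 0.]])
# vector-valued TPS: maps any source-image point to its warped position
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")
print(warp(np.array([[50., 60.]])))
```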

  1. Approximate thermochemical tables for some C-H and C-H-O species

    NASA Technical Reports Server (NTRS)

    Bahn, G. S.

    1973-01-01

    Approximate thermochemical tables are presented for some C-H and C-H-O species and for some ionized species, supplementing the JANAF Thermochemical Tables for application to finite-chemical-kinetics calculations. The approximate tables were prepared by interpolation and extrapolation of the limited available data, especially by interpolation over chemical families of species. The original estimates were smoothed using a CDC-6600 modification of the Lewis Research Center PACl program, which was originally prepared for the IBM-7094 computer. Summary graphs for the various families show reasonably consistent curve-fit values, anchored by the properties of existing species in the JANAF tables.

  2. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data without any commercially provided CAM system. STL represents curved surface geometry by triangular facets. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
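
    Smoothing a coarse CLS-like toolpath with a parametric cubic B-spline can be sketched with SciPy (the paper's own interpolation scheme may differ in detail):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# a coarse zigzag path of (x, y, z) cutter locations, values invented
pts = np.array([[0, 0, 5], [10, 2, 5], [20, -1, 5], [30, 3, 5], [40, 0, 5]],
               dtype=float)
tck, u = splprep(pts.T, s=1.0)       # cubic B-spline with light smoothing
fine = np.array(splev(np.linspace(0, 1, 200), tck)).T   # dense, smooth path
```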

  3. Fine-granularity inference and estimations to network traffic for SDN.

    PubMed

    Jiang, Dingde; Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem, and attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Unlike previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain an accurate end-to-end network traffic matrix in fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective. PMID:29718913
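
    The final combination step, a weighted geometric average of two interpolants, is simple to express; in this sketch a linear interpolant stands in for the fractal one, which is harder to reproduce briefly:

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

def combine(t_new, t_obs, x_obs, w=0.5):
    """Weighted geometric average of two interpolation results
    (values are clamped positive, as a geometric mean requires)."""
    a = np.maximum(interp1d(t_obs, x_obs)(t_new), 1e-9)  # stand-in estimate
    b = np.maximum(CubicSpline(t_obs, x_obs)(t_new), 1e-9)
    return a ** w * b ** (1.0 - w)
```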

  5. An objective isobaric/isentropic technique for upper air analysis

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.

    1981-01-01

    An objective meteorological analysis technique is presented whereby both horizontal and vertical upper-air analyses are performed. The same process is used to interpolate grid-point values from the upper-air station data for grid points on an isobaric surface and on a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation of a grid-point potential temperature is performed isobarically, whereas wind, mixing-ratio, and pressure-height values are interpolated from data that lie on the isentropic surface passing through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective hand-drawn analyses. The objective products of version A generally correspond fairly well to the subjective analyses and to the station data, and depict the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.

  6. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.

  7. A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1994-01-01

    The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data is smoothed using a least-squares-error fit of the data to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariable polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
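
    The same pipeline (triangulate irregularly distributed points, then contour by interpolation within the triangles) is available today through matplotlib, which can serve as a modern stand-in for the FORTRAN IV program:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri

rng = np.random.default_rng(1)
x, y = rng.uniform(0.0, 1.0, (2, 200))     # irregularly distributed points
z = np.sin(3 * x) * np.cos(3 * y)
triang = tri.Triangulation(x, y)           # straight-line-segment triangles
plt.tricontour(triang, z, levels=10)       # interpolate along triangle edges
plt.show()
```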

  8. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allow the interpolation quality to be increased by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
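
    The shape of the sinc kernel family is easy to sketch; the normalization constant and the self-adaptive choice of the exponent are omitted, so treat this only as an illustration of how a single parameter n sharpens the kernel:

```python
import numpy as np

def sinc_kernel_shape(q, n=5.0):
    """Unnormalized sinc-kernel shape with compact support q in [0, 2];
    np.sinc(x) = sin(pi*x) / (pi*x), so np.sinc(q/2) = sinc(pi*q/2)."""
    q = np.asarray(q, dtype=float)
    return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)
```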

  9. An approach for spherical harmonic analysis of non-smooth data

    NASA Astrophysics Data System (ADS)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.

  10. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  11. VizieR Online Data Catalog: 3D correction in 5 photometric systems (Bonifacio+, 2018)

    NASA Astrophysics Data System (ADS)

    Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kucinskas, A.; Prakapavicius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.

    2018-01-01

    We have used the CIFIST grid of CO5BOLD models to investigate the effects of granulation on fluxes and colours of stars of spectral type F, G, and K. We publish tables with 3D corrections that can be applied to colours computed from any 1D model atmosphere. For Teff>=5000K, the corrections are smooth enough, as a function of atmospheric parameters, that it is possible to interpolate the corrections between grid points; thus the coarseness of the CIFIST grid should not be a major limitation. However at the cool end there are still far too few models to allow a reliable interpolation. (20 data files).

  12. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities relative to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe, to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were examined with regard to whether they are local or global, whether they can compute errors of the interpolated values, the explicitness and fidelity of the interpolation functions, and the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows the computation of errors of the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. Statistics comprising the minimum, maximum and mean values of the interpolated North and East components of the velocity residua were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schipper, L.; Ketoff, A.; Meyers, S.

    This summary report presents information on the end-uses of energy in the residential sector of seven major OECD countries over the period 1960-1978. Much of the information contained herein has never been published before. We present data on energy consumption by energy type and end-use for three to five different years for each country. Each year table is complemented by a set of indicators, which are assembled for the entire 20-year period at the end of each country listing. Finally, a set of key indicators from each country is displayed together in a table, allowing comparison for three periods: early (1960-63), pre-embargo (1970-73), and recent (1975-78). Analysis of these results, smoothing and interpolation of the data, addition of further data, and analytical comparison of in-country and cross-country trends will follow in the next phase of our work.

  14. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy, as more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving better resolution of unreinforced contact discontinuities and (2) by needing only a single HWENO polynomial to update both the cell mean value and the cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. These results are intended to demonstrate capability rather than exhaust all possible implementations.
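
    For orientation, the familiar single-moment scheme that the paper benchmarks against can be sketched as the classic pointwise fifth-order WENO reconstruction at a cell face; the HWENO method itself additionally carries and limits the cell mean derivative:

```python
import numpy as np

def weno5(f0, f1, f2, f3, f4, eps=1e-6):
    """Classic WENO5 reconstruction at the right face of the middle cell."""
    b0 = 13/12*(f0 - 2*f1 + f2)**2 + 0.25*(f0 - 4*f1 + 3*f2)**2
    b1 = 13/12*(f1 - 2*f2 + f3)**2 + 0.25*(f1 - f3)**2
    b2 = 13/12*(f2 - 2*f3 + f4)**2 + 0.25*(3*f2 - 4*f3 + f4)**2
    a = np.array([0.1 / (eps + b0)**2,
                  0.6 / (eps + b1)**2,
                  0.3 / (eps + b2)**2])
    w = a / a.sum()                  # nonlinear weights; optimal when smooth
    p = np.array([(2*f0 - 7*f1 + 11*f2) / 6,
                  ( -f1 + 5*f2 +  2*f3) / 6,
                  (2*f2 + 5*f3 -   f4) / 6])
    return w @ p
```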

  15. An improved adaptive interpolation clock recovery loop based on phase splitting algorithm for coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    Traditional clock recovery scheme achieves timing adjustment by digital interpolation, thus recovering the sampling sequence. Based on this, an improved clock recovery architecture joint channel equalization for coherent optical communication system is presented in this paper. The loop is different from the traditional clock recovery. In order to reduce the interpolation error caused by the distortion in the frequency domain of the interpolator and to suppress the spectral mirroring generated by the sampling rate change, the proposed algorithm joint equalization, improves the original interpolator in the loop, along with adaptive filtering, and makes error compensation for the original signals according to the balanced pre-filtering signals. Then the signals are adaptive interpolated through the feedback loop. Furthermore, the phase splitting timing recovery algorithm is adopted in this paper. The time error is calculated according to the improved algorithm when there is no transition between the adjacent symbols, making calculated timing error more accurate. Meanwhile, Carrier coarse synchronization module is placed before the beginning of timing recovery to eliminate the larger frequency offset interference, which effectively adjust the sampling clock phase. In this paper, the simulation results show that the timing error is greatly reduced after the loop is changed. Based on the phase splitting algorithm, the BER and MSE are better than those in the unvaried architecture. In the fiber channel, using MQAM modulation format, after 100 km-transmission of single-mode fiber, especially when ROF(roll-off factor) values tends to 0, the algorithm shows a better clock performance under different ROFs. When SNR values are less than 8, the BER could achieve 10-2 to 10-1 magnitude. Furthermore, the proposed timing recovery is more suitable for the situation with low SNR values.

  16. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
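
    The three interpolation schemes compared in the paper map directly onto standard library calls; the mass series below is invented for illustration (times in minutes, mass in mm water equivalent):

```python
import numpy as np
from scipy.interpolate import interp1d

t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])   # significant mass changes
m = np.array([0.0, 0.2, 1.5, 1.6, 1.6])
t10 = np.arange(0, 241, 10)                      # 10 min output resolution
step   = interp1d(t, m, kind="previous")(t10)    # AWAT's original scheme
linear = interp1d(t, m, kind="linear")(t10)
spline = interp1d(t, m, kind="cubic")(t10)       # continuously differentiable
```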

  17. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion suffers when there are large time differences between adjacent sites, which is the case of most concern here; we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test the method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.
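
    The interpolation step (wavelet-domain sparsity promotion over a gather with dead traces) can be sketched as a projected iterative soft-thresholding loop; the filtering and shift steps of FSIS happen outside this function, and the parameters are illustrative:

```python
import numpy as np
import pywt

def ista_fill(gather, mask, lam=0.05, n_iter=100):
    """Fill masked-out traces by promoting wavelet-domain sparsity while
    keeping the observed samples fixed. mask is 1 where data exist, else 0."""
    x = gather * mask
    for _ in range(n_iter):
        coeffs = pywt.wavedec2(x, "db4", level=3)
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, lam, "soft")
                                      for c in lvl) for lvl in coeffs[1:]]
        x = pywt.waverec2(coeffs, "db4")[:gather.shape[0], :gather.shape[1]]
        x = gather * mask + x * (1 - mask)   # re-impose observed samples
    return x
```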

  18. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in the proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential, which is shown to be an efficient tool for estimating, from first principles, the numerical values of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models of simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, yielding site-site correlation functions in excellent agreement with simulation data.

  19. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model, as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed a comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results were achieved for the smoothed weighting function and the linear approach. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results on phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
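
    The sparse-to-dense MVF interpolation with thin-plate splines can be sketched with scipy's RBFInterpolator; the control points and motion vectors below are hypothetical:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      ctrl_pts = rng.uniform(0.0, 100.0, size=(50, 3))   # sparse control points (mm)
      ctrl_mvf = rng.normal(0.0, 2.0, size=(50, 3))      # motion vectors at controls

      tps = RBFInterpolator(ctrl_pts, ctrl_mvf, kernel="thin_plate_spline")

      # Evaluate the dense MVF on a coarse voxel grid
      ax = np.linspace(0.0, 100.0, 20)
      grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1).reshape(-1, 3)
      dense_mvf = tps(grid)                              # shape (8000, 3)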

  20. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, after a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is used to perform a multiresolution analysis of the wavefront phase spectrum, during which soft threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is applied to recover the wavefront phase. Simulation results demonstrate the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower operational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
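
    The fast-wavelet-decomposition and soft-threshold-denoising step can be sketched with the PyWavelets package (an assumption; the paper does not name its implementation), with the fractal interpolation stage omitted:

      import numpy as np
      import pywt

      phase = np.random.default_rng(1).normal(size=(64, 64))  # stand-in wavefront phase

      coeffs = pywt.wavedec2(phase, "db4", level=2)        # fast wavelet decomposition
      thr = 0.1 * np.sqrt(2.0 * np.log(phase.size))        # universal threshold, noise scale 0.1 assumed
      denoised = [coeffs[0]] + [
          tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
          for detail in coeffs[1:]
      ]
      phase_rec = pywt.waverec2(denoised, "db4")           # fast wavelet reconstruction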

  1. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  2. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
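
    The extra tangent freedom of Hermite interpolation is easiest to see in a one-dimensional analogue; scipy's CubicHermiteSpline takes prescribed slopes, whereas spline interpolation determines them globally (data below are hypothetical):

      import numpy as np
      from scipy.interpolate import CubicHermiteSpline

      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([0.0, 1.0, 0.5, 1.5])     # control points
      dydx = np.zeros_like(y)                # freely chosen tangents (here: flat)

      h = CubicHermiteSpline(x, y, dydx)     # interpolates points with the chosen slopes
      ys = h(np.linspace(0.0, 3.0, 50))      # no spurious oscillations from global coupling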

  3. Algebraic grid generation with corner singularities

    NASA Technical Reports Server (NTRS)

    Vinokur, M.; Lombard, C. K.

    1983-01-01

    A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques - stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDE's are discussed in an Appendix.
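
    The transfinite interpolation underlying such procedures can be sketched as a planar Coons patch blending four boundary polylines; the boundaries below are hypothetical, and the stretching and generalized blending steps are omitted:

      import numpy as np

      def coons(b, t, l, r):
          """Transfinite (Coons) interpolation of four boundary polylines.
          b, t: (nu, 2) bottom/top curves; l, r: (nv, 2) left/right curves.
          Corners must match: b[0]==l[0], b[-1]==r[0], t[0]==l[-1], t[-1]==r[-1]."""
          u = np.linspace(0.0, 1.0, len(b))[:, None, None]
          v = np.linspace(0.0, 1.0, len(l))[None, :, None]
          return ((1 - v) * b[:, None] + v * t[:, None]
                  + (1 - u) * l[None, :] + u * r[None, :]
                  - (1 - u) * (1 - v) * b[0] - u * (1 - v) * b[-1]
                  - (1 - u) * v * t[0] - u * v * t[-1])

      s = np.linspace(0.0, 1.0, 21)[:, None]
      w = np.linspace(0.0, 1.0, 11)[:, None]
      bottom = np.hstack([s, 0.1 * np.sin(np.pi * s)])   # bumped lower boundary
      top = np.hstack([s, np.ones_like(s)])
      left = np.hstack([np.zeros_like(w), w])
      right = np.hstack([np.ones_like(w), w])
      mesh = coons(bottom, top, left, right)             # (21, 11, 2) grid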

  4. Solution of the surface Euler equations for accurate three-dimensional boundary-layer analysis of aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Harris, J. E.

    1987-01-01

    The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to those from conventional interpolation procedures.

  5. Beta-function B-spline smoothing on triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter

    2013-03-01

    In this work we investigate a novel family of Ck-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides Ck-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex at which the basis function provides Hermite interpolation. This locality of the basis functions can in turn be utilized in adaptive methods where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside of the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA) and Iso-geometric Analysis (IGA).

  6. Constructing polyatomic potential energy surfaces by interpolating diabatic Hamiltonian matrices with demonstration on green fluorescent protein chromophore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology

    2014-04-28

    Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited-state dynamics with multiple electronic states is the aim. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings, ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surroundings. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with the intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives at each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. With these, we conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable for studying nonadiabatic behaviors of sizeable molecules.

  7. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation model is assessed using data from the SMART-1 array in Taiwan. The interpolation model provides an effective method to estimate ground motion at a site using recordings from stations located up to several kilometers away. Reliable estimates of differential ground motion are restricted to relatively limited ranges of frequencies and inter-station spacings.
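
    A toy version of the simulation idea in topic (i), generating a time history from a Fourier amplitude spectrum and accumulated phase differences, might look as follows; the spectrum and phase-difference statistics are invented stand-ins, not the calibrated models of the study:

      import numpy as np

      rng = np.random.default_rng(0)
      n, dt = 2048, 0.01                        # samples and sampling interval (s)
      f = np.fft.rfftfreq(n, dt)

      amp = f * np.exp(-f / 3.0)                # invented smooth amplitude spectrum
      dphi = rng.normal(-0.3, 0.8, size=f.size) # invented phase-difference statistics
      phi = np.cumsum(dphi)                     # phase angles from phase differences

      accel = np.fft.irfft(amp * np.exp(1j * phi), n=n)  # simulated time history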

  8. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partitions of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we shall provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part of the construction is the combined use of these bases for the derivation of a new basis which enjoys all of the above interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; now we shall put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We shall provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we shall also consider replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, which ERBS are not. One disadvantage of using BFBS in place of ERBS in this construction is that the necessary selection of the degree of the BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.

  9. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, J.; Nicholson, S.; Moore, J. G.

    1985-01-01

    Research at NASA Lewis Research Center gave the opportunity to incorporate new control volumes in the Denton 3-D finite-volume time marching code. For duct flows, the new control volumes require no transverse smoothing and this allows calculations with large transverse gradients in properties without significant numerical total pressure losses. Possibilities for improving the Denton code to obtain better distributions of properties through shocks were demonstrated. Much better total pressure distributions through shocks are obtained when the interpolated effective pressure, needed to stabilize the solution procedure, is used to calculate the total pressure. This simple change largely eliminates the undershoot in total pressure down-stream of a shock. Overshoots and undershoots in total pressure can then be further reduced by a factor of 10 by adopting the effective density method, rather than the effective pressure method. Use of a Mach number dependent interpolation scheme for pressure then removes the overshoot in static pressure downstream of a shock. The stability of interpolation schemes used for the calculation of effective density is analyzed and a Mach number dependent scheme is developed, combining the advantages of the correct perfect gas equation for subsonic flow with the stability of 2-point and 3-point interpolation schemes for supersonic flow.

  10. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to the derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, in smooth regions a fixed stencil approximation without computing the non-linear WENO weights is used, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.

  11. Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.

    2011-10-01

    We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test. Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performance of GSPH of changing different aspects of its implementation: choice of the number of neighbours, accuracy of the interpolation procedure to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, order of the reconstruction for the assignment of variables at the interface, and choice of the limiter to prevent oscillations of interpolated quantities in the solution of the Riemann problem. The results of our tests demonstrate that GSPH is in fact a highly promising hydrodynamic scheme, also to be coupled to an N-body solver, for astrophysical and cosmological applications.
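
    For background, standard SPH kernel interpolation (the convolution the GSPH derivation starts from) reduces in 1D to a kernel-weighted summation; the particle set below is hypothetical, and the Riemann-solver machinery of GSPH is not shown:

      import numpy as np

      def w_cubic(q, h):
          """Cubic spline SPH kernel, 1D normalization 2/(3h), support 2h."""
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
               np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
          return 2.0 / (3.0 * h) * w

      x = np.linspace(0.0, 1.0, 101)        # particle positions
      m = np.full_like(x, 1.0 / x.size)     # equal particle masses
      h = 0.02                              # smoothing length

      # rho_i = sum_j m_j W(|x_i - x_j|, h): the kernel convolution estimate
      q = np.abs(x[:, None] - x[None, :]) / h
      rho = (m[None, :] * w_cubic(q, h)).sum(axis=1)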

  12. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
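
    A minimal sketch of the log-log-then-monotone-interpolation idea, with invented ELF samples; scipy has no Steffen spline, so the related monotonicity-preserving PCHIP interpolant stands in for it here:

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      e = np.array([1.0, 3.0, 10.0, 40.0, 200.0, 1000.0])   # energies (eV), invented
      elf = np.array([0.02, 0.15, 1.8, 0.9, 0.05, 0.004])   # sampled ELF, invented

      # Log-log scaling reduces the non-uniformity of the sampling; the monotone
      # interpolant then cannot oscillate between adjacent data points.
      fit = PchipInterpolator(np.log(e), np.log(elf))
      e_fine = np.geomspace(1.0, 1000.0, 200)
      elf_fine = np.exp(fit(np.log(e_fine)))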

  13. Simulation of Pellet Ablation

    NASA Astrophysics Data System (ADS)

    Parks, P. B.; Ishizaki, Ryuichi

    2000-10-01

    In order to clarify the structure of the ablation flow, a 2D simulation is carried out with a fluid code solving the temporal evolution of the MHD equations. The code includes the electrostatic sheath effect at the cloud interface (P.B. Parks et al., Plasma Phys. Contr. Fusion 38, 571 (1996)). An Eulerian cylindrical coordinate system (r,z) is used with z in a spherical pellet. The code uses the Cubic-Interpolated Pseudoparticle (CIP) method (H. Takewaki and T. Yabe, J. Comput. Phys. 70, 355 (1987)), which divides the fluid equations into non-advection and advection phases. The most essential element of the CIP method is the calculation of the advection phase. In this phase, a cubic interpolated spatial profile is shifted in space according to the total derivative equations, similarly to a particle scheme. Since the profile is interpolated using the value and the spatial derivative value at each grid point, there is no numerical oscillation in space, which often appears in conventional spline interpolation. A free boundary condition is used in the code. The possibility of a stationary shock will also be shown in the presentation, because the supersonic ablation flow across the magnetic field is impeded.

  14. Gravitational catalysis of merons in Einstein-Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Oh, Seung Hun; Salgado-Rebolledo, Patricio

    2017-10-01

    We construct regular configurations of the Einstein-Yang-Mills theory in various dimensions. The gauge field is of meron-type: it is proportional to a pure gauge (with a suitable parameter λ determined by the field equations). The corresponding smooth gauge transformation cannot be deformed continuously to the identity. In the three-dimensional case we consider the inclusion of a Chern-Simons term into the analysis, allowing λ to be different from its usual value of 1 /2 . In four dimensions, the gravitating meron is a smooth Euclidean wormhole interpolating between different vacua of the theory. In five and higher dimensions smooth meron-like configurations can also be constructed by considering warped products of the three-sphere and lower-dimensional Einstein manifolds. In all cases merons (which on flat spaces would be singular) become regular due to the coupling with general relativity. This effect is named "gravitational catalysis of merons".

  15. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    PubMed

    Jia, Peng; Anderson, John D; Leitner, Michael; Rheingans, Richard

    2016-01-01

    Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder progress towards each of the Millennium Development Goals. The surveyed households in 397 clusters from the 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods, including excess risk, local spatial autocorrelation, and spatial interpolation, were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of people with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced the minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored to different wealth categories and targeted nationally to areas of greater need where resources are limited.

  16. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  17. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.

    2012-04-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also offers the possibility of including water-layer multiples in the modeling, whenever this phase can be followed to greater offsets than the primary phases. This increases the quantity of useful information in the data and yields more extensive and better constrained velocity and geometry models. We will present results from benchmark tests for forward and inverse problems, as well as synthetic tests comparing an inversion with refractions only and another one with both refractions and reflections.
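
    Trilinear interpolation on a velocity mesh can be sketched with scipy's RegularGridInterpolator; this shows only the plain rectilinear case, without the sheared pseudo-cubic cells tomo3d uses for topography (the velocity field is hypothetical):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      x = y = z = np.linspace(0.0, 10.0, 11)                    # mesh nodes (km)
      v = 4.0 + 0.2 * z[None, None, :] + np.zeros((11, 11, 11)) # velocity (km/s)

      tri = RegularGridInterpolator((x, y, z), v, method="linear")
      v_q = tri([[2.5, 3.1, 7.2]])   # trilinear value inside the enclosing cell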

  18. C1-Continuous relative permeability and hybrid upwind discretization of three phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Lee, S. H.; Efendiev, Y.

    2016-10-01

    Three-phase flow in a reservoir model has been a major challenge in simulation studies due to slowly convergent iterations in Newton solution of nonlinear transport equations. In this paper, we examine the numerical characteristics of three-phase flow and propose a consistent, "C1-continuous discretization" (to be clarified later) of transport equations that ensures a convergent solution in finite difference approximation. First, we examine three-phase relative permeabilities that are critical in solving nonlinear transport equations. Three-phase relative permeabilities are difficult to measure in the laboratory, and they are often correlated with two-phase relative permeabilities (e.g., oil-gas and water-oil systems). Numerical convergence of non-linear transport equations entails that three-phase relative permeability correlations are a monotonically increasing function of the phase saturation and the consistency conditions of phase transitions are satisfied. The Modified Stone's Method II and the Linear Interpolation Method for three-phase relative permeability are closely examined for their mathematical properties. We show that the Linear Interpolation Method yields C1-continuous three-phase relative permeabilities for smooth solutions if the two phase relative permeabilities are monotonic and continuously differentiable. In the second part of the paper, we extend a Hybrid-Upwinding (HU) method of two-phase flow (Lee, Efendiev and Tchelepi, ADWR 82 (2015) 27-38) to three phase flow. In the HU method, the phase flux is divided into two parts based on the driving forces (in general, it can be divided into several parts): viscous and buoyancy. The viscous-driven and buoyancy-driven fluxes are upwinded differently. Specifically, the viscous flux, which is always co-current, is upwinded based on the direction of the total velocity. The pure buoyancy-induced flux is shown to be only dependent on saturation distributions and counter-current. In three-phase flow, the buoyancy effect can be expressed as a sum of two buoyancy effects from two-phase flows, i.e., oil-water and oil-gas systems. We propose an upwind scheme for the buoyancy flux term from three-phase flow as a sum of two buoyancy terms from two-phase flows. The upwind direction of the buoyancy flux in two phase flow is always fixed such that the heavier fluid goes downward and the lighter fluid goes upward. It is shown that the Implicit Hybrid-Upwinding (IHU) scheme for three-phase flow is locally conservative and produces physically-consistent numerical solutions. As in two phase flow, the primary advantage of the IHU scheme is that the flux of a fluid phase remains continuous and differentiable as the flow regime changes between co-current and counter-current conditions as a function of time, or (Newton) iterations. This is in contrast to the standard phase-potential-based upwinding scheme, in which the overall fractional-flow (flux) function is non-differentiable across the transition between co-current and counter-current flows.

  19. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smooths the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied between them in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.

  20. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries, which intersect the bottom and top boundaries, may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.

  1. A User’s Manual for Fiber Diffraction: The Automated Picker and Huber Diffractometers

    DTIC Science & Technology

    1990-07-01

    Figure 3: Layer line scan of degummed silk (Bombyx mori), showing layers 0 through 6 (intensity index in arbitrary units). If the fit is rejected, new values for ... originally made at intervals larger than 0.010. The smoothing and interpolation is done by a least-squares polynomial fit to segments of the data. The number

  2. Visual and Statistical Analysis of Digital Elevation Models Generated Using Idw Interpolator with Varying Powers

    NASA Astrophysics Data System (ADS)

    Asal, F. F.

    2012-07-01

    Digital elevation data obtained from different engineering surveying techniques are utilized in generating the Digital Elevation Model (DEM), which is employed in many engineering and environmental applications. These data are usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential in assessing the effects of variation of the IDW power on the quality of the DEMs. Real elevation data were collected in the field using a total station instrument over corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques in this analysis. Visual analysis showed that smoothing of the DEM decreases as the power value increases up to a power of four; increasing the power beyond four does not produce noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results, where the value of the Standard Deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This reflects a decrease in DEM smoothing with increasing IDW power. The study also showed that visual methods supported by statistical analysis have good potential for DEM quality assessment.
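
    A minimal IDW implementation with an adjustable power, applied to invented elevation samples, illustrates the smoothing behaviour discussed above:

      import numpy as np

      def idw(xy_obs, z_obs, xy_out, power=2.0, eps=1e-12):
          """Inverse-distance-weighted interpolation with adjustable power."""
          d = np.linalg.norm(xy_out[:, None, :] - xy_obs[None, :, :], axis=-1)
          w = 1.0 / (d + eps) ** power
          return (w * z_obs[None, :]).sum(axis=1) / w.sum(axis=1)

      rng = np.random.default_rng(3)
      pts = rng.uniform(0.0, 100.0, size=(200, 2))          # surveyed points (m)
      z = 5.0 * np.sin(pts[:, 0] / 15.0) + rng.normal(0.0, 0.2, 200)
      gx, gy = np.meshgrid(np.arange(0.0, 100.0, 2.0), np.arange(0.0, 100.0, 2.0))
      grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)

      dem_p1 = idw(pts, z, grid, power=1.0)   # smoother DEM
      dem_p4 = idw(pts, z, grid, power=4.0)   # sharper, less smoothed DEM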

  3. Coherence solution for bidirectional reflectance distributions of surfaces with wavelength-scale statistics.

    PubMed

    Hoover, Brian G; Gamiz, Victor L

    2006-02-01

    The scalar bidirectional reflectance distribution function (BRDF) due to a perfectly conducting surface with roughness and autocorrelation width comparable with the illumination wavelength is derived from coherence theory on the assumption of a random reflective phase screen and an expansion valid for large effective roughness. A general quadratic expansion of the two-dimensional isotropic surface autocorrelation function near the origin yields representative Cauchy and Gaussian BRDF solutions and an intermediate general solution as the sum of an incoherent component and a nonspecular coherent component proportional to an integral of the plasma dispersion function in the complex plane. Plots illustrate agreement of the derived general solution with original bistatic BRDF data due to a machined aluminum surface, and comparisons are drawn with previously published data in the examination of variations with incident angle, roughness, illumination wavelength, and autocorrelation coefficients in the bistatic and monostatic geometries. The general quadratic autocorrelation expansion provides a BRDF solution that smoothly interpolates between the well-known results of the linear and parabolic approximations.

  4. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of these have been applied in LIBS, particularly for qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. Simulated background-correction experiments indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR), above polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods achieve larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still achieves a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those obtained before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
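
    A simplified stand-in for the spline background estimation (not the paper's exact detection scheme): anchor a cubic spline on windowed local minima of a synthetic spectrum, then subtract it:

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import argrelmin

      rng = np.random.default_rng(7)
      wl = np.linspace(200.0, 800.0, 3000)                     # wavelength (nm)
      background = 50.0 * np.exp(-(wl - 450.0) ** 2 / 2.0e5)   # slow continuum
      line = 200.0 * np.exp(-(wl - 324.7) ** 2 / 0.05)         # Cu I emission line
      spec = background + line + rng.normal(0.0, 1.0, wl.size)

      # Anchor the background on windowed local minima (emission lines rarely
      # sit at minima), then fit a smooth spline through the anchors.
      idx = np.unique(np.r_[0, argrelmin(spec, order=25)[0], spec.size - 1])
      baseline = CubicSpline(wl[idx], spec[idx])(wl)
      corrected = spec - baseline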

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, John H.; Belcourt, Kenneth Noel

    Completion of the CASL L3 milestone THM.CFD.P6.03 provides a tabular material properties capability to the Hydra code. A tabular interpolation package used in Sandia codes was modified to support the needs of multi-phase solvers in Hydra. Use of the interface is described. The package was released to Hydra under a government use license. A dummy physics was created in Hydra to prototype use of the interpolation routines. Finally, a test using the dummy physics verifies the correct behavior of the interpolation for a test water table.

  6. Error detection and data smoothing based on local procedures

    NASA Technical Reports Server (NTRS)

    Guerra, V. M.

    1974-01-01

    An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial that interpolates the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.

  7. Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.

    PubMed

    Walimbe, Vivek; Shekhar, Raj

    2006-12-01

    We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
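
    The pose interpolation step, scalar interpolation of translations plus quaternion interpolation of rotations, can be sketched with scipy's Rotation and Slerp for two hypothetical subvolume transformations:

      import numpy as np
      from scipy.spatial.transform import Rotation, Slerp

      # Hypothetical rigid-body results for one subvolume at two instants
      times = [0.0, 1.0]
      rots = Rotation.from_euler("xyz", [[0, 0, 0], [10, 5, -3]], degrees=True)
      trans = np.array([[0.0, 0.0, 0.0], [4.0, -1.0, 2.0]])    # translations (mm)

      s = np.linspace(0.0, 1.0, 5)
      rot_i = Slerp(times, rots)(s)                            # quaternion interpolation
      trans_i = (1 - s)[:, None] * trans[0] + s[:, None] * trans[1]  # scalar lerp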

  8. Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging

    NASA Astrophysics Data System (ADS)

    Wang, Zong; Shi, Wenjiao

    2017-03-01

    Soil particle-size fractions (psf) are basic physical variables that frequently need to be accurately predicted for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of the Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented reasonable and smooth transitions on the mapped soil psf according to the environmental factors. The study gives insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve interpolation performance.
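
    The alr transform pair used by alr_OK can be sketched as follows; the kriging itself is applied to the unconstrained coordinates in between (the fractions shown are invented):

      import numpy as np

      def alr(parts):
          """Additive log-ratio transform; last component is the reference."""
          return np.log(parts[:, :-1] / parts[:, -1:])

      def alr_inv(coords):
          e = np.exp(np.hstack([coords, np.zeros((coords.shape[0], 1))]))
          return e / e.sum(axis=1, keepdims=True)

      psf = np.array([[0.60, 0.30, 0.10],    # sand, silt, clay fractions
                      [0.20, 0.50, 0.30]])
      zc = alr(psf)        # krige these unconstrained coordinates...
      back = alr_inv(zc)   # ...then back-transform; rows again sum to 1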

  9. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
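
    scipy ships a classical Lomb-Scargle periodogram that illustrates working on uneven samples directly; the smoothing, bias adjustment, and permutation tests of the programs described above are not part of this sketch:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(5)
      t = np.sort(rng.uniform(0.0, 100.0, 180))         # uneven sampling times
      y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 0.5, t.size)

      w = 2 * np.pi * np.linspace(0.01, 1.0, 500)       # angular frequencies
      pgram = lombscargle(t, y - y.mean(), w)           # peak near 0.1 cycles/unit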

  10. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

    Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet the lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating the nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies demonstrate that the proposed estimator is much smoother and may have better finite-sample efficiency than the original, as well as than other competing monotonicity-respecting estimators where these are available (only in special cases). Illustration with a clinical study is provided. PMID:29430068

  11. Experimental demonstration of non-iterative interpolation-based partial ICI compensation in 100G RGI-DP-CO-OFDM transport systems.

    PubMed

    Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V

    2012-07-02

    We experimentally investigate the performance of a low-complexity non-iterative phase noise induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by linear interpolation between the CPE estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM system employing a low-cost distributed feedback (DFB) laser. Experimental results using a DFB laser with a linewidth of 2.6 MHz demonstrate 24% and 13% improvements in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced phase noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme, in terms of the number of required complex multiplications, is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
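
    The core interpolation step is simple to sketch. Below is a toy numpy illustration of linearly interpolating per-symbol common phase error (CPE) estimates to per-sample phase-noise values (the pilot-based CPE estimation, symbol timing and FFT stages are omitted; all names are illustrative assumptions):

        import numpy as np

        def interpolate_cpe(cpe, n_samples):
            # linearly interpolate per-symbol CPE phases (rad) to per-sample
            # phase-noise values, then return derotation factors
            sym_idx = np.arange(cpe.size)                    # symbol centres
            samp_idx = np.linspace(0, cpe.size - 1, n_samples)
            phase = np.interp(samp_idx, sym_idx, np.unwrap(cpe))
            return np.exp(-1j * phase)                       # multiply rx samples by this

        # toy usage: 8 symbols of slowly drifting phase, 64 samples per symbol
        cpe = np.cumsum(0.05 * np.random.randn(8))
        derot = interpolate_cpe(cpe, 8 * 64)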

  12. Shock Capturing with PDE-Based Artificial Viscosity for an Adaptive, Higher-Order Discontinuous Galerkin Finite Element Method

    DTIC Science & Technology

    2008-06-01

    Geometry interpolation: the function space V_p^H consists of discontinuous piecewise polynomials; this work used a polynomial basis for V_p^H such … between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and multi-dimensional setting. Before continuing with the … inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95.

  13. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    … optimized space-time interpolation method. A tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces … Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012; Hardin E., Mitas L., Mitasova H., Simulation of Wind-Blown Sand for … Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012; Russ E., Mitasova H., Time series and space-time cube analyses on …

  14. A geometrical interpretation of the 2n-th central difference

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.

    1972-01-01

    Many algorithms used for data smoothing, data classification and error detection require the calculation of the distance from a point to the polynomial interpolating its 2n neighbors (n on each side). This computation, if performed naively, would require the solution of a system of equations and could create numerical problems. This note shows that if the data is equally spaced, then this calculation can be performed using a simple recursion formula.
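
    A minimal numpy sketch of this result (function name and data are illustrative): the deviation of the centre point from the polynomial through its 2n neighbours falls out of the 2n-th central difference, with no linear solve.

        import numpy as np
        from math import comb

        def deviation_from_interpolant(y, i, n):
            # signed deviation of y[i] from the degree-(2n-1) polynomial that
            # interpolates its 2n equally spaced neighbours (n on each side)
            window = y[i - n : i + n + 1]          # 2n+1 points incl. centre
            d = window.copy()
            for _ in range(2 * n):                 # repeated differencing
                d = np.diff(d)
            return (-1) ** n * d[0] / comb(2 * n, n)

        y = np.array([0.0, 1.0, 4.2, 9.0, 16.0, 25.0])   # near-quadratic data
        print(deviation_from_interpolant(y, 2, 2))       # 0.2: flags y[2] as off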

  15. Electron parallel closures for various ion charge numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Jeong-Young, E-mail: j.ji@usu.edu; Held, Eric D.; Kim, Sang-Kyeun

    2016-03-15

    Electron parallel closures for the ion charge number Z = 1 [J.-Y. Ji and E. D. Held, Phys. Plasmas 21, 122116 (2014)] are extended for 1 ≤ Z ≤ 10. Parameters are computed for various Z with the same form of the Z = 1 kernels adopted. The parameters are smoothly varying in Z and hence can be used to interpolate parameters and closures for noninteger, effective ion charge numbers.

  16. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system consisting of three main parts: a HydroStar 4300 single-beam sonar and GPS devices, an Ashtech ProMark 500 base and a Thales Z-Max® rover. A total of 12 851 points were gathered. To obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, values had to be approximated in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error of both methods was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
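
    Multiquadric RBF interpolation of scattered soundings onto a raster grid is readily sketched with SciPy (a toy example under assumed coordinates and depths, not the survey's actual configuration; the shape parameter epsilon would need tuning):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 1000, size=(200, 2))               # (x, y) positions, m
        depth = 5 + 0.01 * pts[:, 0] + rng.normal(0, 0.1, 200)  # toy depths, m

        rbf = RBFInterpolator(pts, depth, kernel="multiquadric", epsilon=1.0)
        xg, yg = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
        grid = rbf(np.column_stack([xg.ravel(), yg.ravel()])).reshape(50, 50)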

  17. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

    We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster: the transformed Hamiltonian is left-projected onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that, in its best variant, are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  18. The substorm cycle as reproduced by global MHD models

    NASA Astrophysics Data System (ADS)

    Gordeev, E.; Sergeev, V.; Tsyganenko, N.; Kuznetsova, M.; Rastäetter, L.; Raeder, J.; Tóth, G.; Lyon, J.; Merkin, V.; Wiltberger, M.

    2017-01-01

    Recently, Gordeev et al. (2015) suggested a method to test global MHD models against statistical empirical data. They showed that four community-available global MHD models supported by the Community Coordinated Modeling Center (CCMC) produce a reasonable agreement with reality for those key parameters (the magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale equilibria in the outer magnetosphere. Based on the same set of simulation runs, here we investigate how the models reproduce the global loading-unloading cycle. We found that in terms of global magnetic flux transport, the three examined CCMC models display systematically different responses to an idealized 2 h north then 2 h south interplanetary magnetic field (IMF) Bz variation. The LFM model shows a depressed return convection and high loading rate during the growth phase as well as enhanced return convection and high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. Two other models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. We also demonstrate a potential technical problem in the publicly available simulations, related to postprocessing interpolation, which could affect the accuracy of magnetic field tracing and of other related procedures.

  19. The Substorm Cycle as Reproduced by Global MHD Models

    NASA Technical Reports Server (NTRS)

    Gordeev, E.; Sergeev, V.; Tsyganenko, N.; Kuznetsova, M.; Rastaetter, Lutz; Raeder, J.; Toth, G.; Lyon, J.; Merkin, V.; Wiltberger, M.

    2017-01-01

    Recently, Gordeev et al. (2015) suggested a method to test global MHD models against statistical empirical data. They showed that four community-available global MHD models supported by the Community Coordinated Modeling Center (CCMC) produce a reasonable agreement with reality for those key parameters (the magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale equilibria in the outer magnetosphere. Based on the same set of simulation runs, here we investigate how the models reproduce the global loading-unloading cycle. We found that in terms of global magnetic flux transport, the three examined CCMC models display systematically different responses to an idealized 2 h north then 2 h south interplanetary magnetic field (IMF) Bz variation. The LFM model shows a depressed return convection and high loading rate during the growth phase as well as enhanced return convection and high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. Two other models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. We also demonstrate a potential technical problem in the publicly available simulations, related to postprocessing interpolation, which could affect the accuracy of magnetic field tracing and of other related procedures.

  20. Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing

    NASA Technical Reports Server (NTRS)

    Schowengerdt, R.; Gray, S.; Park, S. K.

    1984-01-01

    Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
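
    The parametric (cubic convolution) kernel in question is standard and easy to sketch; the 2-D bicubic interpolator is its tensor product. A minimal numpy version (offsets and data are illustrative):

        import numpy as np

        def cubic_kernel(x, alpha=-0.5):
            # parametric cubic-convolution kernel; alpha = -0.5 is the
            # radiometrically preferred choice, alpha = -1 the conventional one
            ax = np.abs(x)
            w = np.zeros_like(ax, dtype=float)
            m1 = ax <= 1
            m2 = (ax > 1) & (ax < 2)
            w[m1] = (alpha + 2) * ax[m1]**3 - (alpha + 3) * ax[m1]**2 + 1
            w[m2] = alpha * (ax[m2]**3 - 5 * ax[m2]**2 + 8 * ax[m2] - 4)
            return w

        # 1-D resampling weights for a sample offset of 0.3 pixels
        offsets = np.array([-1.3, -0.3, 0.7, 1.7])   # distances to 4 nearest samples
        print(cubic_kernel(offsets))                  # weights sum to 1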

  1. Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.

    PubMed

    Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming

    2018-01-01

    Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were applied prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 mg kg⁻¹, with a mean of 18.10 mg kg⁻¹, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu data and the associated residuals, and also yielded more robust variograms. OK_PD generated better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect of the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
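
    Two of the transformations are easy to sketch with SciPy (toy data; the kriging and back-transformation steps are omitted, and the function name is illustrative):

        import numpy as np
        from scipy.stats import norm, boxcox

        def normal_score(x):
            # rank-based normal-score transform: map data to standard normal
            # quantiles via plotting positions rank/(n+1)
            ranks = np.argsort(np.argsort(x)) + 1.0
            return norm.ppf(ranks / (x.size + 1.0))

        cu = np.random.lognormal(mean=2.5, sigma=0.8, size=317)  # skewed toy data
        z = normal_score(cu)          # krige z, then back-transform via quantiles
        bc, lam = boxcox(cu)          # Box-Cox alternative; lambda fitted by ML
        print(z.mean().round(3), round(lam, 2))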

  2. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

    Representation of fronts detected by altimetry is used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily alongtrack altimetry. Modeled sea surface height is then interpolated to the locations of the alongtrack observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained by determining the number of "extra" fronts found in the model but not in the altimetry, and by assessing the horizontal and vertical dimensions of the front in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy alongtrack altimetry observations, and to the minimum size of fronts being analyzed, is assessed.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Austin, Anthony P.; Trefethen, Lloyd N.

    The trigonometric interpolants to a periodic function f in equispaced points converge if f is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if f is continuous. What if the points are perturbed? With equispaced grid spacing h, let each point be perturbed by an arbitrary amount ≤ αh, where α ∈ [0, 1/2) is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for α ≥ 1/4. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all α < 1/2 if f is twice continuously differentiable, with the convergence rate depending on the smoothness of f. More precisely, it is enough for f to have 4α derivatives in a certain sense, and we conjecture that 2α derivatives are enough. Connections with the Fejér-Kalmár theorem are discussed.

  4. A Lagrangian particle method with remeshing for tracer transport on the sphere

    DOE PAGES

    Bosler, Peter Andrew; Kent, James; Krasny, Robert; ...

    2017-03-30

    A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.

  5. A Lagrangian particle method with remeshing for tracer transport on the sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter Andrew; Kent, James; Krasny, Robert

    A Lagrangian particle method (called LPM) based on the flow map is presented for tracer transport on the sphere. The particles carry tracer values and are located at the centers and vertices of triangular Lagrangian panels. Remeshing is applied to control particle disorder and two schemes are compared, one using direct tracer interpolation and another using inverse flow map interpolation with sampling of the initial tracer density. Test cases include a moving-vortices flow and reversing-deformational flow with both zero and nonzero divergence, as well as smooth and discontinuous tracers. We examine the accuracy of the computed tracer density and tracer integral, and preservation of nonlinear correlation in a pair of tracers. Here, we compare results obtained using LPM and the Lin–Rood finite-volume scheme. An adaptive particle/panel refinement scheme is demonstrated.

  6. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
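
    The OI analysis step itself is compact. Below is a minimal numpy sketch (the grid, Gaussian covariance model and observation operator are illustrative assumptions, not the paper's configuration); the assumed decorrelation scale L plays the role of the parameter varied in the sensitivity experiments:

        import numpy as np

        def oi_update(xb, B, H, R, y):
            # optimal interpolation: xa = xb + B H^T (H B H^T + R)^{-1} (y - H xb)
            innov = y - H @ xb
            S = H @ B @ H.T + R
            return xb + B @ H.T @ np.linalg.solve(S, innov)

        n = 50
        x = np.linspace(0.0, 150.0, n)                     # km
        L = 30.0                                           # assumed decorrelation scale
        B = np.exp(-(x[:, None] - x[None, :])**2 / (2 * L**2))
        H = np.eye(n)[::10]                                # observe every 10th point
        R = 0.1 * np.eye(H.shape[0])                       # observation error covariance
        y = np.sin(x[::10] / 20.0)                         # toy observations
        xa = oi_update(np.zeros(n), B, H, R, y)            # analysed field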

  7. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Importantly, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be genuine improvements over their standard counterparts by several numerical examples, and all of them incur no added computational effort compared to their standard counterparts.

  8. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation

    PubMed Central

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-01-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522

  9. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
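
    The paper's interpolated surfaces are built from stored energies, gradients and iteratively updated Hessians. As a stand-in for the update step, the sketch below shows the familiar BFGS Hessian update (an assumption for illustration; the authors' iterative update scheme differs in detail):

        import numpy as np

        def bfgs_update(B, s, y):
            # refine the approximate Hessian B from the step s = x1 - x0 and
            # the gradient change y = g1 - g0, with no new Hessian evaluation
            Bs = B @ s
            return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

        B = np.eye(2)                          # initial Hessian guess
        s = np.array([0.1, -0.05])             # geometry step
        y = np.array([0.30, -0.12])            # gradient difference
        print(bfgs_update(B, s, y))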

  10. Gradient-augmented hybrid interface capturing method for incompressible two-phase flow

    NASA Astrophysics Data System (ADS)

    Zheng, Fu; Shi-Yu, Wu; Kai-Xin, Liu

    2016-06-01

    Motivated by the shortcomings of existing hybrid methods, a gradient-augmented hybrid interface capturing method (GAHM) is presented for incompressible two-phase flow. A front tracking method (FTM) is used as the skeleton of the GAHM for its low mass loss and resource requirements. Smooth Eulerian level-set values are calculated from the FTM interface and are used for a local interface reconstruction. The reconstruction avoids marker particle redistribution and enables an automatic treatment of interfacial topology change. Cubic Hermite interpolation is employed in all steps of the GAHM to capture subgrid structures within a single spatial cell. The performance of the GAHM is carefully evaluated in a benchmark test. Results show significant improvements in mass loss, clear subgrid structures, highly accurate derivatives (normals and curvatures) and low cost. The GAHM is further coupled with an incompressible multiphase flow solver, Super CE/SE, for more complex and practical applications. The updated solver is evaluated through comparison with an earlier droplet study. Project supported by the National Natural Science Foundation of China (Grant Nos. 10972010, 11028206, 11371069, 11372052, 11402029, and 11472060), the Science and Technology Development Foundation of China Academy of Engineering Physics (CAEP), China (Grant No. 2014B0201030), and the Defense Industrial Technology Development Program of China (Grant No. B1520132012).
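
    Cubic Hermite interpolation, the subgrid reconstruction used when both values and gradients are carried, is easy to sketch in one dimension (a minimal numpy example; data are illustrative):

        import numpy as np

        def hermite(x0, x1, f0, f1, g0, g1, x):
            # cubic Hermite interpolant on [x0, x1] from endpoint values
            # (f0, f1) and endpoint derivatives (g0, g1)
            h = x1 - x0
            t = (x - x0) / h
            h00 = 2*t**3 - 3*t**2 + 1
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            return h00*f0 + h*h10*g0 + h01*f1 + h*h11*g1

        # reproduces a cubic exactly: f(x) = x^3 on [0, 1]
        print(hermite(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5))   # 0.125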

  11. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new image matching algorithms (e.g. Semi-Global Matching) that yield superior results (in particular, they usually produce smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Much effort is consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and on process automation. Important changes have been made to the correlation algorithm, which still maintains high precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling, using different shape functions as originally proposed by other authors in different applications.

  12. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation is considered. At present this simulation is performed by numerical integration that accounts for the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All of these ephemerides store Chebyshev polynomial coefficients over a large number of equal-length interpolation intervals. However, the ephemerides were constructed to preserve, at the junctions of adjacent intervals, continuity of the coordinates and their first derivatives only (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions, and such breaks, when they fall within an integration step, decrease the accuracy of the numerical integration. At 34-digit precision (128-bit floating-point numbers) even the coordinates and their first derivatives have breaks (at the 15th-16th decimal digit) at the interval junctions. Two ways of eliminating the influence of these breaks are considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order become continuous at the junctions. The smoothing algorithm is based on a conditional least-squares fit of the Chebyshev polynomial coefficients, the conditions being the equality of coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm was applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is to adjust the integration step so that junctions never lie within a step but always coincide with its end. This approach is applicable only at 16-digit decimal precision, because it relies on the continuity of the planets' coordinates and their first derivatives. Both approaches were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14, using 15th- and 31st-order Everhart methods at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used to compute the perturbations. The results indicate that the integration step correction increases the numerical integration accuracy by 3-4 orders of magnitude; replacing the original ephemerides with the smoothed ones increases the accuracy by approximately 10 further orders of magnitude.

  13. Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.

    PubMed

    Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong

    2018-06-01

    Spatial interpolation methods are the basis of soil heavy metal pollution assessment and remediation, but existing indices of interpolation accuracy are seldom tied to the practical situation; the choice of method should depend on the specific research purpose and the characteristics of the research object. In this paper, As (arsenic) pollution in the soils of Beijing is taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated on the basis of cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, for a given degree of spatial correlation, the cross-validation results of OK and IDW at individual soil points, and their accuracy in predicting the broad spatial trend, are similar. However, OK predicts the maxima and minima less accurately than IDW and identifies fewer high-pollution areas, so high-pollution areas are difficult to delineate fully with OK; its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the IDW result, which delineates the high-pollution areas more comprehensively. Nevertheless, because semivariogram construction in OK is more subjective and requires a larger number of soil samples, IDW is more suitable for the spatial prediction of heavy metal pollution in these soils.
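
    IDW itself is a few lines of numpy (a minimal sketch with toy coordinates; the power parameter and any search-neighbourhood limit are choices the practitioner must make):

        import numpy as np

        def idw(xy_obs, z_obs, xy_query, power=2.0):
            # inverse-distance-weighted prediction at each query point
            d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)            # guard against exact hits
            w = 1.0 / d**power
            return (w @ z_obs) / w.sum(axis=1)

        obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        z = np.array([10.0, 20.0, 30.0])
        print(idw(obs, z, np.array([[0.5, 0.5]])))   # distance-weighted mean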

  14. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly: for n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches based on considering only a relatively small number of points that lie close to the query point. There are several problems with this local approach, however. First, determining the proper neighborhood size is tricky, and is usually handled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous: Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along neighborhood borders, so at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. It is therefore important to consider and solve the global system for each interpolant; yet solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering; the difficulty it addresses is that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving the large linear systems arising in the interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach; in particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue to overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite; the goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems; they implemented an efficient sparse Cholesky decomposition method and showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. The matrix A in the ordinary kriging system, however, while symmetric, is not positive definite, so their approach does not apply. We therefore use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, yielding lower estimation errors than traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems; this approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
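
    SciPy does not ship SYMMLQ, but MINRES is a closely related Krylov method that also handles the symmetric indefinite systems ordinary kriging produces. A toy sketch of the taper-then-iterate idea follows (the covariance model, taper and all names are illustrative assumptions, not the paper's implementation):

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import minres

        def spherical_taper(d, R):
            # compactly supported spherical model: exactly zero beyond range R
            r = np.minimum(d / R, 1.0)
            return 1.0 - 1.5 * r + 0.5 * r**3

        n, R = 400, 15.0
        pts = np.random.rand(n, 2) * 100.0
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        C = np.exp(-d / 20.0) * spherical_taper(d, R)    # tapered (sparse) covariance

        # ordinary kriging system: unbiasedness via a Lagrange multiplier row
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = C
        A[n, :n] = A[:n, n] = 1.0                        # symmetric but indefinite
        dq = np.linalg.norm(pts - 50.0, axis=1)          # query point at (50, 50)
        b = np.append(np.exp(-dq / 20.0) * spherical_taper(dq, R), 1.0)

        w, info = minres(csr_matrix(A), b)               # weights + multiplier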

  15. Transactions of the Army Conference on Applied Mathematics and Computing (2nd) Held at Washington, DC on 22-25 May 1984

    DTIC Science & Technology

    1985-02-01

    … Here Ω denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary Γ, and H(Ω) and H(Ω) are the Hilbert spaces of … Using a reproducing kernel Hilbert space approach, Weinert et al. [8,9] developed a structural correspondence between spline interpolation and linear … A Mesh Moving Technique for Time Dependent Partial Differential Equations in Two Space Dimensions, David C. Arney and Joseph …

  16. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities: (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculation of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer under either the Microsoft Windows 3.1 or Windows 95 operating system.

  17. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    PubMed Central

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-01-01

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886

  18. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.

    PubMed

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-05-16

    Co-prime arrays can estimate the directions of arrival (DOAs) of O ( M N ) sources with O ( M + N ) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.

  19. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flow, and smoothed finite element methods to calculate the transient dynamic responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique based on simple linear interpolation is presented, using Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. Comparisons with referenced works, including experiments, make clear that the proposed 3D IS-FEM is stable, exhibits second-order spatial convergence, and is fairly insensitive to the fluid/solid mesh size ratio over a wide range.

  20. Soft symmetry improvement of two particle irreducible effective actions

    NASA Astrophysics Data System (ADS)

    Brown, Michael J.; Whittingham, Ian B.

    2017-01-01

    Two particle irreducible effective actions (2PIEAs) are valuable nonperturbative techniques in quantum field theory; however, finite truncations of them violate the Ward identities (WIs) of theories with spontaneously broken symmetries. The symmetry improvement (SI) method of Pilaftsis and Teresi attempts to overcome this by imposing the WIs as constraints on the solution; however, the method suffers from the nonexistence of solutions in linear response theory and in certain truncations in equilibrium. Motivated by this, we introduce a new method called soft symmetry improvement (SSI) which relaxes the constraint. Violations of WIs are allowed but punished in a least-squares implementation of the symmetry improvement idea. A new parameter ξ controls the strength of the constraint. The method interpolates between the unimproved (ξ → ∞) and SI (ξ → 0) cases, and the hope is that practically useful solutions can be found for finite ξ. We study the SSI 2PIEA for a scalar O(N) model in the Hartree-Fock approximation. We find that the method is IR sensitive; the system must be formulated in finite volume V and temperature T = β⁻¹, and the Vβ → ∞ limit must be taken carefully. Three distinct limits exist. Two are equivalent to the unimproved 2PIEA and SI 2PIEA respectively, and the third is a new limit where the WI is satisfied but the phase transition is strongly first order and solutions can fail to exist depending on ξ. Further, these limits are disconnected from each other; there is no smooth way to interpolate from one to another. These results suggest that any potential advantages of SSI methods, and indeed any application of (S)SI methods out of equilibrium, must occur in finite volume.

  1. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious alternative choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived by using a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of its primitive stepsize control strategy.
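
    Exponential fitting of the trapezoidal rule is easy to demonstrate on the linear test equation (a minimal sketch, not the CREK1D algorithm itself; the weight is chosen so the generalized trapezoidal "theta" rule is exact for the decaying exponential):

        import numpy as np

        def exp_fitted_theta(z):
            # fitted weight so that the theta rule integrates y' = lam*y
            # exactly for step z = h*lam; z -> 0 recovers theta = 1/2
            if abs(z) < 1e-6:
                return 0.5 - z / 12.0          # series limit, avoids 0/0
            return (np.exp(z) - 1.0 - z) / (z * (np.exp(z) - 1.0))

        # one step of the fitted rule on a stiff scalar problem
        lam, h, y = -1000.0, 0.01, 1.0
        z = h * lam
        th = exp_fitted_theta(z)
        y_next = y * (1.0 + (1.0 - th) * z) / (1.0 - th * z)
        print(y_next, np.exp(z))               # agree to machine precision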

  2. Objective high Resolution Analysis over Complex Terrain with VERA

    NASA Astrophysics Data System (ADS)

    Mayer, D.; Steinacker, R.; Steiner, A.

    2012-04-01

    VERA (Vienna Enhanced Resolution Analysis) is a model-independent, high-resolution objective analysis of meteorological fields over complex terrain. The system consists of a specially developed quality-control procedure and a combination of an interpolation and a downscaling technique. Whereas the VERA-QC is presented at this conference in the contribution "VERA-QC, an approved Data Quality Control based on Self-Consistency" by Andrea Steiner, this presentation focuses on the method and characteristics of the VERA interpolation scheme, which computes grid-point values of a meteorological field from irregularly distributed observations and topography-related a priori knowledge. Over complex topography, meteorological fields are in general not smooth; the roughness induced by the topography can be explained physically, and this knowledge is used to define so-called Fingerprints (e.g. a thermal Fingerprint reproducing heating or cooling over mountainous terrain, or a dynamical Fingerprint reproducing the positive pressure perturbation on the windward side of a ridge) under idealized conditions. If the VERA algorithm recognizes the pattern of one or more Fingerprints at a few observation points, the corresponding patterns are used to downscale the meteorological information over a larger surrounding area. This technique achieves an analysis with a resolution much higher than that of the observational network. The interpolation of irregularly distributed stations to a regular grid (in space and time) is based on a variational principle applied to first- and second-order spatial and temporal derivatives. Mathematically, this can be formulated as a cost function equivalent to the penalty function of a thin-plate smoothing spline. After the analysis field has been separated into the Fingerprint components and the unexplained part, the smoothness requirement is applied to the latter component only (the Fingerprint field is rough by definition). To obtain the final analysis field, the unexplained component is combined with the weighted Fingerprint patterns. Operationally, VERA is run at our Department on an hourly basis, analyzing temperature, pressure, wind and precipitation observations for several domains around the world. VERA analyses are used for nowcasting, for building climate databases and for model verification. Furthermore, VERA may be of interest to anyone who has a PC but lacks access to a complex data assimilation system, which is generally available only at numerical weather prediction centers.
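
    A one-dimensional analogue of such a variational analysis fits on a few lines: gridded values are chosen to match scattered observations while a discrete second-difference penalty (the thin-plate / smoothing-spline term) enforces smoothness. This is a toy sketch, not the VERA implementation; all names, the nearest-node observation operator and the weight are illustrative:

        import numpy as np

        n = 50
        xg = np.linspace(0.0, 1.0, n)              # analysis grid
        xo = np.array([0.1, 0.35, 0.5, 0.8])       # station locations
        yo = np.array([2.0, 1.0, 1.5, 0.5])        # station observations

        H = np.zeros((xo.size, n))                 # nearest-node obs operator
        H[np.arange(xo.size), np.searchsorted(xg, xo)] = 1.0

        D2 = np.diff(np.eye(n), 2, axis=0)         # second-difference operator
        lam = 1e-3                                 # smoothness weight
        A = H.T @ H + lam * D2.T @ D2
        u = np.linalg.solve(A, H.T @ yo)           # smooth analysis on the grid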

  3. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
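
    The Epanechnikov kernel itself, and its use in continuizing a discrete score distribution, can be sketched directly (a generic illustration, not the KE machinery of the paper; the bandwidth h and the toy score distribution are assumptions):

        import numpy as np

        def epanechnikov(u):
            # K(u) = 0.75 (1 - u^2) on |u| <= 1, else 0; the compact support
            # limits the boundary bias of Gaussian smoothing
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

        def kernel_smooth(x_grid, x_obs, w_obs, h):
            # kernel-weighted density continuizing a discrete distribution
            u = (x_grid[:, None] - x_obs[None, :]) / h
            return (epanechnikov(u) * w_obs).sum(axis=1) / h

        scores = np.arange(0, 11)                    # discrete score points
        probs = np.random.dirichlet(np.ones(11))     # toy score probabilities
        grid = np.linspace(-1, 11, 200)
        dens = kernel_smooth(grid, scores, probs, h=1.0)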

  4. Measurement of intervertebral cervical motion by means of dynamic x-ray image processing and data interpolation.

    PubMed

    Bifulco, Paolo; Cesarelli, Mario; Romano, Maria; Fratini, Antonio; Sansone, Mario

    2013-01-01

    Accurate measurement of the intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion make accurate measurement in vivo difficult. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimates of intervertebral motion for the cervical segments were obtained by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting errors were about 0.2 degree for rotation and 0.2 mm for displacement.
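
    Smoothing-spline fitting of noisy tracked angles, including a smooth derivative, is a one-liner in SciPy (a toy sketch; the sampling rate, noise level and smoothing factor s are assumptions, not the study's values):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        t = np.linspace(0.0, 4.0, 80)                        # time, s
        angle = 10 * np.sin(0.5 * t) + np.random.normal(0, 0.3, t.size)  # deg

        spl = UnivariateSpline(t, angle, k=3, s=t.size * 0.3**2)  # smoothing spline
        rate = spl.derivative()(t)                           # smooth angular velocity
        print(spl(2.0), rate[40])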

  5. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  6. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach extends the original interpolating thin-plate spline approach and makes it possible to take landmark localization errors into account, which is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the anisotropic case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. The scheme is also general with respect to the image dimension and the order of smoothness of the underlying functional; optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
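
    The step from interpolating to approximating thin-plate splines amounts to adding a smoothing term that relaxes exact landmark matching. SciPy's legacy Rbf class exposes exactly this knob for the isotropic scalar case (a sketch under assumed landmarks; the paper's scheme additionally handles anisotropic errors, which plain Rbf does not):

        import numpy as np
        from scipy.interpolate import Rbf

        # source-image landmarks and their (noisy) target x-coordinates
        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
        dst_x = src[:, 0] + np.array([0.02, -0.01, 0.00, 0.03, -0.02])

        # smooth > 0 relaxes exact interpolation, tolerating landmark errors
        warp_x = Rbf(src[:, 0], src[:, 1], dst_x, function="thin_plate", smooth=1e-2)
        print(warp_x(0.25, 0.75))      # x-coordinate of the warped point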

  7. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
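
    The Poisson integral formula at the heart of the construction is easy to sketch numerically on the unit disk (a toy trapezoidal discretization, not the paper's discrete Poisson coordinates; names are illustrative):

        import numpy as np

        def poisson_extend(boundary_vals, r, theta):
            # harmonic extension into the unit disk via the Poisson integral:
            # u(r, theta) = (1/2pi) Int P_r(theta - phi) f(phi) dphi, with
            # P_r(t) = (1 - r^2) / (1 - 2 r cos t + r^2)
            m = boundary_vals.size
            phi = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
            P = (1 - r**2) / (1 - 2 * r * np.cos(theta - phi) + r**2)
            return (P * boundary_vals).mean()

        phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        f = np.cos(phi)                              # boundary data
        print(poisson_extend(f, 0.5, 0.0))           # harmonic r*cos(theta): 0.5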

  8. The concentration dependence of the galaxy–halo connection: Modeling assembly bias with abundance matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, Benjamin V.; Mao, Yao -Yuan; Becker, Matthew R.

    Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new 400 Mpc h⁻¹ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.

  9. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    PubMed Central

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images with a large watermarking capacity. The method comprises a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on the gravity center and chaos degree. The proposed interpolation method ensures that the invoice image retains its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep good connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves the watermarking capacity. PMID:25489606

  10. The concentration dependence of the galaxy–halo connection: Modeling assembly bias with abundance matching

    DOE PAGES

    Lehmann, Benjamin V.; Mao, Yao-Yuan; Becker, Matthew R.; ...

    2016-12-28

    Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively, this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new 400 Mpc h⁻¹ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.

  11. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  12. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  13. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. When this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonance, the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequencies, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. We report a comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes).
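
    A minimal sketch of the interpolation-error damage feature restricted to a single mode shape, assuming a cubic spline as the interpolant (the method itself does not mandate this choice): each sensor's value is predicted from all the others, and a localized rise of the prediction error flags damage.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def interpolation_error(x, shape):
          # At each interior sensor, interpolate the mode shape from all
          # *other* sensors and record the deviation at the left-out one.
          err = np.zeros_like(shape)
          for i in range(1, len(x) - 1):
              mask = np.arange(len(x)) != i
              err[i] = abs(CubicSpline(x[mask], shape[mask])(x[i]) - shape[i])
          return err

      x = np.linspace(0, 1, 25)
      mode = np.sin(np.pi * x)
      mode[12] *= 0.95                                 # small local smoothness loss
      print(np.argmax(interpolation_error(x, mode)))   # -> 12, the damaged location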

  14. Dusty gas with one fluid in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2014-05-01

    In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.

  15. BRIEF REPORT: A simple interpolation formula for the spectra of power-law and log potentials

    NASA Astrophysics Data System (ADS)

    Hall, Richard L.

    2000-06-01

    Non-relativistic potential models are considered of the pure power V(r) = sgn(q) r^q and logarithmic V(r) = ln(r) types. It is shown that, from the spectral viewpoint, these potentials are actually in a single family. The log spectra can be obtained from the power spectra by the limit q → 0 taken in a smooth representation P_nl(q) for the eigenvalues E_nl(q). A simple approximation formula is developed which yields the first 30 eigenvalues with an error less than 0.04%.
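
    The smooth q → 0 connection can be checked numerically at the level of the potentials themselves: (r^q - 1)/q is a positive affine rescaling of sgn(q) r^q and converges to ln(r). A tiny illustrative check:

      import numpy as np

      # (r**q - 1)/q differs from sgn(q)*r**q only by a positive affine map,
      # so the log potential (and its spectrum) emerges smoothly as q -> 0.
      r = np.linspace(0.5, 3.0, 5)
      for q in (0.5, 0.1, 0.01, 0.001):
          print(q, np.max(np.abs((r**q - 1) / q - np.log(r))))  # error shrinks with q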

  16. An improved transmutation method for quantitative determination of the components in multicomponent overlapping chromatograms.

    PubMed

    Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong

    2004-06-01

    An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method caused by the oscillation generated in the transmutation process, two techniques--wavelet transform smoothing and the cubic spline interpolation for reducing data points--were adopted, and a new criterion was also developed. By using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both the simulated and experimental overlapping chromatograms is successfully obtained.
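
    A minimal sketch of the two techniques on a synthetic overlapping-peak chromatogram, assuming PyWavelets for the wavelet-transform smoothing and SciPy for the cubic-spline point reduction; the wavelet, threshold, and thinning factor are illustrative choices, not the paper's.

      import numpy as np
      import pywt
      from scipy.interpolate import CubicSpline

      t = np.linspace(0, 10, 1024)
      signal = np.exp(-((t - 4) / 0.5) ** 2) + 0.8 * np.exp(-((t - 5) / 0.6) ** 2)
      noisy = signal + 0.02 * np.random.default_rng(1).standard_normal(t.size)

      # 1) Wavelet-transform smoothing: soft-threshold the detail coefficients.
      coeffs = pywt.wavedec(noisy, 'db4', level=5)
      coeffs[1:] = [pywt.threshold(c, 0.05, mode='soft') for c in coeffs[1:]]
      smoothed = pywt.waverec(coeffs, 'db4')[:t.size]

      # 2) Cubic-spline reduction: keep every 8th point, reconstruct the rest.
      idx = np.arange(0, t.size, 8)
      reduced = CubicSpline(t[idx], smoothed[idx])(t)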

  17. Quantum random walks on congested lattices and the effect of dephasing.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Rohde, Peter P

    2016-01-27

    We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker's direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices.
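
    A Monte Carlo sketch of the interpolation, assuming one simple unraveling of dephasing (a projective coin measurement applied with probability p per step): p = 0 reproduces the coherent Hadamard walk, p = 1 an effectively classical random walk.

      import numpy as np

      def walk_with_dephasing(steps=60, p=0.2, trials=200, seed=0):
          # 1D Hadamard walk on a line; with probability p after each step
          # the coin is measured, destroying coherence.
          rng = np.random.default_rng(seed)
          n = 2 * steps + 1
          H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          dist = np.zeros(n)
          for _ in range(trials):
              psi = np.zeros((n, 2), dtype=complex)
              psi[steps, 0] = 1.0                       # walker starts at the origin
              for _ in range(steps):
                  psi = psi @ H.T                       # coin flip
                  new = np.zeros_like(psi)
                  new[1:, 0] = psi[:-1, 0]              # coin 0 steps right
                  new[:-1, 1] = psi[1:, 1]              # coin 1 steps left
                  psi = new
                  if rng.random() < p:                  # dephasing event
                      p0 = float((np.abs(psi[:, 0]) ** 2).sum())
                      c = 0 if rng.random() < p0 else 1
                      psi[:, 1 - c] = 0                 # collapse the coin
                      psi /= np.linalg.norm(psi)
              dist += (np.abs(psi) ** 2).sum(axis=1)
          return dist / trials                          # averaged position distribution

      # p=0 yields a wide, ballistic distribution; p=1 a narrow, diffusive one.
      quantum, classical = walk_with_dephasing(p=0.0), walk_with_dephasing(p=1.0)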

  18. Non-perturbative String Theory from Water Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyer, Ramakrishnan; Johnson, Clifford V. (Southern California U.)

    2012-06-14

    We use a combination of a 't Hooft limit and numerical methods to find non-perturbative solutions of exactly solvable string theories, showing that perturbative solutions in different asymptotic regimes are connected by smooth interpolating functions. Our earlier perturbative work showed that a large class of minimal string theories arise as special limits of a Painleve IV hierarchy of string equations that can be derived by a similarity reduction of the dispersive water wave hierarchy of differential equations. The hierarchy of string equations contains new perturbative solutions, some of which were conjectured to be the type IIA and IIB string theories coupled to (4, 4k - 2) superconformal minimal models of type (A, D). Our present paper shows that these new theories have smooth non-perturbative extensions. We also find evidence for putative new string theories that were not apparent in the perturbative analysis.

  19. Estimating index of refraction from polarimetric hyperspectral imaging measurements.

    PubMed

    Martin, Jacob A; Gross, Kevin C

    2016-08-08

    Current material identification techniques rely on estimating reflectivity or emissivity which vary with viewing angle. As off-nadir remote sensing platforms become increasingly prevalent, techniques robust to changing viewing geometries are desired. A technique leveraging polarimetric hyperspectral imaging (P-HSI), to estimate complex index of refraction, N̂(ν̃), an inherent material property, is presented. The imaginary component of N̂(ν̃) is modeled using a small number of "knot" points and interpolation at in-between frequencies ν̃. The real component is derived via the Kramers-Kronig relationship. P-HSI measurements of blackbody radiation scattered off of a smooth quartz window show that N̂(ν̃) can be retrieved to within 0.08 RMS error between 875 cm⁻¹ ≤ ν̃ ≤ 1250 cm⁻¹. P-HSI emission measurements of a heated smooth Pyrex beaker also enable successful N̂(ν̃) estimates, which are also invariant to object temperature.
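
    A sketch of the two modeling ingredients, with hypothetical knot values: the imaginary index k(ν̃) is interpolated from a few knots (here with a shape-preserving PCHIP interpolant), and the real part follows from a numerically evaluated, band-truncated Kramers-Kronig relation; the truncation and the n_inf offset are simplifying assumptions.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      knots_nu = np.array([800., 950., 1050., 1150., 1300.])  # wavenumber, cm^-1
      knots_k = np.array([0.02, 0.45, 0.10, 0.30, 0.02])      # hypothetical k values
      nu = np.linspace(800., 1300., 2001)
      k = PchipInterpolator(knots_nu, knots_k)(nu)

      def kk_real_part(nu, k, n_inf=1.5):
          # n(w) = n_inf + (2/pi) PV∫ w' k(w') / (w'^2 - w^2) dw', with the
          # principal value handled crudely by skipping the singular sample
          # and the integral truncated to the measured band.
          n = np.full_like(nu, n_inf)
          dw = nu[1] - nu[0]
          for i, w in enumerate(nu):
              m = np.arange(nu.size) != i
              n[i] += (2 / np.pi) * np.sum(nu[m] * k[m] / (nu[m] ** 2 - w ** 2)) * dw
          return n

      n_real = kk_real_part(nu, k)   # real index consistent with the modeled k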

  20. Essentially Non-Oscillatory and Weighted Essentially Non-Oscillatory Schemes for Hyperbolic Conservation Laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1997-01-01

    In these lecture notes we describe the construction, analysis, and application of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes for hyperbolic conservation laws and related Hamilton-Jacobi equations. ENO and WENO schemes are high order accurate finite difference schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation procedure as much as possible. ENO and WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures, such as compressible turbulence simulations and aeroacoustics. These lecture notes are basically self-contained. It is our hope that with these notes and with the help of the quoted references, the reader can understand the algorithms and code them up for applications.
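
    The stencil-choosing idea at the heart of ENO can be sketched in a few lines: grow the interpolation stencil one cell at a time toward the side whose differences are smaller, so the stencil avoids crossing a discontinuity. A minimal uniform-grid sketch (undivided differences, interior indices assumed), not a full ENO/WENO scheme:

      import numpy as np

      def eno_stencil(f, i, order=3):
          # Start from cell i; at each stage compare the k-th differences of
          # the two candidate extensions and grow toward the smoother side.
          left = i
          for k in range(1, order):
              dk = np.diff(f, n=k)                 # dk[j] spans f[j..j+k]
              if abs(dk[left - 1]) < abs(dk[left]):
                  left -= 1
          return left                              # stencil is f[left : left+order]

      f = np.where(np.arange(20) < 10, 1.0, 0.0)   # jump between cells 9 and 10
      print(eno_stencil(f, 9), eno_stencil(f, 10)) # 7 10: neither stencil crosses it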

  1. Using smooth sheets to describe groundfish habitat in Alaskan waters, with specific application to two flatfishes

    USGS Publications Warehouse

    Zimmermann, Mark; Reid, Jane A.; Golden, Nadine

    2016-01-01

    In this analysis we demonstrate how preferred fish habitat can be predicted and mapped for juveniles of two Alaskan groundfish species – Pacific halibut (Hippoglossus stenolepis) and flathead sole (Hippoglossoides elassodon) – at five sites (Kiliuda Bay, Izhut Bay, Port Dick, Aialik Bay, and the Barren Islands) in the central Gulf of Alaska. The method involves using geographic information system (GIS) software to extract appropriate information from National Ocean Service (NOS) smooth sheets that are available from NGDC (the National Geophysical Data Center). These smooth sheets are highly detailed charts that include more soundings, substrates, shoreline and feature information than the more commonly-known navigational charts. By bringing the information from smooth sheets into a GIS, a variety of surfaces, such as depth, slope, rugosity and mean grain size were interpolated into raster surfaces. Other measurements such as site openness, shoreline length, proportion of bay that is near shore, areas of rocky reefs and kelp beds, water volumes, surface areas and vertical cross-sections were also made in order to quantify differences between the study sites. Proper GIS processing also allows linking the smooth sheets to other data sets, such as orthographic satellite photographs, topographic maps and precipitation estimates from which watersheds and runoff can be derived. This same methodology can be applied to larger areas, taking advantage of these free data sets to describe predicted groundfish essential fish habitat (EFH) in Alaskan waters.

  2. Using smooth sheets to describe groundfish habitat in Alaskan waters, with specific application to two flatfishes

    NASA Astrophysics Data System (ADS)

    Zimmermann, Mark; Reid, Jane A.; Golden, Nadine

    2016-10-01

    In this analysis we demonstrate how preferred fish habitat can be predicted and mapped for juveniles of two Alaskan groundfish species - Pacific halibut (Hippoglossus stenolepis) and flathead sole (Hippoglossoides elassodon) - at five sites (Kiliuda Bay, Izhut Bay, Port Dick, Aialik Bay, and the Barren Islands) in the central Gulf of Alaska. The method involves using geographic information system (GIS) software to extract appropriate information from National Ocean Service (NOS) smooth sheets that are available from NGDC (the National Geophysical Data Center). These smooth sheets are highly detailed charts that include more soundings, substrates, shoreline and feature information than the more commonly-known navigational charts. By bringing the information from smooth sheets into a GIS, a variety of surfaces, such as depth, slope, rugosity and mean grain size were interpolated into raster surfaces. Other measurements such as site openness, shoreline length, proportion of bay that is near shore, areas of rocky reefs and kelp beds, water volumes, surface areas and vertical cross-sections were also made in order to quantify differences between the study sites. Proper GIS processing also allows linking the smooth sheets to other data sets, such as orthographic satellite photographs, topographic maps and precipitation estimates from which watersheds and runoff can be derived. This same methodology can be applied to larger areas, taking advantage of these free data sets to describe predicted groundfish essential fish habitat (EFH) in Alaskan waters.

  3. Phase retrieval in digital speckle pattern interferometry by use of a smoothed space-frequency distribution.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2003-12-10

    We evaluate the use of a smoothed space-frequency distribution (SSFD) to retrieve optical phase maps in digital speckle pattern interferometry (DSPI). The performance of this method is tested by use of computer-simulated DSPI fringes. Phase gradients are found along a pixel path from a single DSPI image, and the phase map is finally determined by integration. This technique does not need the application of a phase unwrapping algorithm or the introduction of carrier fringes in the interferometer. It is shown that a Wigner-Ville distribution with a smoothing Gaussian kernel gives more accurate results than methods based on the continuous wavelet transform. We also discuss the influence of filtering on smoothing of the DSPI fringes and some additional limitations that emerge when this technique is applied. The performance of the SSFD method for processing experimental data is then illustrated.

  4. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.

  5. Utilizing the long-term pavement performance database in evaluating the effectiveness of pavement smoothness

    DOT National Transportation Integrated Search

    2002-03-01

    This research project was performed in two phases. The first phase concentrated on previous related literature review and a nationwide survey to find out the current practices of smoothness specifications. The second phase dealt with collecting and a...

  6. The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators

    NASA Astrophysics Data System (ADS)

    Ahmedov, Anvarjon

    2018-03-01

    In the present research we investigate problems concerning the almost-everywhere convergence of multiple Fourier series summed over elliptic levels in the Liouville classes. Sufficient conditions for almost-everywhere convergence, among the most difficult problems in harmonic analysis, are obtained. Methods of approximation by multiple Fourier series summed over elliptic curves are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves complicated calculations that depend on the functional structure of the classes of functions. The main idea in proving the almost-everywhere convergence of the eigenfunction expansions in interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for families of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost-everywhere convergence of multiple Fourier series by elliptic summation methods is established. Considering multiple Fourier series as eigenfunction expansions of differential operators helps translate functional properties (for example, smoothness) of the Liouville classes into the Fourier coefficients of the functions being expanded. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective for solving boundary problems with periodic boundary conditions that occur in the spectral theory of differential operators. The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of square-integrable functions by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated; the latter is applied to obtain an estimate for the maximal operator in the classes of Liouville.

  7. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality; in severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
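
    A compact sketch of the six steps, assuming scikit-image's radon/iradon pair as the projector and a bilateral filter as the edge-preserving smoother; the threshold, filter, and array shapes are illustrative assumptions rather than the paper's exact choices.

      import numpy as np
      from skimage.transform import radon, iradon
      from skimage.restoration import denoise_bilateral

      def mar_sketch(sinogram, theta, metal_threshold):
          # sinogram: (detectors, angles) non-negative projection data on a
          # grid consistent with skimage's radon/iradon conventions.
          recon = iradon(sinogram, theta=theta)                # 1) reconstruction
          metal = recon > metal_threshold                      # 2) metal segmentation
          trace = radon(metal.astype(float), theta=theta) > 0  # 3) forward projection
          corrected = sinogram.copy()
          for j in range(sinogram.shape[1]):                   # 4) interpolation
              bad = trace[:, j]
              if bad.any() and not bad.all():
                  idx = np.arange(sinogram.shape[0])
                  corrected[bad, j] = np.interp(idx[bad], idx[~bad], sinogram[~bad, j])
          corrected = denoise_bilateral(corrected)             # 5) edge-preserving filter
          out = iradon(corrected, theta=theta)                 # 6) new reconstruction
          out[metal] = recon[metal]                            # re-insert the metal
          return out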

  8. The moving-least-squares-particle hydrodynamics method (MLSPH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
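
    A 1D sketch of the MLS interpolants of Lancaster and Salkauskas used here: at every evaluation point a separate locally weighted polynomial fit is performed; the Gaussian weight and its width h are illustrative choices.

      import numpy as np

      def mls_eval(x_eval, x_pts, f_pts, h=0.4, degree=1):
          # Moving least squares: a fresh weighted least-squares polynomial
          # fit at each evaluation point, weights decaying away from it.
          out = np.empty(len(x_eval))
          for k, x in enumerate(x_eval):
              sw = np.exp(-0.5 * ((x_pts - x) / h) ** 2)   # sqrt of the weights
              A = np.vander(x_pts - x, degree + 1, increasing=True)
              coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * f_pts, rcond=None)
              out[k] = coef[0]                             # fitted value at x
          return out

      x_pts = np.sort(np.random.default_rng(2).uniform(0, 2 * np.pi, 40))
      approx = mls_eval(np.linspace(0.5, 5.5, 7), x_pts, np.sin(x_pts))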

  9. Evaluation of the sparse coding super-resolution method for improving image quality of up-sampled images in computed tomography

    NASA Astrophysics Data System (ADS)

    Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki

    2017-02-01

    As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input; these coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolations, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNRs and SSIMs between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated high-resolution images with sharpness, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference among the CT, CXR, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially high image quality in the extended images.

  10. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

    In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, humidity, etc., vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary for characterizing and predicting hydrologic response to these inputs. This is always true, but it is most critical during large storms, when the input to the stream system due to rain and snowmelt creates the potential for flooding. Besides such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement for the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing so to support an accurate determination of precipitation phase for assessing catchment hydrologic response to the storm.
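
    A minimal sketch of elevationally detrended kriging under assumed covariance parameters: regress the observations on station elevation, simple-krige the residuals, then restore the trend at each grid cell's elevation. The exponential covariance and all numbers are illustrative.

      import numpy as np

      def detrended_kriging(xy, elev, obs, grid_xy, grid_elev,
                            sill=1.0, corr_len=20e3, nugget=0.05):
          # 1) elevation trend (lapse rate) by least squares
          G = np.column_stack([np.ones_like(elev), elev])
          beta, *_ = np.linalg.lstsq(G, obs, rcond=None)
          resid = obs - G @ beta
          # 2) simple kriging of the residuals (assumed exponential covariance)
          cov = lambda h: (sill - nugget) * np.exp(-h / corr_len)
          d_ss = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          C = cov(d_ss) + nugget * np.eye(len(xy))
          d_gs = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=-1)
          resid_hat = np.linalg.solve(C, cov(d_gs).T).T @ resid
          return beta[0] + beta[1] * grid_elev + resid_hat   # 3) trend restored

      rng = np.random.default_rng(3)
      xy = rng.uniform(0, 50e3, (30, 2))                  # station coordinates (m)
      elev = rng.uniform(500, 2500, 30)                   # station elevations (m)
      obs = 2.0 + 0.004 * elev + rng.normal(0, 0.5, 30)   # toy hourly precip (mm)
      print(detrended_kriging(xy, elev, obs, xy[:5], elev[:5]))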

  11. UltraColor: a new gamut-mapping strategy

    NASA Astrophysics Data System (ADS)

    Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.

    1995-04-01

    Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images, but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes, but may give inferior results for business graphics images because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics images may distort the color reproduction of skintones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in various regions of color space, while providing a mechanism for smooth transitions between the different regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function; (2) the color mapping for the remainder of the color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.

  12. Interpolating the Coulomb phase of little string theory

    DOE PAGES

    Lin, Ying-Hsuan; Shao, Shu-Heng; Wang, Yifan; ...

    2015-12-03

    We study up to 8-derivative terms in the Coulomb branch effective action of (1,1) little string theory, by collecting results of 4-gluon scattering amplitudes from both perturbative 6D super-Yang-Mills theory up to 4-loop order, and tree-level double scaled little string theory (DSLST). In previous work we have matched the 6-derivative term from the 6D gauge theory to DSLST, indicating that this term is protected on the entire Coulomb branch. The 8-derivative term, on the other hand, is unprotected. In this paper we compute the 8-derivative term by interpolating from the two limits, near the origin and near infinity on the Coulomb branch, numerically from SU(k) SYM and DSLST respectively, for k=2,3,4,5. We discuss the implication of this result on the UV completion of 6D SYM as well as the strong coupling completion of DSLST. As a result, we also comment on analogous interpolating functions in the Coulomb phase of circle-compactified (2,0) little string theory.

  13. A projection method for coupling two-phase VOF and fluid structure interaction simulations

    NASA Astrophysics Data System (ADS)

    Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro

    2018-02-01

    The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.

  14. Attenuation correction of emission PET images with average CT: Interpolation from breath-hold CT

    NASA Astrophysics Data System (ADS)

    Huang, Tzung-Chi; Zhang, Geoffrey; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Wang, Shyh-Jen; Wu, Tung-Hsin

    2011-05-01

    Misregistration resulting from the difference in temporal resolution between PET and CT scans occurs frequently in PET/CT imaging, causing distortion in tumor quantification in PET. Respiration cine average CT (CACT) for PET attenuation correction has been reported in several papers to reduce this misalignment effectively. However, the radiation dose to the patient from a four-dimensional CT scan is relatively high. In this study, we propose a method to interpolate respiratory CT images over a respiratory cycle from inhalation and exhalation breath-hold CT images, and to use the average CT from the generated CT set for PET attenuation correction; the radiation dose to the patient is reduced using this method. Six cancer patients with various lesion sites underwent routine free-breath helical CT (HCT), respiration CACT, interpolated average CT (IACT), and 18F-FDG PET. Deformable image registration was used to interpolate the middle phases of a respiratory cycle based on the end-inspiration and end-expiration breath-hold CT scans. The average CT image was calculated from the eight interpolated CT image sets of middle respiratory phases and the two original inspiration and expiration CT images. The PET images were then reconstructed with attenuation correction based on each of the three methods: HCT, CACT, and IACT. Misalignment in the PET images was reduced by using either CACT or IACT for attenuation correction. The difference in standard uptake value (SUV) from tumors in the PET images was most significant between the use of HCT and CACT, and least significant between the use of CACT and IACT. Besides providing an improvement in tumor quantification similar to that of CACT, using IACT for PET attenuation correction reduces the radiation dose to the patient.

  15. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose the new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose the backpropagation neural networks (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolations. We test the introduced methods on the computer-simulated speckle ESPI fringe patterns and experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of networks such as the number of neurons.

  16. An efficient fully-implicit multislope MUSCL method for multiphase flow with gravity in discrete fractured media

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-06-01

    The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintain robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to the monoslope methods in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured system are carried out to demonstrate the efficiency and robustness of the numerical model.

  17. Signal flow inside the tunnel of Corti

    NASA Astrophysics Data System (ADS)

    de Boer, Egbert; Chen, Fangyi; Zha, Dingjun; Grosh, Karl; Nankali, Amir; Nuttall, Alfred L.

    2018-05-01

    With the advent of Optical Coherence Tomography (OCT), a variation of the standard laser-interferometer technique, vibrations of various points inside the cochlea can be measured separately and concurrently. In this work we measured vibrations of the basilar membrane (BM) and the Reticular Lamina (RL) in the cochlea of the guinea pig. Stimulus tones had frequencies in the range from 10 to 25 kHz; they were generated and measured with a spacing of 250 Hz. By smoothing and interpolation the spacing was reduced to 50 Hz. We confirmed earlier findings in that in viable animals the responses at the RL are generally larger than those of the BM and have smaller phase delays. Moreover, these differences depend little on the level of stimulation. Our main hypothesis is: stimulation of the stapes primarily excites the structures in the upper (RL) part of the Organ of Corti (OoC) channel. Subsequently, movements of the RL cause movements of the fluid in the OoC channel, which in turn moves the BM. Computation of the sound field generated by the RL yielded results that agree very well with the data. These results thus confirm the hypothesis.

  18. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel-time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. The code also offers the possibility of including water-layer multiples in the modeling, which is useful whenever these phases can be followed to greater offsets than the primary ones. This increases the amount of information available from the data, yielding more extensive and better constrained velocity and geometry models. We will present synthetic results from benchmark tests for the forward and inverse problems, as well as from more complex inversion tests for different inversion possibilities, such as one with travel times from refracted waves only (i.e. first arrivals) and one with travel times from both refracted and reflected waves. In addition, we will show some preliminary results for the inversion of real 3-D OBS data acquired off-shore Ecuador and Colombia.
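
    The velocity lookup described above reduces to a textbook trilinear blend of the eight corner values of the containing cell; a sketch assuming a unit-spaced, axis-aligned mesh (the actual code shears the mesh to follow topography):

      import numpy as np

      def trilinear(grid, x, y, z):
          # Velocity at an intermediate location inside the pseudo-cubic cell
          # that contains (x, y, z); grid holds nodal values, unit spacing.
          i, j, k = int(x), int(y), int(z)
          u, v, w = x - i, y - j, z - k
          c = grid[i:i + 2, j:j + 2, k:k + 2]    # the 8 cell-corner values
          return (c[0, 0, 0] * (1-u) * (1-v) * (1-w) + c[1, 0, 0] * u * (1-v) * (1-w)
                  + c[0, 1, 0] * (1-u) * v * (1-w) + c[0, 0, 1] * (1-u) * (1-v) * w
                  + c[1, 1, 0] * u * v * (1-w) + c[1, 0, 1] * u * (1-v) * w
                  + c[0, 1, 1] * (1-u) * v * w + c[1, 1, 1] * u * v * w)

      grid = np.random.default_rng(4).random((10, 10, 10))   # toy velocity mesh
      print(trilinear(grid, 3.2, 5.7, 8.1))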

  19. Small black holes in global AdS spacetime

    NASA Astrophysics Data System (ADS)

    Jokela, Niko; Pönni, Arttu; Vuorinen, Aleksi

    2016-04-01

    We study the properties of two-point functions and quasinormal modes in a strongly coupled field theory holographically dual to a small black hole in global anti-de Sitter spacetime. Our results are seen to smoothly interpolate between known limits corresponding to large black holes and thermal AdS space, demonstrating that the Son-Starinets prescription works even when there is no black hole in the spacetime. Omitting issues related to the internal space, the results can be given a field theory interpretation in terms of the microcanonical ensemble, which provides access to energy densities forbidden in the canonical description.

  20. Quantum random walks on congested lattices and the effect of dephasing

    PubMed Central

    Motes, Keith R.; Gilchrist, Alexei; Rohde, Peter P.

    2016-01-01

    We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker’s direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices. PMID:26812924

  1. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area.

  2. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area. PMID:28264424

  3. A proxy for variance in dense matching over homogeneous terrain

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of use. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of their variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had very low variance in intensity, the topography was reconstructed entirely, indicating that interpolation was applied to a large extent. To assess this amount of interpolation, processing is done with imagery that is gradually downgraded. Linking these products with the variance indicator (SNR) yields a quantitative relation between contrast and the influence of interpolation on the topography estimate. Our proposed method is capable of providing a clear indication of variance in reconstructions from UAV photogrammetry. This indicator has a practical advantage, as it can be computed before the computationally intensive matching phase, so an acquired dataset can be tested in the field. If an area with too little contrast is identified, camera settings can be adjusted for a new flight, or additional measurements can be made through traditional means.

  4. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
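
    A sketch of the thresholding stage, assuming SciPy's morphological closing and scikit-image's multi-level Otsu; which intensity class marks candidate roughness is an assumption here, and the final density/elevation filtering step is omitted.

      import numpy as np
      from scipy.ndimage import grey_closing
      from skimage.filters import threshold_multiotsu

      def candidate_roughness(intensity_raster, classes=3):
          # Close small gaps in the interpolated intensity raster, split it
          # into intensity classes with multi-level Otsu, and keep the
          # lowest-intensity class as candidate road-roughness regions.
          closed = grey_closing(intensity_raster, size=(5, 5))
          thresholds = threshold_multiotsu(closed, classes=classes)
          labels = np.digitize(closed, bins=thresholds)
          return labels == 0        # boolean mask of candidate regions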

  5. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates.

  6. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
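
    For illustration, a cubic Hermite interpolant matches both values and first derivatives at its nodes, which is the mechanism behind the C1-smoothness mentioned above; a minimal one-dimensional sketch using SciPy (the enriched 2D contact formulation itself is considerably more involved):

      import numpy as np
      from scipy.interpolate import CubicHermiteSpline

      # Nodal positions, values and slopes (illustrative data).
      x = np.array([0.0, 1.0, 2.0])
      y = np.array([0.0, 1.0, 0.5])
      dydx = np.array([1.0, 0.0, -0.5])

      surf = CubicHermiteSpline(x, y, dydx)

      # The interpolant and its first derivative are continuous across the
      # nodes (C1); the second derivative may jump there.
      xs = np.linspace(0.0, 2.0, 9)
      print(surf(xs))
      print(surf.derivative()(xs))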

  7. Edge-based image restoration.

    PubMed

    Rareş, Andrei; Reinders, Marcel J T; Biemond, Jan

    2005-10-01

    In this paper, we propose a new image inpainting algorithm that relies on explicit edge information. The edge information is used both for the reconstruction of a skeleton image structure in the missing areas and for guiding the interpolation that follows. The structure reconstruction part exploits different properties of the edges, such as the colors of the objects they separate, an estimate of how well one edge continues into another, and the spatial order of the edges with respect to each other. In order to preserve both sharp and smooth edges, the areas delimited by the recovered structure are interpolated independently, and the process is guided by the direction of the nearby edges. The novelty of our approach lies primarily in explicitly exploiting the constraint enforced by the numerical interpretation of the sequential order of edges, as well as in the pixel filling method, which takes into account the proximity and direction of edges. Extensive experiments are carried out in order to validate and compare the algorithm both quantitatively and qualitatively. They show the advantages of our algorithm and its ready application to real-world cases.

  8. EOS MLS Level 1B Data Processing Software. Version 3

    NASA Technical Reports Server (NTRS)

    Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina

    2011-01-01

    This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostic outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) are calibrated by the software, and the filter channel space reference measurements are interpolated onto the times of each limb measurement, with the interpolated values then differenced from the measurements. Filter channel calibration target measurements are interpolated onto the times of each limb measurement and are used to compute radiometric gain. The total signal power is determined and analyzed for each digital autocorrelator spectrometer (DACS) during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and separately estimates the smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.
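
    The interpolate-and-difference step can be sketched with plain linear interpolation; the variable names and values below are invented for illustration, since the actual Level 1 code implements its own calibration model:

      import numpy as np

      # Times and values of the space reference measurements (illustrative).
      t_ref = np.array([0.0, 10.0, 20.0, 30.0])
      ref = np.array([100.2, 100.6, 100.1, 100.4])

      # Times of the limb measurements and their raw signals.
      t_limb = np.array([2.0, 7.0, 13.0, 26.0])
      limb = np.array([340.0, 352.0, 349.0, 356.0])

      # Interpolate the reference onto each limb time, then difference.
      ref_at_limb = np.interp(t_limb, t_ref, ref)
      print(limb - ref_at_limb)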

  9. TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE

    NASA Technical Reports Server (NTRS)

    Vu, B. T.

    1994-01-01

    TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm, and the elliptic grid generation is established by solving a partial differential equation (PDE). Non-uniform grid distributions are generated using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration, so that they can interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4, or the SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
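
    One common form of hyperbolic tangent stretching (a sketch; not necessarily the exact formula used in TDIGG) maps a uniform index onto a non-uniform point distribution on [0, 1], with a parameter delta controlling how strongly points cluster near one end:

      import numpy as np

      def tanh_stretch(n, delta):
          """Distribute n+1 points on [0, 1]; larger delta clusters
          points more tightly near x = 0."""
          j = np.arange(n + 1) / n
          return 1.0 + np.tanh(delta * (j - 1.0)) / np.tanh(delta)

      print(tanh_stretch(10, 2.5))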

  10. Solitons and black holes in non-Abelian Einstein-Born-Infeld theory

    NASA Astrophysics Data System (ADS)

    Dyadichev, V. V.; Gal'tsov, D. V.

    2000-08-01

    Recently it was shown that the Born-Infeld modification of the quadratic Yang-Mills action gives rise to classical particle-like solutions in flat space which have a striking similarity with the Bartnik-McKinnon solutions obtained within the gravity-coupled Yang-Mills theory. We show that both families of solutions are continuously related within the framework of the Einstein-Born-Infeld theory via interpolating sequences of parameters. We also investigate the internal structure of the associated black holes and find that the Born-Infeld non-linearity drastically changes the black hole interior typical of the usual quadratic Yang-Mills theory. In the latter case a generic solution exhibits violent metric oscillations near the singularity. In the Born-Infeld case the generic interior solution is smooth, the metric tends to the standard Schwarzschild-type singularity, and we did not observe internal horizons. Smoothing of the 'violent' EYM singularity may be interpreted as a result of non-gravitational quantum effects.

  11. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and second-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most usual. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
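
    A minimal sketch of such an even-length, linear-phase kernel: sampling the Keys cubic (with the usual parameter a = -1/2) at half-integer offsets gives a symmetric 4-tap filter that interpolates exactly midway between samples, i.e. shifts the grid by half a pixel with no phase error:

      import numpy as np

      def keys_cubic(x, a=-0.5):
          x = abs(x)
          if x <= 1:
              return (a + 2) * x**3 - (a + 3) * x**2 + 1
          if x < 2:
              return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
          return 0.0

      # Taps at offsets -1.5, -0.5, +0.5, +1.5 from the half-pixel target.
      taps = np.array([keys_cubic(o) for o in (-1.5, -0.5, 0.5, 1.5)])
      print(taps)                # [-0.0625  0.5625  0.5625 -0.0625]

      signal = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])   # x**2 samples
      print(np.convolve(signal, taps, mode="valid"))        # 2.25 6.25 12.25

    Cubic kernels reproduce quadratics exactly, which is why the half-sample values of x squared come out exact in this toy run.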

  12. Study of Interpolated Timing Recovery Phase-Locked Loop with Linearly Constrained Adaptive Prefilter for Higher-Density Optical Disc

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu

    2009-03-01

    A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. The LCAF is placed before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of the phase error calculation by using an adaptively equalized partial response (PR) signal. The coefficient update of an asynchronously sampled adaptive FIR filter with a least-mean-square (LMS) algorithm is constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation, and characterized their properties. We then designed the read channel system of the ITR PLL with an LCAF model on an FPGA board for experiments. Results show that the LCAF improves the tilt margins of 30 gigabyte (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with sufficient LMS adaptation stability.
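
    The constrained update itself is compact: an ordinary LMS step followed by a projection. In this sketch the projection matrix P is taken as given, since its construction for BD systems is the subject of the paper; with P equal to the identity the update reduces to plain LMS:

      import numpy as np

      def constrained_lms_step(w, x, d, P, mu=0.01):
          """One linearly constrained LMS coefficient update."""
          e = d - w @ x             # equalization error
          w_new = w + mu * e * x    # unconstrained LMS step
          return P @ w_new          # project onto the constraint subspace

      # Toy run with the identity projection (i.e. unconstrained LMS).
      N = 5
      w = np.zeros(N)
      rng = np.random.default_rng(0)
      for _ in range(500):
          x = rng.standard_normal(N)
          w = constrained_lms_step(w, x, d=x[2], P=np.eye(N))
      print(np.round(w, 2))         # approaches [0 0 1 0 0]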

  13. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.
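
    As an illustration only (the paper's actual interpolation scheme and parameter choices are not reproduced here), a density-based scheme along these lines could interpolate the Young's modulus with a power law while interpolating the Poisson's ratio linearly, so that the two properties follow different parameters:

      import numpy as np

      def interpolate_properties(rho, E1, E2, nu1, nu2, p=3.0):
          """Hypothetical two-phase interpolation: rho in [0, 1] mixes
          phase 1 (rho = 0) and phase 2 (rho = 1); stiffness follows a
          SIMP-style power law, Poisson's ratio a linear rule."""
          E = E1 + rho**p * (E2 - E1)
          nu = nu1 + rho * (nu2 - nu1)
          return E, nu

      # Equal-stiffness phases, one conventional and one auxetic.
      rho = np.linspace(0.0, 1.0, 5)
      print(interpolate_properties(rho, 1.0, 1.0, 0.3, -0.5))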

  14. The effect of bathymetric filtering on nearshore process model results

    USGS Publications Warehouse

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.

  15. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  16. Performance comparison of the Prophecy (forecasting) Algorithm in FFT form for unseen feature and time-series prediction

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James

    2013-06-01

    We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.

  17. Uniform high order spectral methods for one and two dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Shu, Chi-Wang

    1991-01-01

    Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.

  18. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  19. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.

  20. Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Shelhamer, M.

    2001-01-01

    It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.

  1. Elimination of numerical diffusion in 1 - phase and 2 - phase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajamaeki, M.

    1997-07-01

    The new hydraulics solution method PLIM (Piecewise Linear Interpolation Method) is capable of avoiding excessive errors, numerical diffusion, and numerical dispersion. The hydraulics solver CFDPLIM uses PLIM and solves the time-dependent one-dimensional flow equations in network geometry. An example is given for 1-phase flow in a case where thermal-hydraulics and reactor kinetics are strongly coupled. Another example concerns oscillations in 2-phase flow. Neither of these example computations is possible with conventional methods.

  2. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. The technique is applied to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter proved to be significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
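
    A rough sketch of the two ingredients named in the title, applied to a 1-D scan line (the weighted least squares stage of the full filter is omitted):

      import numpy as np
      from scipy.signal import savgol_filter
      from scipy.interpolate import CubicSpline

      rng = np.random.default_rng(1)
      x = np.arange(100)
      clean = np.sin(x / 8.0)
      noisy = clean + 0.3 * rng.standard_normal(x.size)

      # Savitzky-Golay smoothing: local polynomial least-squares fit.
      smoothed = savgol_filter(noisy, window_length=11, polyorder=3)

      # Cubic spline through the smoothed samples permits resampling
      # the scan line at sub-pixel positions.
      spline = CubicSpline(x, smoothed)
      fine = spline(np.linspace(0, 99, 500))

      print(np.sqrt(np.mean((smoothed - clean) ** 2)))  # residual error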

  3. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum level for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  4. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands of the same scale, and de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the NSCT inverse transform is used to obtain the desired-resolution image. In order to verify the effectiveness of the proposed algorithm, it is compared with several common image reconstruction methods on synthetic images, motion-blurred images and hyperspectral images. The experimental results show that, compared with traditional single-image algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.

  5. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing; moreover, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. A matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148

  6. Probabilistic surface reconstruction from multiple data sets: An example for the Australian Moho

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Salmon, M.; Kennett, B. L. N.; Sambridge, M.

    2012-10-01

    Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): "We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics". Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.

  7. TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1994-01-01

    TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.

  8. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-02-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is enlarged to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer, confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
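
    A toy version of the idea, assuming a linear forward model: giving the a priori covariance an explicit temporal correlation makes the standard MAP estimate smooth across time automatically, with the degree of smoothing set by the correlation time rather than by pre-averaging spectra:

      import numpy as np

      n_t = 50
      t = np.arange(n_t, dtype=float)

      # A priori covariance with exponential temporal correlation.
      sigma_a, tau = 1.0, 5.0
      Sa = sigma_a**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)

      # Identity forward model; noisy measurements of a smooth truth.
      K = np.eye(n_t)
      Se = 0.5**2 * np.eye(n_t)
      rng = np.random.default_rng(2)
      truth = np.sin(t / 8.0)
      y = truth + 0.5 * rng.standard_normal(n_t)

      # MAP (optimal estimation) solution around a zero a priori state.
      G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)
      x_hat = G @ y
      print(np.sqrt(np.mean((x_hat - truth) ** 2)))  # below the 0.5 noise level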

  9. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motions, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local search in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as a change of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
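
    The final smoothing step, turning locally estimated displacements into a global field, can be sketched with SciPy's RBF interpolator, whose "thin_plate_spline" kernel implements thin-plate spline interpolation; the block-matching stage that would produce the local estimates is assumed to have run already, so the displacements below are invented:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Block centers and their locally estimated displacement vectors.
      centers = np.array([[10.0, 10.0], [10.0, 50.0], [50.0, 10.0],
                          [50.0, 50.0], [30.0, 30.0]])
      disp = np.array([[1.0, 0.0], [0.5, 0.2], [0.0, -0.3],
                       [-0.2, 0.1], [0.4, 0.0]])

      tps = RBFInterpolator(centers, disp, kernel="thin_plate_spline")

      # Evaluate the smooth global deformation on a coarse pixel grid.
      yy, xx = np.mgrid[0:60:10, 0:60:10]
      grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
      print(tps(grid).shape)   # (36, 2) displacement vectors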

  10. Impact of Lidar Wind Sounding on Mesoscale Forecast

    NASA Technical Reports Server (NTRS)

    Miller, Timothy L.; Chou, Shih-Hung; Goodman, H. Michael (Technical Monitor)

    2001-01-01

    An Observing System Simulation Experiment (OSSE) was conducted to study the impact of lidar wind sounding on mesoscale weather forecasts. A wind retrieval scheme, which interpolates wind data from a gridded data system, simulates the retrieval of wind profiles from a satellite lidar system. A mesoscale forecast system based on the PSU/NCAR MM5 model was developed, incorporating the assimilation of the retrieved line-of-sight wind. To avoid the "identical twin" problem, the NCEP reanalysis data is used as our reference "nature" atmosphere. The simulated space-based lidar wind observations were retrieved by interpolating the NCEP values to the observation locations. A modified dataset obtained by smoothing the NCEP dataset was used as the initial state, whose forecast was sought to be improved by assimilating the retrieved lidar observations. Forecasts using wind profiles with various lidar instrument parameters have been conducted. The results show that, to significantly improve the mesoscale forecast, the satellite should fly near the storm center with a large scanning radius. Increasing the lidar firing rate also improves the forecast. Cloud cover and lack of aerosol degrade the quality of the lidar wind data and, subsequently, the forecast.

  11. Investigation on filter method for smoothing spiral phase plate

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    The spiral phase plate (SPP) for generating hollow vortex beams has high efficiency in various applications. However, it is difficult to fabricate an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. Numerical simulations indicate that different filter methods, including spatial domain and frequency domain filters, have distinct impacts on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and obtains good optical properties.

  12. Optimal interpolation analysis of leaf area index using MODIS data

    USGS Publications Warehouse

    Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve

    2006-01-01

    A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
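
    For a single grid point, the error-weighted combination described above reduces to the standard optimal interpolation formula; a short sketch with hypothetical numbers:

      def oi_analysis(obs, clim, sigma_o, sigma_c):
          """Variance-weighted blend of an observation and a climatology."""
          w = sigma_c**2 / (sigma_o**2 + sigma_c**2)  # weight on the observation
          return w * obs + (1.0 - w) * clim

      # A noisy MODIS LAI value pulled towards the climatology.
      print(oi_analysis(obs=3.2, clim=2.6, sigma_o=0.8, sigma_c=0.4))  # 2.72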

  13. Recent developments in heterodyne laser interferometry at Harbin Institute of Technology

    NASA Astrophysics Data System (ADS)

    Hu, P. C.; Tan, J. B. B.; Yang, H. X. X.; Fu, H. J. J.; Wang, Q.

    2013-01-01

    In order to fulfill the requirements of high-resolution and high-precision heterodyne interferometric technologies and instruments, the laser interferometry group of HIT has developed several novel techniques for high-resolution and high-precision heterodyne interferometers, such as high-accuracy laser frequency stabilization, dynamic sub-nanometer resolution phase interpolation, and dynamic nonlinearity measurement. Based on a novel lock point correction method and an asymmetric thermal structure, the frequency-stabilized laser achieves a long-term stability of 1.2×10^-8, and it can be stably locked even in air flowing at up to 1 m/s. In order to achieve dynamic sub-nanometer resolution in laser heterodyne interferometers, a novel phase interpolation method based on a digital delay line is proposed. Experimental results show that the proposed 0.62 nm phase interpolator, built with a 64-multiple PLL and an 8-tap digital delay line, achieves a static accuracy better than 0.31 nm and a dynamic accuracy better than 0.62 nm over velocities ranging from -2 m/s to 2 m/s. Meanwhile, an accurate beam polarization measuring setup is proposed to check and ensure the polarization state of the dual-frequency laser head, and a dynamic optical nonlinearity measuring setup is built to measure the optical nonlinearity of the heterodyne system accurately and quickly. Analysis and experimental results show that the beam polarization measuring setup achieves an accuracy of 0.03° in ellipticity angles and 0.04° in the non-orthogonality angle, and the optical nonlinearity measuring setup achieves an accuracy of 0.13°.

  14. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

    Disease prevalence can be spatially analysed to provide support for service implementation and health care planning, and these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique for visualising disease admissions, to help prevent misinterpretation of data and less optimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of conveying the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders, and CartIS decreases the number of extreme positive values, not present in the underlying data, that can be found in interpolation surfaces. This methodology combines simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation communication techniques which highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

    The image quality of respiratory-sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory-correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and the CBCT inside a homogeneous moving region, for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR-based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and respiratory-moving regions compared to the 4D FDK and MKB methods.
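
    The combination step can be sketched per voxel: strong local motion favours the phase-resolved 4D volume, weak motion favours the low-noise 3D volume. The linear mapping from motion magnitude to weight below is a hypothetical choice, not the paper's exact weighting:

      import numpy as np

      def motion_weighted_combine(img4d, img3d, motion_mm, scale=2.0):
          """Blend phase-resolved and static reconstructions voxel-wise;
          `scale` is the (assumed) motion level at which the 4D volume
          fully dominates the blend."""
          w = np.clip(motion_mm / scale, 0.0, 1.0)  # 0 = static, 1 = moving
          return w * img4d + (1.0 - w) * img3d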

  16. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their use in weather hazard monitoring systems, such as for flash floods, is still not optimal. To increase the benefit for meteorological applications, GPS systems should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E, latitude: 2.0°-6.5°N). Three flash flood cases, in September, October, and December 2013, were studied. The analysis was performed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated for estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.

  17. Localization of estrogen receptor α and progesterone receptor B in bovine cervix and vagina during the follicular and luteal phases of the sexual cycle.

    PubMed

    Sağsöz, H; Akbalik, M E; Saruhan, B G; Ketani, M A

    2011-08-01

    The localization and distribution of estrogen receptors (ERα) and progesterone receptors (PR-B) in the cervix and vagina of sexually mature bovines during the follicular and luteal phases of the sexual cycle were studied using immunohistochemistry. The estrous cycle stage of 23 Holstein bovines was assessed by gross and histological appearance of ovaries and blood steroid hormone values. Tissue samples from cervix and vagina were fixed in 10% formaldehyde for routine histological processing. Nuclear staining for ERα and PR-B was observed in the epithelial cells of the surface epithelium, stromal cells and smooth muscle cells. Generally, in the cervix, ERα immunoreactivity was more intense in the epithelial and smooth muscle cells during the follicular phase and in the epithelial cells during the luteal phase (p < 0.05). PR-B immunoreactivity was more intense in the epithelial and smooth muscle cells than in the superficial and deep stromal cells during the follicular and luteal phases (p < 0.05). In the vagina, ERα and PR-B immunoreactivities were more intense in the epithelial cells than in the connective tissue cells and smooth muscle cells during the follicular and luteal phases (p < 0.05). These results indicated that the frequency and intensity of ERα and PR-B immunoreactivity in the cervix and vagina of bovines varied according to the cervical and vaginal cell types and the phases of the sexual cycle.

  18. System for obtaining smooth laser beams where intensity variations are reduced by spectral dispersion of the laser light (SSD)

    DOEpatents

    Skupsky, S.; Kessler, T.J.; Short, R.W.; Craxton, S.; Letzring, S.A.; Soures, J.

    1991-09-10

    In an SSD (smoothing by spectral dispersion) system which reduces the time-averaged spatial variations in intensity of the laser light to provide uniform illumination of a laser fusion target, an electro-optic phase modulator through which a laser beam passes produces a broadband output beam by imposing a frequency modulated bandwidth on the laser beam. A grating provides spatial and angular spectral dispersion of the beam. Due to the phase modulation, the frequencies ("colors") cycle across the beam. The dispersed beam may be amplified and frequency converted (e.g., tripled) in a plurality of beam lines. A distributed phase plate (DPP) in each line is irradiated by the spectrally dispersed beam and the beam is focused on the target where a smooth (uniform intensity) pattern is produced. The color cycling enhances smoothing and the use of a frequency modulated laser pulse prevents the formation of high intensity spikes which could damage the laser medium in the power amplifiers. 8 figures.

  19. System for obtaining smooth laser beams where intensity variations are reduced by spectral dispersion of the laser light (SSD)

    DOEpatents

    Skupsky, Stanley; Kessler, Terrance J.; Short, Robert W.; Craxton, Stephen; Letzring, Samuel A.; Soures, John

    1991-01-01

    In an SSD (smoothing by spectral dispersion) system which reduces the time-averaged spatial variations in intensity of the laser light to provide uniform illumination of a laser fusion target, an electro-optic phase modulator through which a laser beam passes produces a broadband output beam by imposing a frequency modulated bandwidth on the laser beam. A grating provides spatial and angular spectral dispersion of the beam. Due to the phase modulation, the frequencies ("colors") cycle across the beam. The dispersed beam may be amplified and frequency converted (e.g., tripled) in a plurality of beam lines. A distributed phase plate (DPP) in each line is irradiated by the spectrally dispersed beam and the beam is focused on the target where a smooth (uniform intensity) pattern is produced. The color cycling enhances smoothing and the use of a frequency modulated laser pulse prevents the formation of high intensity spikes which could damage the laser medium in the power amplifiers.

  1. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24 - 3.69°/στ = 0.045 - 0.048, σθ = 2.79°/στ = 0.031 - 0.038, σθ = 2.34°/στ = 0.023 - 0.026, and σθ = 1.89°/στ = 0.021 - 0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
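
    The normalized convolution step described above is compact enough to sketch. Below is a minimal numpy/scipy illustration on a 2D (rotation angle x cardiac phase) grid, assuming scattered samples marked by a certainty mask and a separable Gaussian applicability kernel; the kernel widths play the role of the optimized σθ/στ above, and the synthetic field is invented for the example.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def normalized_convolution(values, certainty, sigma_theta, sigma_tau):
            """values: 2D array of samples; certainty: 1 where sampled, else 0."""
            num = gaussian_filter(values * certainty, sigma=(sigma_theta, sigma_tau))
            den = gaussian_filter(certainty, sigma=(sigma_theta, sigma_tau))
            with np.errstate(invalid="ignore", divide="ignore"):
                out = num / den
            return np.where(den > 1e-12, out, 0.0)  # fall back where no support

        # toy example: ~5% of a smooth field is sampled, the rest is interpolated
        rng = np.random.default_rng(0)
        theta, tau = np.meshgrid(np.linspace(0, np.pi, 180),
                                 np.linspace(0, 1, 100), indexing="ij")
        truth = np.sin(theta) * np.cos(2 * np.pi * tau)
        mask = (rng.random(truth.shape) < 0.05).astype(float)
        recon = normalized_convolution(truth * mask, mask, 3.0, 2.0)
        print("RMSE:", np.sqrt(np.mean((recon - truth) ** 2)))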

  2. Cumulative area of peaks in a multidimensional high performance liquid chromatogram.

    PubMed

    Stevenson, Paul G; Guiochon, Georges

    2013-09-20

    An algorithm was developed to recognize peaks in a multidimensional separation and calculate their cumulative peak area. To find the retention times of peaks in a one-dimensional chromatogram, the Savitzky-Golay smoothing filter was used to smooth the experimental profiles and compute their first through third derivatives. Close examination of the shape of these curves indicates the number of peaks present and provides starting values for fitting theoretical profiles. Due to the nature of comprehensive multidimensional HPLC, adjacent cut fractions may contain compounds common to more than one cut fraction. The algorithm determines which components were common to adjacent cuts and subsequently calculates the area of a two-dimensional peak profile by interpolating the surface of the 2D peaks between adjacent peaks. This algorithm was tested by calculating the cumulative peak area of a series of 2D-HPLC separations of alkylbenzenes, phenol and caffeine at varied concentrations. A good relationship was found between the concentration and the cumulative peak area.
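
    The derivative-based peak-finding step maps directly onto scipy's Savitzky-Golay filter. A minimal sketch follows, assuming a synthetic chromatogram of two fused Gaussian peaks; window length and polynomial order are illustrative, not the paper's settings.

        import numpy as np
        from scipy.signal import savgol_filter

        t = np.linspace(0, 10, 2001)
        dt = t[1] - t[0]
        # two fused peaks, the situation the derivative analysis resolves
        signal = (np.exp(-0.5 * ((t - 4.0) / 0.15) ** 2)
                  + 0.6 * np.exp(-0.5 * ((t - 4.4) / 0.15) ** 2))

        window, order = 31, 4
        smoothed = savgol_filter(signal, window, order)
        d1 = savgol_filter(signal, window, order, deriv=1, delta=dt)
        d2 = savgol_filter(signal, window, order, deriv=2, delta=dt)

        # candidate apexes: descending zero crossings of the 1st derivative
        # where the 2nd derivative is negative; shoulders of fused peaks
        # show up as extra minima of d2
        apex = np.where((np.diff(np.sign(d1)) < 0) & (d2[:-1] < 0))[0]
        print("apex retention times:", t[apex])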

  3. Scripting human animations in a virtual environment

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.

    1994-01-01

    The current deficiencies of virtual environment (VE) technology are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a human computer model with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data are used to define key frames; motion is interpolated between them and post-processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.

  4. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    To address the shortcomings of traditional image mosaicking, this paper describes a new mosaic algorithm based on Harris corners. First, a Harris operator, combined with a constructed spline-based low-pass smoothing filter and a circular window search, is applied to detect image corners; this gives better localisation performance and effectively avoids corner clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosts and improve the accuracy of image mosaicking.
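
    A hedged OpenCV sketch of this style of pipeline: Harris-scored keypoints stand in for the paper's corner detector, ORB descriptors replace its correlation matching, and the weighted trigonometric fusion is reduced to a naive paste; left.png and right.png are placeholder inputs.

        import cv2
        import numpy as np

        img1 = cv2.imread("left.png")    # placeholder file names
        img2 = cv2.imread("right.png")

        # keypoints ranked by Harris corner response
        orb = cv2.ORB_create(nfeatures=2000, scoreType=cv2.ORB_HARRIS_SCORE)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects the false registration pairs, as in the paper
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        h, w = img2.shape[:2]
        warped = cv2.warpPerspective(img1, H, (w * 2, h))
        mosaic = warped.copy()
        mosaic[0:h, 0:w] = img2   # naive paste; the paper uses weighted fusion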

  5. Geochemical landscapes of the conterminous United States; new map presentations for 22 elements

    USGS Publications Warehouse

    Gustavsson, N.; Bolviken, B.; Smith, D.B.; Severson, R.C.

    2001-01-01

    Geochemical maps of the conterminous United States have been prepared for seven major elements (Al, Ca, Fe, K, Mg, Na, and Ti) and 15 trace elements (As, Ba, Cr, Cu, Hg, Li, Mn, Ni, Pb, Se, Sr, V, Y, Zn, and Zr). The maps are based on an ultra low-density geochemical survey consisting of 1,323 samples of soils and other surficial materials collected from approximately 1960-1975. The data were published by Boerngen and Shacklette (1981) and black-and-white point-symbol geochemical maps were published by Shacklette and Boerngen (1984). The data have been reprocessed using weighted-median and Bootstrap procedures for interpolation and smoothing.

  6. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  7. The application of color display techniques for the analysis of Nimbus infrared radiation data

    NASA Technical Reports Server (NTRS)

    Allison, L. J.; Cherrix, G. T.; Ausfresser, H.

    1972-01-01

    A color enhancement system designed for the Applications Technology Satellite (ATS) spin scan experiment has been adapted for the analysis of Nimbus infrared radiation measurements. For a given scene recorded on magnetic tape by the Nimbus scanning radiometers, a virtually unlimited number of color images can be produced at the ATS Operations Control Center from a color selector paper tape input. Linear image interpolation has produced radiation analyses in which each brightness-color interval has a smooth boundary without any mosaic effects. An annotated latitude-longitude gridding program makes it possible to precisely locate geophysical parameters, which permits accurate interpretation of pertinent meteorological, geological, hydrological, and oceanographic features.

  8. Infrared realization of dS2 in AdS2

    NASA Astrophysics Data System (ADS)

    Anninos, Dionysios; Hofman, Diego M.

    2018-04-01

    We describe a two-dimensional geometry that smoothly interpolates between an asymptotically AdS2 geometry and the static patch of dS2. We find this ‘centaur’ geometry to be a solution of dilaton gravity with a specific class of potentials for the dilaton. We interpret the centaur geometry as a thermal state in the putative quantum mechanics dual to the AdS2 evolved with the global Hamiltonian. We compute the thermodynamic properties and observe that the centaur state has finite entropy and positive specific heat. The static patch is the infrared part of the centaur geometry. We discuss boundary observables sensitive to the static patch region.

  9. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.

  10. Mars global reference atmosphere model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie F.

    1992-01-01

    Mars-GRAM is an empirical model that parameterizes the temperature, pressure, density, and wind structure of the Martian atmosphere from the surface through thermospheric altitudes. In the lower atmosphere of Mars, the model is built around parameterizations of height, latitudinal, longitudinal, and seasonal variations of temperature determined from a survey of published measurements from the Mariner and Viking programs. Pressure and density are inferred from the temperature by making use of the hydrostatic and perfect gas law relationships. For the upper atmosphere, the thermospheric model of Stewart is used. A hydrostatic interpolation routine is used to ensure a smooth transition from the lower portion of the model to the Stewart thermospheric model. Other aspects of the model are discussed.
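
    The hydrostatic/perfect-gas relationship invoked here fits in a few lines. A minimal sketch follows, assuming nominal Mars values (surface pressure, CO2 gas constant, gravity) and a toy linear temperature profile rather than Mars-GRAM's calibrated inputs.

        import numpy as np

        g = 3.71           # m s^-2, Mars surface gravity
        R = 191.0          # J kg^-1 K^-1, CO2-dominated atmosphere
        z = np.linspace(0, 80e3, 801)       # altitude grid, m
        T = 210.0 - 1.0e-3 * z              # toy linear temperature profile, K

        p = np.empty_like(z)
        p[0] = 610.0                        # Pa, nominal surface pressure
        for i in range(1, z.size):
            Tbar = 0.5 * (T[i] + T[i - 1])  # layer-mean temperature
            dz = z[i] - z[i - 1]
            p[i] = p[i - 1] * np.exp(-g * dz / (R * Tbar))  # hydrostatic step

        rho = p / (R * T)                   # perfect gas law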

  11. Integrable flows between exact CFTs

    NASA Astrophysics Data System (ADS)

    Georgiou, George; Sfetsos, Konstantinos

    2017-11-01

    We explicitly construct families of integrable σ-model actions smoothly interpolating between exact CFTs. In the ultraviolet the theory is the direct product of two current algebras at levels k1 and k2. In the infrared and for the case of two deformation matrices the CFT involves a coset CFT, whereas for a single matrix deformation it is given by the ultraviolet direct product theories but at levels k1 and k2 - k1. For isotropic deformations we demonstrate integrability. In this case we also compute the exact beta-function for the deformation parameters using gravitational methods. This is shown to coincide with previous results obtained using perturbation theory and non-perturbative symmetries.

  12. Effects of deterministic surface distortions on reflector antenna performance

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    Systematic distortions of reflector antenna surfaces can cause antenna radiation patterns to be undesirably different from those of perfectly smooth reflector surfaces. In this paper, a simulation model for systematic distortions is described which permits an efficient computation of the effects of distortions in the reflector pattern. The model uses a vector diffraction physical optics analysis for the determination of both the co-polar and cross-polar fields. An interpolation scheme is also presented for the description of reflector surfaces which are prescribed by discrete points. Representative numerical results are presented for reflectors with sinusoidally and thermally distorted surfaces. Finally, comparisons are made between the measured and calculated patterns of a slowly-varying distorted offset parabolic reflector.

  13. The electric field in capacitively coupled RF discharges: a smooth step model that includes thermal and dynamic effects

    NASA Astrophysics Data System (ADS)

    Brinkmann, Ralf Peter

    2015-12-01

    The electric field in radio-frequency driven capacitively coupled plasmas (RF-CCP) is studied, taking thermal (finite electron temperature) and dynamic (finite electron mass) effects into account. Two dimensionless numbers are introduced: the ratio ε = λ_D/l of the electron Debye length λ_D to the minimum plasma gradient length l (typically the sheath thickness), and the ratio η = ω_RF/ω_pe of the RF frequency ω_RF to the electron plasma frequency ω_pe. Assuming both numbers small but finite, an asymptotic expansion of an electron fluid model is carried out up to quadratic order inclusively. An expression for the electric field is obtained which yields (i) the space charge field in the sheath, (ii) the generalized Ohmic and ambipolar field in the plasma, and (iii) a smooth interpolation for the transition in between. The new expression is a direct generalization of the Advanced Algebraic Approximation (AAA) proposed by the same author (2009 J. Phys. D: Appl. Phys. 42 194009), which can be recovered for η → 0, and of the established Step Model (SM) by Godyak (1976 Sov. J. Plasma Phys. 2 78), which corresponds to the simultaneous limits η → 0, ε → 0. A comparison of the hereby proposed Smooth Step Model (SSM) with a numerical solution of the full dynamic problem proves very satisfactory.

  14. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, disturbances during the initial phase of the calculation were also effectively reduced.
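
    A hedged sketch of the first step, separating a clock series into a systematic part and a white-noise residual by wavelet de-noising, assuming PyWavelets; the wavelet, decomposition level, and universal soft threshold are illustrative choices, not the paper's settings, and the input series is synthetic.

        import numpy as np
        import pywt

        def wavelet_systematic(clock, wavelet="db4", level=5):
            coeffs = pywt.wavedec(clock, wavelet, level=level)
            # universal threshold estimated from the finest detail coefficients
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(clock)))
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(clock)]

        rng = np.random.default_rng(0)
        raw = np.cumsum(rng.normal(0, 1e-11, 2048)) + rng.normal(0, 5e-11, 2048)
        systematic = wavelet_systematic(raw)
        residual = raw - systematic   # white-noise part feeding the stochastic model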

  15. Using reweighting and free energy surface interpolation to predict solid-solid phase diagrams

    NASA Astrophysics Data System (ADS)

    Schieber, Natalie P.; Dybeck, Eric C.; Shirts, Michael R.

    2018-04-01

    Many physical properties of small organic molecules are dependent on the current crystal packing, or polymorph, of the material, including bioavailability of pharmaceuticals, optical properties of dyes, and charge transport properties of semiconductors. Predicting the most stable crystalline form at a given temperature and pressure requires determining the crystalline form with the lowest relative Gibbs free energy. Effective computational prediction of the most stable polymorph could save significant time and effort in the design of novel molecular crystalline solids or predict their behavior under new conditions. In this study, we introduce a new approach using multistate reweighting to address the problem of determining solid-solid phase diagrams and apply this approach to the phase diagram of solid benzene. For this approach, we perform sampling at a selection of temperature and pressure states in the region of interest. We use multistate reweighting methods to determine the reduced free energy differences between T and P states within a given polymorph and validate this phase diagram using several measures. The relative stability of the polymorphs at the sampled states can be successively interpolated from these points to create the phase diagram by combining these reduced free energy differences with a reference Gibbs free energy difference between polymorphs. The method also allows for straightforward estimation of uncertainties in the phase boundary. We also find that when properly implemented, multistate reweighting for phase diagram determination scales better with the size of the system than previously estimated.
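
    As a simplified stand-in for the multistate reweighting machinery (the authors use an MBAR-style estimator over many sampled states), the sketch below shows the reduced-potential bookkeeping with one-sided exponential (Zwanzig) reweighting between two nearby (T, P) states; the sampled energies and volumes are synthetic placeholders.

        import numpy as np

        kB = 0.0083145  # kJ/mol/K

        def reduced_potential(U, V, T, P):
            """U in kJ/mol, V in nm^3, P in bar -> reduced (dimensionless) energy."""
            beta = 1.0 / (kB * T)
            pv = P * V * 0.06022    # bar*nm^3 -> kJ/mol
            return beta * (U + pv)

        def delta_f(U_i, V_i, state_i, state_j):
            """Reduced free energy difference f_j - f_i from samples of state i."""
            u_i = reduced_potential(U_i, V_i, *state_i)
            u_j = reduced_potential(U_i, V_i, *state_j)
            return -np.log(np.mean(np.exp(-(u_j - u_i))))

        # synthetic potential energies and box volumes drawn at (250 K, 1 bar)
        rng = np.random.default_rng(1)
        U = rng.normal(-50.0, 2.0, 5000)
        V = rng.normal(1.2, 0.01, 5000)
        print(delta_f(U, V, (250.0, 1.0), (255.0, 1.0)))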

  16. The anamorphic universe

    NASA Astrophysics Data System (ADS)

    Ijjas, Anna; Steinhardt, Paul J.

    2015-10-01

    We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  17. Lipopolysaccharide variation in Coxiella burnetii: intrastrain heterogeneity in structure and antigenicity.

    PubMed Central

    Hackstadt, T; Peacock, M G; Hitchcock, P J; Cole, R L

    1985-01-01

    We isolated lipopolysaccharides (LPSs) from phase variants of Coxiella burnetii Nine Mile and compared the isolated LPS and C. burnetii cells by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and immunoblotting. The LPSs were found to be the predominant component which varied structurally and antigenically between virulent phase I and avirulent phase II. A comparison of techniques historically used to extract the phase I antigenic component revealed that the aqueous phase of phenol-water, trichloroacetic acid, and dimethyl sulfoxide extractions of phase I C. burnetii cells all contained phase I LPS, although the efficiency and specificity of extraction varied. Our studies provide additional evidence that phase variation in C. burnetii is analogous to the smooth-to-rough LPS variation of gram-negative enteric bacteria, with phase I LPS being equivalent to smooth LPS and phase II LPS being equivalent to rough LPS. In addition, we identified a variant with a third LPS chemotype which appears to have a structural complexity intermediate to phase I and II LPSs. All three C. burnetii LPSs contain a 2-keto-3-deoxyoctulosonic acid-like substance and heptose, and gel Limulus amoebocyte lysates in subnanogram amounts. The C. burnetii LPSs were nontoxic to chicken embryos at doses of over 80 micrograms per embryo, in contrast to Salmonella typhimurium smooth- and rough-type LPSs, which were toxic in nanogram amounts. PMID:3988339

  18. ECO2N V2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Lehua; Spycher, Nicolas; Doughty, Christine

    2015-02-01

    ECO2N V2.0 is a fluid property module for the TOUGH2 simulator (Version 2.1) that was designed for applications to geologic sequestration of CO2 in saline aquifers and enhanced geothermal reservoirs. ECO2N V2.0 is an enhanced version of the previous ECO2N V1.0 module (Pruess, 2005). It expands the temperature range up to about 300 °C, whereas V1.0 can only be used for temperatures below about 110 °C. V2.0 includes a comprehensive description of the thermodynamic and thermophysical properties of H2O - NaCl - CO2 mixtures that reproduces fluid properties largely within experimental error for the temperature, pressure and salinity conditions of interest (10 °C < T < 300 °C; P < 600 bar; salinity up to halite saturation). V2.0 uses the same model as V1.0 for partitioning of mass components among the different phases in the low temperature range (<99 °C) but uses a new model in the high temperature range (>109 °C). In the transition range (99-109 °C), a smooth interpolation is applied to estimate the partitioning as a function of the temperature. Flow processes can be modeled isothermally or non-isothermally, and phase conditions represented may include a single (aqueous or CO2-rich) phase, as well as two-phase (brine-CO2) mixtures. Fluid phases may appear or disappear in the course of a simulation, and solid salt may precipitate or dissolve. Note that the model cannot be applied to subcritical conditions that involve both liquid and gaseous CO2 unless thermal processes are ignored (i.e., an isothermal run). For those cases, a user may use the fluid property module ECO2M (Pruess, 2011) instead.

  19. ECO2N V. 2.0: A New TOUGH2 Fluid Property Module for Mixtures of Water, NaCl, and CO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, L.; Spycher, N.; Doughty, C.

    2014-12-01

    ECO2N V2.0 is a fluid property module for the TOUGH2 simulator (Version 2.1) that was designed for applications to geologic sequestration of CO2 in saline aquifers and enhanced geothermal reservoirs. ECO2N V2.0 is an enhanced version of the previous ECO2N V1.0 module (Pruess, 2005). It expands the temperature range up to about 300°C whereas V1.0 can only be used for temperatures below about 110°C. V2.0 includes a comprehensive description of the thermodynamics and thermophysical properties of H2O - NaCl - CO2 mixtures, that reproduces fluid properties largely within experimental error for the temperature, pressure and salinity conditions of interest (10 °C < T < 300 °C; P < 600 bar; salinity up to halite saturation). This includes density, viscosity, and specific enthalpy of fluid phases as functions of temperature, pressure, and composition, as well as partitioning of mass components H2O, NaCl and CO2 among the different phases. In particular, V2.0 accounts for the effects of water on the thermophysical properties of the CO2-rich phase, which was ignored in V1.0, using a model consistent with the solubility models developed by Spycher and Pruess (2005, 2010). In terms of solubility models, V2.0 uses the same model for partitioning of mass components among the different phases (Spycher and Pruess, 2005) as V1.0 for the low temperature range (<99°C) but uses a new model (Spycher and Pruess, 2010) for the high temperature range (>109°C). In the transition range (99-109°C), a smooth interpolation is applied to estimate the partitioning as a function of the temperature. Flow processes can be modeled isothermally or non-isothermally, and phase conditions represented may include a single (aqueous or CO2-rich) phase, as well as two-phase mixtures. Fluid phases may appear or disappear in the course of a simulation, and solid salt may precipitate or dissolve. This report gives technical specifications of ECO2N V2.0 and includes instructions for preparing input data.
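
    The transition-range treatment described above reduces to a small blending rule. A minimal illustration follows; the linear weight is an assumption, since the report specifies only that a smooth interpolation is applied between the two solubility models, and the model callables are placeholders.

        def partition_coefficient(T, low_model, high_model):
            """Blend the low-T (<99 C) and high-T (>109 C) partitioning models."""
            if T <= 99.0:
                return low_model(T)
            if T >= 109.0:
                return high_model(T)
            w = (T - 99.0) / 10.0        # 0 at 99 C, 1 at 109 C
            return (1.0 - w) * low_model(T) + w * high_model(T)

        # usage with placeholder stand-ins for the two solubility models
        k = partition_coefficient(104.0, lambda T: 0.05, lambda T: 0.07)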

  20. On Utilization of NEXRAD Scan Strategy Information to Infer Discrepancies Associated With Radar and Rain Gauge Surface Volumetric Rainfall Accumulations

    NASA Technical Reports Server (NTRS)

    Roy, Biswadev; Datta, Saswati; Jones, W. Linwood; Kasparis, Takis; Einaudi, Franco (Technical Monitor)

    2000-01-01

    To evaluate the Tropical Rainfall Measuring Mission (TRMM) monthly Ground Validation (GV) rain map, data from 42 quality-controlled tipping-bucket rain gauges (1 minute interpolated rain rates) were utilized. We have compared the gauge data to the surface volumetric rainfall accumulation of the NEXRAD reflectivity field (converted to rain rates using a 0.5 dB resolution smooth Z-R table). The comparison was carried out on data collected at Melbourne, Florida during July 1998. The GV operational level 3 (L3 monthly) accumulation algorithm was used to obtain surface volumetric accumulations for the radar. The gauge records were accumulated using the 1 minute interpolated rain rates while the radar Volume Scan (VOS) intervals remained less than or equal to 75 minutes. The correlation coefficient between the radar and gauge totals on the monthly time-scale is 0.93; however, a large difference was noted between the gauge- and radar-derived rain accumulations when the radar data interval is either 9 or 10 minutes. This difference in radar and gauge accumulation is explained in terms of the radar scan strategy. The discrepancy is reported in terms of the Volume Coverage Pattern (VCP) of the NEXRAD, where the VCP mode is ascertained using the radar tilt angle information. Hourly radar and gauge accumulations have been computed using the present operational L3 method supplemented with a threshold period of +/- 5 minutes (based on a sensitivity analysis). These radar and gauge accumulations are subsequently improved using a radar hourly scan weighting factor (the ratio of the radar scan frequency within a time bin to the 7436 total radar scans for the month). This GV procedure is further improved by introducing a spatial smoothing method to yield a reasonable bulk radar-to-gauge ratio for the hourly and daily scales.

  1. Online, On Demand Access to Coastal Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Long, J.; Bristol, S.; Long, D.; Thompson, S.

    2014-12-01

    Process-based numerical models for coastal waves, water levels, and sediment transport are initialized with digital elevation models (DEM) constructed by interpolating and merging bathymetric and topographic elevation data. These gridded surfaces must seamlessly span the land-water interface and may cover large regions where the individual raw data sources are collected at widely different spatial and temporal resolutions. In addition, the datasets are collected from different instrument platforms with varying accuracy and may or may not overlap in coverage. The lack of available tools and difficulties in constructing these DEMs lead scientists to 1) rely on previously merged, outdated, or over-smoothed DEMs; 2) discard more recent data that covers only a portion of the DEM domain; and 3) use inconsistent methodologies to generate DEMs. The objective of this work is to address the immediate need of integrating land and water-based elevation data sources and streamline the generation of a seamless data surface that spans the terrestrial-marine boundary. To achieve this, the U.S. Geological Survey (USGS) is developing a web processing service to format and initialize geoprocessing tasks designed to create coastal DEMs. The web processing service is maintained within the USGS ScienceBase data management system and has an associated user interface. Through the map-based interface, users define a geographic region that identifies the bounds of the desired DEM and a time period of interest. This initiates a query for elevation datasets within federal science agency data repositories. A geoprocessing service is then triggered to interpolate, merge, and smooth the data sources creating a DEM based on user-defined configuration parameters. Uncertainty and error estimates for the DEM are also returned by the geoprocessing service. Upon completion, the information management platform provides access to the final gridded data derivative and saves the configuration parameters for future reference. The resulting products and tools developed here could be adapted to future data sources and projects beyond the coastal environment.

  2. Weak variations of Lipschitz graphs and stability of phase boundaries

    NASA Astrophysics Data System (ADS)

    Grabovsky, Yury; Kucher, Vladislav A.; Truskinovsky, Lev

    2011-03-01

    In the case of Lipschitz extremals of vectorial variational problems, an important class of strong variations originates from smooth deformations of the corresponding non-smooth graphs. These seemingly singular variations, which can be viewed as combinations of weak inner and outer variations, produce directions of differentiability of the functional and lead to singularity-centered necessary conditions on strong local minima: an equality, arising from stationarity, and an inequality, implying configurational stability of the singularity set. To illustrate the underlying coupling between inner and outer variations, we study in detail the case of smooth surfaces of gradient discontinuity representing, for instance, martensitic phase boundaries in non-linear elasticity.

  3. Efficient C1-continuous phase-potential upwind (C1-PPU) schemes for coupled multiphase flow and transport with gravity

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-10-01

    In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.

  4. The anamorphic universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu

    We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.

  5. Spatio-temporal modeling of chronic PM10 exposure for the Nurses' Health Study

    NASA Astrophysics Data System (ADS)

    Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.

    2008-06-01

    Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.

  6. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.

  7. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.

  8. Phase History Decomposition for efficient Scatterer Classification in SAR Imagery

    DTIC Science & Technology

    2011-09-15

  9. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    PubMed

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers, etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures, such as noise correction, different smoothing algorithms, and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies.
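
    A hedged sketch of two of the recommended steps on a single 2D slice: Perona-Malik-style anisotropic diffusion filtering followed by cubic-order spline resampling. The iteration count, conductance parameter, and zoom factor are illustrative, noise correction is omitted, and the input slice is a random placeholder.

        import numpy as np
        from scipy.ndimage import zoom

        def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.15):
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # differences to the four neighbours (wrap-around boundaries)
                dn = np.roll(u, -1, 0) - u
                ds = np.roll(u, 1, 0) - u
                de = np.roll(u, -1, 1) - u
                dw = np.roll(u, 1, 1) - u
                # Perona-Malik conductance: small across strong edges
                u = u + gamma * sum(d * np.exp(-(d / kappa) ** 2)
                                    for d in (dn, ds, de, dw))
            return u

        slice2d = np.random.rand(64, 64)           # placeholder DWI slice
        filtered = anisotropic_diffusion(slice2d)
        upsampled = zoom(filtered, 2.0, order=3)   # cubic-order spline interpolation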

  10. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-07-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
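
    A minimal numpy sketch of the core idea, assuming a linear forward model: the state vector stacks retrievals from several measurement times, and the a priori covariance carries an exponential temporal correlation, so the MAP solution smooths over time with an altitude-dependent effective window. All matrices below are synthetic placeholders.

        import numpy as np

        n_alt, n_time = 40, 6
        tau = 3.0                              # correlation time, hours
        times = np.arange(n_time) * 1.0

        # a priori covariance: Kronecker product of time and altitude parts
        S_alt = 1.0 * np.eye(n_alt)
        S_time = np.exp(-np.abs(times[:, None] - times[None, :]) / tau)
        Sa = np.kron(S_time, S_alt)

        n = n_alt * n_time
        K = np.random.default_rng(2).normal(size=(n, n)) * 0.05 + np.eye(n)
        Se = 0.1 * np.eye(n)                   # measurement noise covariance

        xa = np.zeros(n)
        y = K @ (xa + 1.0) + 0.3               # placeholder measurement vector

        # MAP estimate: xa + Sa K^T (K Sa K^T + Se)^{-1} (y - K xa)
        G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)
        x_hat = xa + G @ (y - K @ xa)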

  11. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the users choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  12. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342

  13. Topological analysis of the CfA redshift survey

    NASA Technical Reports Server (NTRS)

    Vogeley, Michael S.; Park, Changbom; Geller, Margaret J.; Huchra, John P.; Gott, J. Richard, III

    1994-01-01

    We study the topology of large-scale structure in the Center for Astrophysics Redshift Survey, which now includes approximately 12,000 galaxies with limiting magnitude m(sub B) is less than or equal to 15.5. The dense sampling and large volume of this survey allow us to compute the topology on smoothing scales from 6 to 20/h Mpc; we thus examine the topology of structure in both 'nonlinear' and 'linear' regimes. On smoothing scales less than or equal to 10/h Mpc this sample has 3 times the number of resolution elements of samples examined in previous studies. Isodensity surfaces of the smoothed galaxy density field demonstrate that coherent high-density structures and large voids dominate the galaxy distribution. We compute the genus-threshold density relation for isodensity surfaces of the CfA survey. To quantify phase correlation in these data, we compare the CfA genus with the genus of realizations of Gaussian random fields with the power spectrum measured for the CfA survey. On scales less than or equal to 10/h Mpc the observed genus amplitude is smaller than random phase (96% confidence level). This decrement reflects the degree of phase coherence in the observed galaxy distribution. In other words, the genus amplitude on these scales is not a good measure of the power spectrum slope. On scales greater than 10/h Mpc, where the galaxy distribution is roughly in the 'linear' regime, the genus amplitude is consistent with the random phase amplitude. The shape of the genus curve reflects the strong coherence in the observed structure; the observed genus curve appears broader than random phase (94% confidence level for smoothing scales less than or equal to 10/h Mpc) because the topology is spongelike over a very large range of density threshold. This departure from random phase is consistent with a distribution like a filamentary net of 'walls with holes.' On smoothing scales approaching approximately 20/h Mpc the shape of the CfA genus curve is consistent with random phase. There is very weak evidence for a shift of the genus toward a 'bubble-like' topology. To test cosmological models, we compute the genus for mock CfA surveys drawn from large (L greater than or approximately 400/h Mpc) N-body simulations of three variants of the cold dark matter (CDM) cosmogony. The genus amplitude of the 'standard' CDM model (omega h = 0.5, b = 1.5) differs from the observations (96% confidence level) on smoothing scales less than or approximately 10/h Mpc. An open CDM model (omega h = 0.2) and a CDM model with nonzero cosmological constant (omega h = 0.24, lambda (sub 0) = 0.6) are consistent with the observed genus amplitude over the full range of smoothing scales. All of these models fail (97% confidence level) to match the broadness of the observed genus curve on smoothing scales less than or equal to 10/h Mpc.
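
    A hedged sketch of the genus-threshold measurement on a synthetic Gaussian random field: smooth white noise, threshold it over a range of densities, and convert the 3D Euler characteristic chi of each excursion set (here via scikit-image's euler_number) to a genus using genus = -chi/2, a common convention in this literature; grid size, smoothing length, and thresholds are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.measure import euler_number

        rng = np.random.default_rng(3)
        field = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=4.0)
        field = (field - field.mean()) / field.std()   # threshold in sigma units

        for nu in np.linspace(-2, 2, 9):
            chi = euler_number(field > nu, connectivity=3)
            print(f"nu = {nu:+.1f}   genus = {-chi / 2:.0f}")

    For a random phase field this traces the characteristic symmetric genus curve; phase-coherent structure shows up as a lower amplitude or a broader curve, which is the diagnostic used above.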

  14. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

    Photoelasticity is an effective method for evaluating stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference scheme is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined over the full field through phase shifting. Then, through bicubic interpolation and a modified shear-difference method, the internal stress is calculated from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when theoretical photoelastic parameters are used to calculate the stress components at the same points. Results show that when the value of Δx/Δy approaches one, the error is minimal, and although interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  15. A Novel Approach to Visualizing Dark Matter Simulations.

    PubMed

    Kaehler, R; Hahn, O; Abel, T

    2012-12-01

    In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Hsin, Po-Shen; Niu, Yuezhen, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r01222031@ntu.edu.tw, E-mail: yuezhenniu@gmail.com

    We investigate the entropy evolution in the early universe by computing the change of the entanglement entropy in Friedmann-Robertson-Walker quantum cosmology in the presence of a particle horizon. The matter is modeled by a Chaplygin gas so as to provide a smooth interpolation between the inflationary and radiation epochs, rendering the evolution of entropy from early time to late time trackable. We found that soon after the onset of inflation, the total entanglement entropy rapidly decreases to a minimum. It then rises monotonically during the remainder of the inflation epoch as well as the radiation epoch. Our result is in qualitative agreement with the area law of Ryu and Takayanagi, including the logarithmic correction. We comment on the possible implication of our finding for the cosmological entropy problem.

  17. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  18. A nonclassical Radau collocation method for solving the Lane-Emden equations of the polytropic index 4.75 ≤ α < 5

    NASA Astrophysics Data System (ADS)

    Tirani, M. D.; Maleki, M.; Kajani, M. T.

    2014-11-01

    A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.

  19. Electronic transport coefficients in plasmas using an effective energy-dependent electron-ion collision-frequency

    NASA Astrophysics Data System (ADS)

    Faussurier, G.; Blancard, C.; Combis, P.; Decoster, A.; Videau, L.

    2017-10-01

    We present a model to calculate the electrical and thermal electronic conductivities in plasmas using the Chester-Thellung-Kubo-Greenwood approach coupled with the Kramers approximation. The divergence in photon energy at low values is eliminated using a regularization scheme with an effective energy-dependent electron-ion collision frequency. Doing so, we interpolate smoothly between the Drude-like and Spitzer-like regularizations. The model still satisfies the well-known sum rule over the electrical conductivity. This kind of approximation also extends naturally to the average-atom model. Particular attention is paid to the Lorenz number. Its nondegenerate and degenerate limits are given, and the transition towards the Drude-like limit is proved in the Kramers approximation.

  20. Moving magnets in a micromagnetic finite-difference framework

    NASA Astrophysics Data System (ADS)

    Rissanen, Ilari; Laurson, Lasse

    2018-05-01

    We present a method and an implementation for smooth linear motion in a finite-difference-based micromagnetic simulation code, to be used in simulating magnetic friction and other phenomena involving moving microscale magnets. Our aim is to accurately simulate the magnetization dynamics and relative motion of magnets while retaining high computational speed. To this end, we combine techniques for fast scalar potential calculation and cubic b-spline interpolation, parallelizing them on a graphics processing unit (GPU). The implementation also includes the possibility of explicitly simulating eddy currents in the case of conducting magnets. We test our implementation by providing numerical examples of stick-slip motion of thin films pulled by a spring and the effect of eddy currents on the switching time of magnetic nanocubes.

  1. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interaction necessary to define a multiple-block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm, with the global domain recursively partitioned into sub-domains. For either method, each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
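
    The per-block meshing step lends itself to a compact illustration. Below is a minimal numpy sketch of transfinite interpolation (a Coons patch) that builds a structured grid from four boundary curves; the elliptic smoothing pass that would follow is omitted, and the curved-top block is an invented example.

        import numpy as np

        def transfinite(bottom, top, left, right):
            """bottom/top: (n,2) boundary points; left/right: (m,2), with
            matching corners. Returns an (m, n, 2) structured grid."""
            n, m = bottom.shape[0], left.shape[0]
            u = np.linspace(0.0, 1.0, n)[None, :, None]   # along bottom/top
            v = np.linspace(0.0, 1.0, m)[:, None, None]   # along left/right
            B, T = bottom[None, :, :], top[None, :, :]
            L, R = left[:, None, :], right[:, None, :]
            # bilinear corner correction subtracted from the boundary blend
            corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                       + (1 - u) * v * top[0] + u * v * top[-1])
            return (1 - v) * B + v * T + (1 - u) * L + u * R - corners

        # example block: unit square with a curved top boundary
        s = np.linspace(0, 1, 21)
        bottom = np.stack([s, np.zeros_like(s)], axis=1)
        top = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)
        left = np.stack([np.zeros_like(s), s], axis=1)
        right = np.stack([np.ones_like(s), s], axis=1)
        grid = transfinite(bottom, top, left, right)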

  2. Mass and power modeling of communication satellites

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Pidgeon, David; Tsao, Alex

    1991-01-01

    Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence their selection and use. Guidelines are also given for choosing among alternative technologies, accounting for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for application to first-order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified to a 'smoothed' representation of reality. The user is therefore cautioned that cost, schedule, and risk may be significantly impacted where interpolations are sufficiently different from existing hardware as to warrant development of new devices.

  3. Uncertainty relation for the discrete Fourier transform.

    PubMed

    Massar, Serge; Spindel, Philippe

    2008-05-16

    We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^{i phi} VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
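
    In finite dimension d the operators in question are realized by the clock and shift matrices, for which phi = 2*pi/d; a quick numerical check of the commutation relation (illustrative only):

      import numpy as np

      d = 5
      omega = np.exp(2j * np.pi / d)
      U = np.diag(omega ** np.arange(d))          # clock matrix
      V = np.roll(np.eye(d), 1, axis=0)           # cyclic shift matrix

      print(np.allclose(U @ V, omega * (V @ U)))  # True: UV = e^{i phi} VU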

  4. Resolving boosted jets with XCone

    DOE PAGES

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies - dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs - that demonstrate the physics applications of XCone over a wide kinematic range.

  5. Using chaos to generate variations on movement sequences

    NASA Astrophysics Data System (ADS)

    Bradley, Elizabeth; Stuart, Joshua

    1998-12-01

    We describe a method for introducing variations into predefined motion sequences using a chaotic symbol-sequence reordering technique. A progression of symbols representing the body positions in a dance piece, martial arts form, or other motion sequence is mapped onto a chaotic trajectory, establishing a symbolic dynamics that links the movement sequence and the attractor structure. A variation on the original piece is created by generating a trajectory with slightly different initial conditions, inverting the mapping, and using special corpus-based graph-theoretic interpolation schemes to smooth any abrupt transitions. Sensitive dependence guarantees that the variation is different from the original; the attractor structure and the symbolic dynamics guarantee that the two resemble one another in both aesthetic and mathematical senses.
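
    A toy version of the idea can be sketched as follows. This is not the authors' corpus-based scheme (their graph-theoretic smoothing of abrupt transitions is omitted, and the logistic map here is an arbitrary stand-in for the attractor): symbols are attached to points of a chaotic trajectory, and a variation is produced by re-running the map from a slightly perturbed initial condition and inverting the mapping by nearest-point lookup.

      import numpy as np

      def trajectory(x0, n, r=3.99):               # logistic-map trajectory
          xs = np.empty(n)
          for i in range(n):
              xs[i] = x0 = r * x0 * (1.0 - x0)
          return xs

      moves = list("ABCADBCA")                     # symbolic movement sequence
      orig = trajectory(0.3, len(moves))           # original chaotic trajectory

      var = trajectory(0.3 + 1e-6, len(moves))     # slightly different initial condition
      variation = [moves[np.argmin(np.abs(orig - x))] for x in var]
      print("".join(variation))                    # a reordered, related sequence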

  6. Stimulated Raman and Brillouin scattering of polarization-smoothed and temporally smoothed laser beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, R.L.; Lefebvre, E.; Langdon, A.B.

    1999-04-01

    Control of filamentation and stimulated Raman and Brillouin scattering is shown to be possible by use of both spatial and temporal smoothing schemes. The spatial smoothing is accomplished by the use of phase plates [Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982)] and polarization smoothing [Lefebvre et al., Phys. Plasmas 5, 2701 (1998)] in which the plasma is irradiated with two orthogonally polarized, uncorrelated speckle patterns. The temporal smoothing considered here is smoothing by spectral dispersion [Skupsky et al., J. Appl. Phys. 66, 3456 (1989)] in which the speckle pattern changes on the laser coherence time scale. At the high instability gains relevant to laser fusion experiments, the effect of smoothing must include the competition among all three instabilities.

  7. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced first, though their interpolation quality needs further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel methods, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations show relatively good interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage overall, especially the symmetrical cubic B-spline interpolation, though they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
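
    One of the comparisons described, a rotation round-trip scored by peak signal-to-noise ratio, is easy to reproduce in outline; the sketch below uses scipy.ndimage with interpolation orders 0, 1 and 3 on a synthetic image (illustrative, not the paper's data or kernels).

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      img = ndimage.gaussian_filter(rng.random((128, 128)), 3.0)   # smooth test image
      c = slice(32, 96)                                            # central crop avoids edge fill

      def psnr(a, b):
          mse = np.mean((a - b) ** 2)
          return 10.0 * np.log10((a.max() - a.min()) ** 2 / mse)

      for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic B-spline")]:
          fwd = ndimage.rotate(img, 10.0, reshape=False, order=order)
          back = ndimage.rotate(fwd, -10.0, reshape=False, order=order)
          print(f"{name:>14s}: PSNR = {psnr(img[c, c], back[c, c]):.1f} dB")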

  8. Apparatus and method for measuring the thickness of a coating

    DOEpatents

    Carlson, Nancy M.; Johnson, John A.; Tow, David M.; Walter, John B

    2002-01-01

    An apparatus and method for measuring the thickness of a coating adhered to a substrate. An electromagnetic acoustic transducer is used to induce surface waves into the coating. The surface waves have a selected frequency and a fixed wavelength. Interpolation is used to determine the frequency of surface waves that propagate through the coating with the least attenuation. The phase velocity of the surface waves having this frequency is then calculated. The phase velocity is compared to known phase velocity/thickness tables to determine the thickness of the coating.
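
    The patent text does not specify the interpolation; a standard choice for this task is a three-point parabolic fit around the frequency bin with the strongest transmitted (least attenuated) amplitude, sketched here on synthetic data.

      import numpy as np

      freqs = np.linspace(1.0, 5.0, 41)                  # MHz, illustrative sweep
      amp = np.exp(-((freqs - 3.13) / 0.8) ** 2)         # synthetic transmission curve

      k = int(np.argmax(amp))
      a, b, c = amp[k - 1], amp[k], amp[k + 1]
      delta = 0.5 * (a - c) / (a - 2.0 * b + c)          # sub-bin offset of the parabola vertex
      f_best = freqs[k] + delta * (freqs[1] - freqs[0])
      print(f_best)                                      # ~3.13 MHz, the least-attenuated frequency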

  9. A method for reducing sampling jitter in digital control systems

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.; Hurd, W. J.

    1969-01-01

    A digital phase-lock loop system is designed in which the proportional control is smoothed with a low-pass filter. This method does not significantly affect the loop dynamics when the smoothing filter bandwidth is wide compared to the loop bandwidth.

  10. Symmetric and arbitrarily high-order Birkhoff-Hermite time integrators and their long-time behaviour for solving nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Iserles, Arieh; Wu, Xinyuan

    2018-03-01

    The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.

  11. Fiber orientation interpolation for the multiscale analysis of short fiber reinforced composite parts

    NASA Astrophysics Data System (ADS)

    Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf

    2018-06-01

    For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single-orientation-state effective models. We discuss the efficient implementation of our method, and present results of a component-scale computation on a benchmark component using ABAQUS®.
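
    The interpolation step itself is plain barycentric blending on the orientation triangle. The sketch below is illustrative only: the triangle vertices and the three node "surrogate models" are placeholders, whereas in the paper they come from FFT-based homogenization.

      import numpy as np

      tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])   # placeholder triangle vertices

      def barycentric(p, tri):
          T = np.column_stack((tri[0] - tri[2], tri[1] - tri[2]))
          l12 = np.linalg.solve(T, p - tri[2])
          return np.array([l12[0], l12[1], 1.0 - l12.sum()])

      node_stress = [lambda eps: 2.0 * eps,                  # placeholder node surrogates
                     lambda eps: 5.0 * eps,
                     lambda eps: 3.5 * eps]

      p = np.array([0.4, 0.2])                               # queried orientation state
      w = barycentric(p, tri)                                # linear interpolation weights
      print(w, sum(wi * f(0.01) for wi, f in zip(w, node_stress)))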

  13. SU-E-J-238: First-Order Approximation of Time-Resolved 4DMRI From Cine 2DMRI and Respiratory-Correlated 4DMRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Tyagi, N; Deasy, J

    2015-06-15

    Purpose: Cine 2DMRI is useful in MR-guided radiotherapy, but it lacks volumetric information. We explore the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI through simulation. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from an RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created containing a total of 5 to 35 3DMRI image sets, generated by 0-6 equi-spaced tri-linear interpolations between adjacent and full-inhalation/full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations from the original RC-4DMRI. To test whether accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts at the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRI was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size was assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5-15% to 1-2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7-8 to 5-6, with lung intensities ranging from 0 to 135. Conclusion: This study indicates that the quality of FOA 3DMRI improves with increasing 3DMRI library size. On-going investigations will test this approach using actual cine 2DMRI and introduce a higher-order approximation for improvements. This study is in part supported by NIH (U54CA137788 and U54CA132378).
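
    The final FOA step reduces to a linear interpolation between the two best-matching library volumes, weighted by where the target lung volume falls between theirs; a minimal sketch with stand-in arrays:

      import numpy as np

      rng = np.random.default_rng(1)
      vol_small, vol_large = rng.random((2, 64, 64, 32))   # stand-in library 3DMRI pair
      V_small, V_large, V_target = 3.1, 3.9, 3.4           # lung volumes (litres, illustrative)

      w = (V_target - V_small) / (V_large - V_small)       # interpolation weight in [0, 1]
      foa = (1.0 - w) * vol_small + w * vol_large          # first-order-approximation volume
      print(w, foa.shape)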

  14. Zero-crossing sampling of Fourier-transform interferograms and spectrum reconstruction using the real-zero interpolation method.

    PubMed

    Minami, K; Kawata, S; Minami, S

    1992-10-10

    The real-zero interpolation method is applied to a Fourier-transform infrared (FT-IR) interferogram. With this method an interferogram is reconstructed from its zero-crossing information only, without the use of a long-word analog-to-digital converter. We installed a phase-locked loop circuit into an FT-IR spectrometer for oversampling the interferogram. Infrared absorption spectra of polystyrene and Mylar films were measured as binary interferograms by the FT-IR spectrometer equipped with the developed circuits, and their Fourier spectra were successfully reconstructed. The relationship of the oversampling ratio to the dynamic range of the reconstructed interferogram was evaluated through computer simulations. We also discuss the problems of this method for practical applications.

  15. A novel power harmonic analysis method based on Nuttall-Kaiser combination window double spectrum interpolated FFT algorithm

    NASA Astrophysics Data System (ADS)

    Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.

    2017-11-01

    Harmonics pose a great threat to the safe and economical operation of power grids, so it is critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for power harmonic analysis. However, the picket-fence (barrier) effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often degrade the accuracy of harmonic analysis. This paper examines a new approach to harmonic analysis that derives correction formulas for frequency, phase angle, and amplitude using the Nuttall-Kaiser window double-spectral-line interpolation method, overcoming these shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
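
    The paper's correction formulas for the Nuttall-Kaiser window are not reproduced here, but the double-spectral-line idea is easy to illustrate with a Hann window, for which the frequency correction is closed-form: with beta the ratio of the two largest neighbouring bins, the sub-bin offset is delta = (2*beta - 1)/(beta + 1).

      import numpy as np

      fs, N = 1000.0, 1024
      f0 = 50.3                                          # asynchronously sampled tone
      t = np.arange(N) / fs
      x = np.sin(2 * np.pi * f0 * t)

      X = np.abs(np.fft.rfft(x * np.hanning(N)))
      k = int(np.argmax(X[1:])) + 1                      # largest bin (skip DC)
      k2 = k + 1 if X[k + 1] > X[k - 1] else k - 1       # second-largest neighbour
      lo, hi = min(k, k2), max(k, k2)
      beta = X[hi] / X[lo]
      delta = (2 * beta - 1) / (beta + 1)                # Hann double-line correction
      print((lo + delta) * fs / N)                       # ~50.3 Hz despite leakage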

  16. Optimal control of the gear shifting process for shift smoothness in dual-clutch transmissions

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Görges, Daniel

    2018-03-01

    The control of the transmission system in vehicles is significant for driving comfort. In order to design a controller for smooth shifting and comfortable driving, a dynamic model of a dual-clutch transmission is presented in this paper. A finite-time linear quadratic regulator is proposed for the optimal control of the two friction clutches in the torque phase of the upshift process. An integral linear quadratic regulator is introduced to regulate the relative speed difference between the engine and the slipping clutch, with the input torque optimized during the inertia phase. The control objective focuses on smoothing the upshift process so as to improve driving comfort. Considering the sensors available in vehicles for feedback control, an observer design is presented to track the immeasurable variables. Simulation results show that the jerk can be reduced in both the torque phase and the inertia phase, indicating good shift performance. Furthermore, compared with conventional controllers for the upshift process, the proposed control method can reduce shift jerk and improve shift quality.
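
    The regulator structure can be illustrated with a generic stand-in; the sketch below is not the paper's dual-clutch model or its finite-time formulation, just an infinite-horizon LQR on a toy double integrator showing the u = -Kx feedback such designs produce.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [0.0, 0.0]])    # toy slip-speed dynamics (assumed)
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])                  # state penalty (illustrative)
      R = np.array([[0.1]])                     # input-torque penalty (illustrative)

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
      print(K)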

  17. Phase transitions in restricted Boltzmann machines with generic priors

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele

    2017-10-01

    We study generalized restricted Boltzmann machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables. We present a complete analysis of the replica symmetric phase diagram of these systems, which can be regarded as generalized Hopfield models. We underline the role of the retrieval phase for both inference and learning processes and we show that retrieval is robust for a large class of weight and unit priors, beyond the standard Hopfield scenario. Furthermore, we show how the paramagnetic phase boundary is directly related to the optimal size of the training set necessary for good generalization in a teacher-student scenario of unsupervised learning.

  18. Räumlich-statistische Charakterisierung der Hydrogeochemie einer BTEX-Grundwasserkontamination am Standort "RETZINA"/Zeitz

    NASA Astrophysics Data System (ADS)

    Wachter, Thorsten; Dethlefsen, Frank; Gödeke, Stefan; Dahmke, Andreas

    Investigation of sites with groundwater contamination usually involves interpolation of point-based measurements across the aquifer under consideration. The range of autocorrelation of the data determines whether the interpolation is admissible and of sufficient quality. While such investigations exist on the regional (km) scale, only a few publications address correlation lengths at the local (tens to hundreds of metres) scale. This study investigates the accuracy of interpolation using the example of the BTEX-contaminated aquifer at the RETZINA test site in Zeitz, by means of variogram analysis and interpretation of the kriging standard deviation. The investigation is based on analytical results from more than 50 groundwater wells and 180 sediment samples. For benzene, methane, sulphate, and alkalinity, the given sample spacing of about 50 m provided sufficient interpolation accuracy. For dissolved iron, dissolved manganese, and redox potential the available data density was too low to allow interpolation, while for the major ions, pH, and electrical conductivity a wider sampling distance would have sufficed. The ranges of autocorrelation in the aquifer's solid phase proved to be smaller than the sampling distances; therefore, the iron and sulphur binding forms could not be regionalized. The results of this study are relevant for the development of sampling strategies and quality management. To improve transferability, similar investigations should be carried out at other sites.
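
    The core diagnostic used here, the empirical semivariogram, is compact to compute; the sketch below bins squared differences of a synthetic "well" data set by pair distance (the coordinates and values are stand-ins, not the Zeitz data).

      import numpy as np

      rng = np.random.default_rng(2)
      xy = rng.uniform(0, 500, size=(50, 2))                        # well positions (m)
      z = np.sin(xy[:, 0] / 80.0) + 0.1 * rng.standard_normal(50)   # stand-in concentrations

      d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
      g = 0.5 * (z[:, None] - z[None, :]) ** 2                      # pairwise semivariances
      upper = np.triu(np.ones_like(d, dtype=bool), 1)               # count each pair once

      for lo, hi in zip(range(0, 250, 50), range(50, 300, 50)):
          m = (d > lo) & (d <= hi) & upper
          print(f"{lo:3d}-{hi:3d} m: gamma = {g[m].mean():.3f}")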

  19. Quantum phases of a vortex string.

    PubMed

    Auzzi, Roberto; Prem Kumar, S

    2009-12-04

    We argue that the world sheet dynamics of magnetic k strings in the Higgs phase of the mass-deformed N = 4 theory is controlled by a bosonic O(3) sigma model with anisotropy and a topological theta term. The theory interpolates between a massless O(2) symmetric regime, a massive O(3) symmetric phase, and another massive phase with a spontaneously broken Z(2) symmetry. The first two phases are separated by a Kosterlitz-Thouless transition. When theta = pi, the O(3) symmetric phase flows to an interacting fixed point; sigma model kinks and their dyonic partners become degenerate, mirroring the behavior of monopoles in the parent gauge theory. This leads to the identification of the kinks with monopoles confined on the string.

  20. Ring magnet firing angle control

    DOEpatents

    Knott, M.J.; Lewis, L.G.; Rabe, H.H.

    1975-10-21

    A device is provided for controlling the firing angles of thyratrons (rectifiers) in a ring magnet power supply. A phase-lock loop develops a smooth AC signal of frequency equal to and in phase with the frequency of the voltage wave developed by the main generator of the power supply. A counter that counts from zero to a particular number each cycle of the main generator voltage wave is synchronized with the smooth AC signal of the phase-lock loop. Gates compare the number in the counter with predetermined desired firing angles for each thyratron, and upon coincidence the proper thyratron is fired at the predetermined firing angle.

  1. Limited Range Sesame EOS for Ta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greeff, Carl William; Crockett, Scott; Rudin, Sven Peter

    2017-03-30

    A new Sesame EOS table for Ta has been released for testing. It is a limited range table covering T ≤ 26,000 K and ρ ≤ 37.53 g/cc. The EOS is based on earlier analysis using DFT phonon calculations to infer the cold pressure from the Hugoniot. The cold curve has been extended into compression using new DFT calculations. The present EOS covers expansion into the gas phase. It is a multi-phase EOS with distinct liquid and solid phases. A cold shear modulus table (431) is included. This is based on an analytic interpolation of DFT calculations.

  2. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) are a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
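
    A toy version of the blending construction (with a plain C^1 smoothstep standing in for the C^k expo-rational basis, an assumption made purely for illustration): two local polynomials fitted to overlapping subsets are blended across the shared interval.

      import numpy as np

      x = np.linspace(0.0, 2.0, 41)
      y = np.sin(3.0 * x) + 0.02 * np.random.default_rng(3).standard_normal(41)

      left = np.polynomial.Polynomial.fit(x[x <= 1.2], y[x <= 1.2], 3)    # local fits to
      right = np.polynomial.Polynomial.fit(x[x >= 0.8], y[x >= 0.8], 3)   # two subsets

      def blend(t):                          # C^1 smoothstep on [0, 1]
          t = np.clip(t, 0.0, 1.0)
          return 3.0 * t**2 - 2.0 * t**3

      B = blend((x - 0.8) / 0.4)             # 0 left of the overlap, 1 right of it
      curve = (1.0 - B) * left(x) + B * right(x)
      print(np.max(np.abs(curve - y)))       # residual stays small across the joint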

  3. The Effect of Multigrid Parameters in a 3D Heat Diffusion Equation

    NASA Astrophysics Data System (ADS)

    Oliveira, F. De; Franco, S. R.; Pinto, M. A. Villela

    2018-02-01

    The aim of this paper is to reduce the CPU time necessary to solve the three-dimensional heat diffusion equation using Dirichlet boundary conditions. The finite difference method (FDM) is used to discretize the differential equations with a second-order accurate central difference scheme (CDS). The algebraic equation systems are solved using the lexicographical and red-black Gauss-Seidel methods, associated with the geometric multigrid method with a correction scheme (CS) and V-cycle. Comparisons are made between two types of restriction: injection and full weighting. The prolongation process used is trilinear interpolation. This work studies the influence of the smoothing value (v), number of mesh levels (L) and number of unknowns (N) on the CPU time, together with an analysis of algorithm complexity.
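
    The transfer operators being compared have simple 1-D analogues (the paper's code is 3-D, with trilinear prolongation); a sketch:

      import numpy as np

      fine = np.sin(np.linspace(0.0, np.pi, 9))       # fine-grid vector, n = 2m + 1

      inject = fine[::2]                              # injection: copy coincident points

      fullw = fine[::2].copy()                        # full weighting: 1/4, 1/2, 1/4 stencil
      fullw[1:-1] = 0.25 * fine[1:-2:2] + 0.5 * fine[2:-1:2] + 0.25 * fine[3::2]

      prolonged = np.zeros_like(fine)                 # linear prolongation back to the fine grid
      prolonged[::2] = fullw
      prolonged[1::2] = 0.5 * (fullw[:-1] + fullw[1:])
      print(inject, fullw, prolonged, sep="\n")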

  4. Super Yang Mills, matrix models and geometric transitions

    NASA Astrophysics Data System (ADS)

    Ferrari, Frank

    2005-03-01

    I explain two applications of the relationship between four-dimensional N=1 supersymmetric gauge theories, zero-dimensional gauged matrix models, and geometric transitions in string theory. The first is related to the spectrum of BPS domain walls or BPS branes. It is shown that one can smoothly interpolate between a D-brane state, whose weak coupling tension scales as N ~ 1/g_s, and a closed string solitonic state, whose weak coupling tension scales as N^2 ~ 1/g_s^2. This is part of a larger theory of N=1 quantum parameter spaces. The second is a new, purely geometric approach to summing exactly over planar diagrams in zero dimension. It is an example of open/closed string duality. To cite this article: F. Ferrari, C. R. Physique 6 (2005).

  5. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    NASA Astrophysics Data System (ADS)

    Fowlie, Andrew

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  6. Nonperturbative renormalization of the axial current in Nf=3 lattice QCD with Wilson fermions and a tree-level improved gauge action

    NASA Astrophysics Data System (ADS)

    Bulava, John; Della Morte, Michele; Heitger, Jochen; Wittemeier, Christian

    2016-06-01

    We nonperturbatively determine the renormalization factor of the axial vector current in lattice QCD with Nf=3 flavors of Wilson-clover fermions and the tree-level Symanzik-improved gauge action. The (by now standard) renormalization condition is derived from the massive axial Ward identity, and it is imposed among Schrödinger functional states with large overlap on the lowest lying hadronic state in the pseudoscalar channel, in order to reduce kinematically enhanced cutoff effects. We explore a range of couplings relevant for simulations at lattice spacings of ≈0.09 fm and below. An interpolation formula for Z_A(g_0^2), smoothly connecting the nonperturbative values to the 1-loop expression, is provided together with our final results.

  7. Application of Nondestructive Testing Techniques to Materials Testing.

    DTIC Science & Technology

    1984-01-01

  8. Vortices and quasiparticles near the superconductor-insulator transition in thin films.

    PubMed

    Galitski, Victor M; Refael, G; Fisher, Matthew P A; Senthil, T

    2005-08-12

    We study the low temperature behavior of an amorphous superconducting film driven normal by a perpendicular magnetic-field (B). For this purpose we introduce a new two-fluid formulation consisting of fermionized field-induced vortices and electrically neutralized Bogoliubov quasiparticles (spinons) interacting via a long-ranged statistical interaction. This approach allows us to access a novel non-Fermi-liquid phase, which naturally interpolates between the low B superconductor and the high B normal metal. We discuss the properties of the resulting "vortex metal" phase.

  9. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
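
    The matrix-manifold interpolation itself can be sketched with a simple stand-in; the snippet below uses log-Euclidean weighted averaging of two symmetric positive definite reduced operators (an illustrative choice; the paper's framework also includes the consistency-enforcing transformations, which are omitted here).

      import numpy as np
      from scipy.linalg import expm, logm

      A1 = np.array([[2.0, 0.3], [0.3, 1.0]])    # reduced operators pre-computed at two
      A2 = np.array([[4.0, 0.1], [0.1, 2.0]])    # sampled parameter points (stand-ins)
      w = 0.25                                   # queried, unsampled parameter point

      A_interp = expm((1.0 - w) * np.real(logm(A1)) + w * np.real(logm(A2)))
      print(A_interp)                            # stays symmetric positive definite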

  10. Spatiotemporal Visualization of Time-Series Satellite-Derived CO2 Flux Data Using Volume Rendering and Gpu-Based Interpolation on a Cloud-Driven Digital Earth

    NASA Astrophysics Data System (ADS)

    Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  12. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation, and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling workflow consists of three steps: the large object is separated into two parts, each part is modeled, and the whole 3D model of the stone lion is then composed from the two part models. When the two part models are combined, an obvious fissure line appears in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels of the two adjacent textures with various algorithms; however, these algorithms have limited effect and the fissure line remains. The algorithm presented here addresses the gray unevenness of the two adjacent textures: the fissure line in the overlapping section is eliminated, and the gray transition in the overlap becomes smoother.
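
    In essence the overlap treatment is a per-pixel cross-fade: a blending weight ramps linearly from 0 to 1 across the buffer so the two textures mix instead of meeting at a hard line. A minimal sketch with flat synthetic textures:

      import numpy as np

      rng = np.random.default_rng(4)
      h, w, buf = 64, 96, 16
      tex_a = 120.0 + rng.normal(0.0, 2.0, (h, w))               # darker texture
      tex_b = 140.0 + rng.normal(0.0, 2.0, (h, w))               # brighter texture

      ramp = np.clip((np.arange(w) - (w - buf) // 2) / buf, 0.0, 1.0)
      blended = (1.0 - ramp) * tex_a + ramp * tex_b              # per-pixel linear interpolation
      print(blended[:, ::16].mean(axis=0).round(1))              # smooth 120 -> 140 transition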

  13. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
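
    The kernel splitting at the heart of the method is easy to visualize; the sketch below splits 1/r into a short-range part that vanishes beyond a cutoff a and a smooth long-range part g(r) that a coarse grid can represent, using an even polynomial that matches 1/r up to the second derivative at r = a (one common choice, shown for illustration rather than as the paper's exact splitting).

      import numpy as np

      a = 2.0                                            # split distance

      def g(r):
          rho = r / a
          smooth = (15/8 - (5/4) * rho**2 + (3/8) * rho**4) / a   # even polynomial inside the cutoff
          return np.where(rho >= 1.0, 1.0 / np.maximum(r, 1e-300), smooth)

      r = np.linspace(0.1, 4.0, 9)
      print(np.round(1.0 / r - g(r), 6))                 # short-range part: zero for r >= a
      print(np.round(g(r), 6))                           # smooth, bounded long-range part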

  15. Ground-state energy of an exciton-(LO) phonon system in a parabolic quantum well

    NASA Astrophysics Data System (ADS)

    Gerlach, B.; Wüsthoff, J.; Smondyrev, M. A.

    1999-12-01

    This paper presents a variational study of the ground-state energy of an exciton-(LO) phonon system, which is spatially confined to a quantum well. The exciton-phonon interaction is of Fröhlich type, the confinement potentials are assumed to be parabolic functions of the coordinates. Making use of functional integral techniques, the phonon part of the problem can be eliminated exactly, leading us to an effective two-particle system, which has the same spectral properties as the original one. Subsequently, Jensen's inequality is applied to obtain an upper bound on the ground-state energy. The main intention of this paper is to analyze the influence of the quantum-well-induced localization of the exciton on its ground-state energy (or its binding energy, respectively). To do so, we neglect any mismatch of the masses or the dielectric constants, but admit an arbitrary strength of the confinement potentials. Our approach allows for a smooth interpolation of the ultimate limits of vanishing and infinite confinement, corresponding to the cases of a free three-dimensional and a free two-dimensional exciton-phonon system. The interpolation formula for the ground-state energy bound corresponds to similar formulas for the free polaron or the free exciton-phonon system. These bounds in turn are known to compare favorably with all previous ones, which we are aware of.

  16. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. a.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state that are optimal in some sense from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
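
    The OI analysis step has the standard linear minimum-variance form x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b); a tiny numpy sketch with an assumed Gaussian background covariance B and observation noise R shows how it fills data voids while smoothing:

      import numpy as np

      n = 20                                          # analysis grid points
      grid = np.linspace(0.0, 1.0, n)
      B = np.exp(-((grid[:, None] - grid[None, :]) / 0.2) ** 2)   # assumed background covariance

      obs_idx = [3, 9, 15]                            # observation locations; voids elsewhere
      H = np.zeros((len(obs_idx), n))
      H[range(len(obs_idx)), obs_idx] = 1.0
      R = 0.05 * np.eye(len(obs_idx))                 # assumed observation error covariance

      x_b = np.zeros(n)                               # background (first guess)
      y = np.array([1.0, 0.4, -0.6])                  # observations

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # optimal (OI) gain
      x_a = x_b + K @ (y - H @ x_b)                   # analysis: voids filled, noise smoothed
      print(x_a.round(2))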

  17. Using a short-pulse diffraction-limited laser beam to probe filamentation of a random phase plate smoothed beam.

    PubMed

    Kline, J L; Montgomery, D S; Flippo, K A; Johnson, R P; Rose, H A; Shimada, T; Williams, E A

    2008-10-01

    A short pulse (few picoseconds) laser probe provides high temporal resolution measurements to elucidate details of fast dynamic phenomena not observable with typical longer laser pulse probes and gated diagnostics. Such a short pulse laser probe (SPLP) has been used to measure filamentation of a random phase plate (RPP) smoothed laser beam in a gas-jet plasma. The plasma index of refraction due to driven density and temperature fluctuations by the RPP beam perturbs the phase front of a SPLP propagating at a 90 degree angle with respect to the RPP interaction beam. The density and temperature fluctuations are quasistatic on the time scale of the SPLP (approximately 2 ps). The transmitted near-field intensity distribution from the SPLP provides a measure of the phase front perturbation. At low plasma densities, the transmitted intensity pattern is asymmetric with striations across the entire probe beam in the direction of the RPP smoothed beam. As the plasma density increases, the striations break up into smaller sizes along the direction of the RPP beam propagation. The breakup of the intensity pattern is consistent with self-focusing of the RPP smoothed interaction beam. Simulations of the experiment using the wave propagation code, PF3D, are in qualitative agreement demonstrating that the asymmetric striations can be attributed to the RPP driven density fluctuations. Quantification of the beam breakup measured by the transmitted SPLP could lead to a new method for measuring self-focusing of lasers in underdense plasmas.

  18. Applications of compressed sensing image reconstruction to sparse view phase tomography

    NASA Astrophysics Data System (ADS)

    Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian

    2017-10-01

    X-ray phase CT has the potential to give higher contrast in soft-tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss of smooth intensity changes due to the piecewise-constant nature of the image model. The second CS method is a relatively new approach which uses a nonlinear smoothing filter to design the regularization term. The nonlinear-filter-based CS is expected to reduce the major artifact of the TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, past research has not clearly demonstrated how much the image quality differs between TV regularization and nonlinear-filter-based CS in x-ray phase CT applications. We clarify this issue by applying the two CS approaches to x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear-filter-based CS outperforms the TV regularization in terms of textures and smooth intensity changes.

  19. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
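
    The simplest of the four methods, MBR-Interpolation, amounts to corner-wise linear interpolation of the minimum bounding rectangles of two consecutive reports; a minimal sketch (coordinates and times are made up):

      import numpy as np

      t1, t2, t_query = 0.0, 600.0, 150.0             # report times and query time (s)
      mbr1 = np.array([[10.0, 20.0], [30.0, 45.0]])   # [[xmin, ymin], [xmax, ymax]] at t1
      mbr2 = np.array([[18.0, 26.0], [40.0, 50.0]])   # the same event's MBR at t2

      w = (t_query - t1) / (t2 - t1)
      print((1.0 - w) * mbr1 + w * mbr2)              # synthesized MBR at the queried time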

  20. SSD with generalized phase modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rothenberg, J.

    1996-01-09

    Smoothing by spectral dispersion (SSD) with standard frequency modulation (FM), although simple to implement, has the disadvantage that low spatial frequencies present in the spectrum of the target illumination are not smoothed as effectively as with a more general smoothing method (e.g., the induced spatial incoherence method). The reduced smoothing performance of standard FM-SSD can result in spectral power of the speckle noise at these low spatial frequencies as much as one order of magnitude larger than that achieved with a more general method. In fact, at small integration times FM-SSD has no smoothing effect at all for a broad band of low spatial frequencies. This effect may have important implications for both direct and indirect drive ICF.

  1. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weights coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.

  2. Interpolator for numerically controlled machine tools

    DOEpatents

    Bowers, Gary L.; Davenport, Clyde M.; Stephens, Albert E.

    1976-01-01

    A digital differential analyzer circuit is provided that depending on the embodiment chosen can carry out linear, parabolic, circular or cubic interpolation. In the embodiment for parabolic interpolations, the circuit provides pulse trains for the X and Y slide motors of a two-axis machine to effect tool motion along a parabolic path. The pulse trains are generated by the circuit in such a way that parabolic tool motion is obtained from information contained in only one block of binary input data. A part contour may be approximated by one or more parabolic arcs. Acceleration and initial velocity values from a data block are set in fixed bit size registers for each axis separately but simultaneously and the values are integrated to obtain the movement along the respective axis as a function of time. Integration is performed by continual addition at a specified rate of an integrand value stored in one register to the remainder temporarily stored in another identical size register. Overflows from the addition process are indicative of the integral. The overflow output pulses from the second integration may be applied to motors which position the respective machine slides according to a parabolic motion in time to produce a parabolic machine tool motion in space. An additional register for each axis is provided in the circuit to allow "floating" of the radix points of the integrand registers and the velocity increment to improve position accuracy and to reduce errors encountered when the acceleration integrand magnitudes are small when compared to the velocity integrands. A divider circuit is provided in the output of the circuit to smooth the output pulse spacing and prevent motor stall, because the overflow pulses produced in the binary addition process are spaced unevenly in time. The divider has the effect of passing only every nth motor drive pulse, with n being specifiable. The circuit inputs (integrands, rates, etc.) are scaled to give exactly n times the desired number of pulses out, in order to compensate for the divider.
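
    A software analogue of the double-integration scheme is short to write; the sketch below mimics the two fixed-width accumulators (an assumed 16-bit width), with each overflow of the second accumulator standing for one motor step pulse.

      BITS = 16
      MOD = 1 << BITS

      def dda_pulses(accel, v0, steps):
          """Two-stage DDA: acceleration -> velocity -> position; overflows are pulses."""
          vel, acc_v, acc_p, pulses = v0, 0, 0, 0
          for _ in range(steps):
              acc_v += accel                 # first integration (velocity accumulator)
              if acc_v >= MOD:
                  acc_v -= MOD
                  vel += 1                   # overflow bumps the velocity integrand
              acc_p += vel                   # second integration (position accumulator)
              if acc_p >= MOD:
                  acc_p -= MOD
                  pulses += 1                # overflow = one output step pulse
          return pulses

      print(dda_pulses(accel=40, v0=3000, steps=20000))   # parabolic motion from one data block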

  3. A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    McNally, Colin P.

    2012-08-01

    This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems, including one part in a million amplitude linear MHD waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities are presented. Finally we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on two dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPHMHD, and Phurbas. A strict comparison, judgment, or ranking, between codes is beyond the scope of this work, although this work provides the mathematical framework needed for such a study. Nonetheless, how the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence free constraint on the magnetic field has been difficult. We observe that for point-based discretization, as used in finite-difference type and pseudo-spectral methods, the divergence free constraint can be satisfied entirely by a choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference type derivative operators on a regular grid which has the divergence free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated in this chapter are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one dimensional models including non-ideal MHD, and radiative transfer.
For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 Kelvin to 2000 Kelvin in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
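
    The core interpolation step above can be illustrated with a small moving-least-squares fit. The sketch below is a minimal 2-D, second-order version (the thesis uses third-order interpolants); the function name, the Gaussian weight, and the kernel radius h are illustrative assumptions, not the thesis implementation.

    ```python
    import numpy as np

    def mls_fit_2d(xy, f, x0, h):
        """Weighted least-squares quadratic fit around x0.

        xy : (n, 2) neighbor positions; f : (n,) field values;
        x0 : (2,) evaluation point; h : weight kernel radius (assumed Gaussian).
        Returns the fitted field value and gradient at x0.
        """
        d = xy - x0                                   # local coordinates
        w = np.exp(-(d ** 2).sum(axis=1) / h ** 2)    # neighbor weights
        # Quadratic basis: 1, x, y, x^2, xy, y^2 (the thesis uses third order)
        A = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                             d[:, 0] ** 2, d[:, 0] * d[:, 1], d[:, 1] ** 2])
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
        return coef[0], coef[1:3]                     # value, (df/dx, df/dy)

    # Recover a smooth field and its gradient at the origin from 40 neighbors.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, (40, 2))
    vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
    v, g = mls_fit_2d(pts, vals, np.array([0.0, 0.0]), h=0.5)
    print(v, g)   # v ~ 0, g ~ (1, 0)
    ```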

  4. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
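
    A toy numeric illustration of the Γ-biased measure follows; the observable, target value, tolerance, and sampling scheme are all invented for the sketch and are far smaller than the paper's large-N setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "version space": distributions over 4 states whose expectation of
    # one random observable matches a target value within a tolerance.
    obs = np.array([1.0, -1.0, 1.0, -1.0])
    target, tol = 0.2, 0.05

    P = rng.dirichlet(np.ones(4), size=200_000)       # candidate distributions
    P = P[np.abs(P @ obs - target) < tol]             # keep the version space

    S = -(P * np.log(P)).sum(axis=1)                  # entropy of each member

    for gamma in (0.0, 5.0, 50.0):
        w = np.exp(gamma * (S - S.max()))             # biased measure ~ e^{gamma*S}
        w /= w.sum()
        print(f"Gamma = {gamma:5.1f}: <S> = {w @ S:.4f}  (max S = {S.max():.4f})")
    # As Gamma grows, the measure concentrates on the maximum-entropy member.
    ```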

  5. Three-dimensional modelling and geothermal process simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, K.L.

    1990-01-01

    The subsurface geological model or 3-D GIS is constructed from three kinds of objects, which are a lithotope (in boundary representation), a number of fault systems, and volumetric textures (vector fields). The chief task of the model is to yield an estimate of the conductance tensors (fluid permeability and thermal conductivity) throughout an array of voxels. This is input as material properties to a FEHM numerical physical process model. The main task of the FEHM process model is to distinguish regions of convective from regions of conductive heat flow, and to estimate the fluid phase, pressure and flow paths. Themore » temperature, geochemical, and seismic data provide the physical constraints on the process. The conductance tensors in the Franciscan Complex are to be derived by the addition of two components. The isotropic component is a stochastic spatial variable due to disruption of lithologies in melange. The deviatoric component is deterministic, due to smoothness and continuity in the textural vector fields. This decomposition probably also applies to the engineering hydrogeological properties of shallow terrestrial fluvial systems. However there are differences in quantity. The isotropic component is much more variable in the Franciscan, to the point where volumetric averages are misleading, and it may be necessary to select that component from several, discrete possible states. The deviatoric component is interpolated using a textural vector field. The Franciscan field is much more complicated, and contains internal singularities. 27 refs., 10 figs.« less

  6. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
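
    A toy 1-D version of gridding with a Kaiser-Bessel kernel is sketched below. The kernel width, shape parameter beta, grid size, and the omission of density compensation and deapodization are simplifications for illustration, not the parameters recommended in the gridding literature.

    ```python
    import numpy as np
    from scipy.special import i0      # zeroth-order modified Bessel function

    def kb_kernel(u, width=4, beta=8.0):
        """Kaiser-Bessel interpolation kernel, supported on |u| <= width/2."""
        u = np.asarray(u, dtype=float)
        out = np.zeros_like(u)
        inside = np.abs(u) <= width / 2
        out[inside] = i0(beta * np.sqrt(1 - (2 * u[inside] / width) ** 2)) / i0(beta)
        return out

    def grid_1d(k, data, n=64, width=4, beta=8.0):
        """Spread nonuniform k-space samples onto a uniform grid (1-D toy;
        no density compensation or deapodization)."""
        grid = np.zeros(n, dtype=complex)
        for kj, dj in zip(k, data):
            centre = kj * n                    # k in cycles/FOV -> grid units
            lo = int(np.ceil(centre - width / 2))
            idx = np.arange(lo, lo + width + 1)
            grid[idx % n] += dj * kb_kernel(idx - centre, width, beta)
        return grid

    rng = np.random.default_rng(2)
    k = rng.uniform(-0.5, 0.5, 400)
    data = np.exp(2j * np.pi * k * 5.0)        # point object at x = 5
    print(np.abs(np.fft.fft(grid_1d(k, data))).argmax())   # -> 5
    ```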

  7. A better GRACE solution for improving the regional Greenland mass balance

    NASA Astrophysics Data System (ADS)

    Schrama, E.; Xu, Z.

    2012-04-01

    In most GRACE-based research, a variety of smoothing methods is employed to remove alternating bands of positive and negative stripes stretching in the north-south direction. Many studies have suggested smoothing the GRACE maps, on which mass variations are represented as equivalent water height (EWH). Such maps are capable of exposing the redistribution of earth surface mass over time. In Greenland the shrinking of the ice cap has become significant in the last decade. Our present study confirms that the dominant melting trends are in the east and southeast coastal zones; however, the smoothed signals along the coastline in these areas represent averaged rather than original GRACE measurements, which means that the negative mass-variation signal is mixed with nearby positive signals. An exact identification of the topographic edge is not possible and visually the EWH maps appear blurred. To improve this, we first used spherical harmonic coefficients of GRACE level-2 data from CSR-RL04 and produced a smoothed EWH map. Empirical Orthogonal Functions (EOF)/Principal Component Analysis (PCA) have been introduced as well, in order to extract the melting information associated with the recent warming climate. Next, the Greenland area is redefined by 16 basins and the corresponding melting zones are quantified respectively. Least-squares methods are invoked to interpolate the mass distribution function on each basin. In this way we are able to estimate the regional ice melting rate more accurately and we sharpen the EWH map. After comparing our results with a hydrological model, the combination SMB - D is established, which contains the surface mass balance (SMB) and ice discharge (D). A general agreement can be reached, and it turns out this method is capable of enhancing our understanding of the shrinking global cryosphere.

  8. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
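
    The row/column decomposition can be sketched with classical 1-D interpolators standing in for the paper's registration-based control grid interpolator; the function name and test image below are invented for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    def resize_separable(img, new_h, new_w, kind="cubic"):
        """Resize a 2-D array with two independent 1-D interpolation passes
        (a classical stand-in for DMCGI's 1-D control grid interpolator)."""
        h, w = img.shape
        tmp = interp1d(np.linspace(0, 1, w), img, kind=kind, axis=1)(
            np.linspace(0, 1, new_w))                 # pass 1: along rows
        return interp1d(np.linspace(0, 1, h), tmp, kind=kind, axis=0)(
            np.linspace(0, 1, new_h))                 # pass 2: along columns

    img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test image
    print(resize_separable(img, 16, 16).shape)           # (16, 16)
    ```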

  9. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one class of methods developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  10. Post-Dryout Heat Transfer to a Refrigerant Flowing in Horizontal Evaporator Tubes

    NASA Astrophysics Data System (ADS)

    Mori, Hideo; Yoshida, Suguru; Kakimoto, Yasushi; Ohishi, Katsumi; Fukuda, Kenichi

    Studies of the post-dryout heat transfer were made based on the experimental data for HFC-134a flowing in horizontal smooth and spirally grooved (micro-fin) tubes, and the characteristics of the post-dryout heat transfer were clarified. The heat transfer coefficient at medium and high mass flow rates in the smooth tube was lower than the single-phase heat transfer coefficient of the superheated vapor flow, of which the mass flow rate was given on the assumption that the flow was in thermodynamic equilibrium. A prediction method of the post-dryout heat transfer coefficient was developed to reproduce the measurement satisfactorily for the smooth tube. The post-dryout heat transfer in the micro-fin tube can be regarded approximately as superheated vapor single-phase heat transfer.

  11. ProteinShader: illustrative rendering of macromolecules

    PubMed Central

    Weber, Joseph R

    2009-01-01

    Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space filling or balls and sticks style models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free platform-independent open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming to the graphics processor unit, ProteinShader is able to produce high quality images and illustrative rendering effects in real-time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
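
    The "gradually rotating" tubes and ribbons rely on spherical linear interpolation of orientations. Below is a minimal quaternion slerp sketch; the quaternion representation and short-arc handling are standard technique, not ProteinShader's actual Java/GLSL code.

    ```python
    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between unit quaternions q0, q1."""
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
        if dot < 0:                  # take the short arc
            q1, dot = -q1, -dot
        theta = np.arccos(dot)
        if theta < 1e-8:             # nearly parallel: fall back to lerp
            return q0 + t * (q1 - q0)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    # Frame orientation halfway between identity and a 90 deg turn about z.
    q_id = np.array([1.0, 0.0, 0.0, 0.0])                      # (w, x, y, z)
    q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(slerp(q_id, q_z90, 0.5))   # 45 deg about z
    ```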

  12. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piece-wise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from the filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as for discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations when all other algorithms fail.
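
    A much-reduced 1-D illustration of the filter-then-unwrap idea: denoise the wrapped phase on the complex unit circle (so smoothing never crosses a 2π jump), then unwrap. The uniform filter stands in for the paper's adaptive LPA filters and np.unwrap for PUMA; the signal and noise levels are invented.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter1d

    rng = np.random.default_rng(3)

    # Noisy wrapped observations of a smooth absolute phase (1-D stand-in
    # for the paper's 2-D multifrequency setting).
    x = np.linspace(0, 1, 400)
    true_phase = 8 * np.sin(2 * np.pi * x)         # exceeds +/- pi repeatedly
    wrapped = np.angle(np.exp(1j * (true_phase + 0.3 * rng.normal(size=x.size))))

    # Denoise on the unit circle, then unwrap the filtered phase.
    z = np.exp(1j * wrapped)
    z_f = uniform_filter1d(z.real, 9) + 1j * uniform_filter1d(z.imag, 9)
    estimate = np.unwrap(np.angle(z_f))
    estimate += true_phase[0] - estimate[0]        # align the arbitrary 2*pi offset

    print(np.max(np.abs(estimate - true_phase)))   # residual error after filtering
    ```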

  13. Effect of smoothing on robust chaos.

    PubMed

    Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae

    2010-08-01

    In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where border is the phase-space region where discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a parameter of smoothing and find that the measure of the periodic windows in the parameter space scales linearly with the parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise linear dynamical system, which has been implemented as an electronic circuit, is also provided.

  14. Jet angularity measurements for single inclusive jet production

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Lee, Kyle; Ringer, Felix

    2018-04-01

    We study jet angularity measurements for single-inclusive jet production at the LHC. Jet angularities depend on a continuous parameter a, allowing for a smooth interpolation between different traditional jet shape observables. We establish a factorization theorem within Soft Collinear Effective Theory (SCET) where we consistently take into account in- and out-of-jet radiation by making use of semi-inclusive jet functions. For comparison, we elaborate on the differences to jet angularities measured on an exclusive jet sample. All the necessary ingredients for the resummation at next-to-leading logarithmic (NLL) accuracy are presented within the effective field theory framework. We expect semi-inclusive jet angularity measurements to be feasible at the LHC and we present theoretical predictions for the relevant kinematic range. In addition, we investigate the potential impact of jet angularities for quark-gluon discrimination.

  15. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  16. Potential energy surface interpolation with neural networks for instanton rate calculations

    NASA Astrophysics Data System (ADS)

    Cooper, April M.; Hallmen, Philipp P.; Kästner, Johannes

    2018-03-01

    Artificial neural networks are used to fit a potential energy surface (PES). We demonstrate the benefits of using not only energies but also their first and second derivatives as training data for the neural network. This ensures smooth and accurate Hessian surfaces, which are required for rate constant calculations using instanton theory. Our aim was a local, accurate fit rather than a global PES because instanton theory requires information on the potential only in the close vicinity of the main tunneling path. Elongations along vibrational normal modes at the transition state are used as coordinates for the neural network. The method is applied to the hydrogen abstraction reaction from methanol, calculated on a coupled-cluster level of theory. The reaction is essential in astrochemistry to explain the deuteration of methanol in the interstellar medium.
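
    The value of derivative training data can be seen even with a linear stand-in for the neural network: fitting a polynomial basis to energies and first derivatives jointly by least squares. The basis, data, and degree below are invented for the sketch; the paper fits a neural network to energies, gradients, and Hessians.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(-1, 1, 12)
    E = np.cos(2 * x)                  # "energies"
    G = -2 * np.sin(2 * x)             # "gradients" dE/dx

    deg = 6
    A_E = np.vander(x, deg + 1, increasing=True)     # rows: 1, x, ..., x^deg
    A_G = np.zeros_like(A_E)
    for j in range(1, deg + 1):                      # derivative rows: j * x^(j-1)
        A_G[:, j] = j * x ** (j - 1)

    # Solve the stacked energy + gradient equations in a least-squares sense.
    coef, *_ = np.linalg.lstsq(np.vstack([A_E, A_G]),
                               np.concatenate([E, G]), rcond=None)

    xt = np.linspace(-1, 1, 5)
    print(np.vander(xt, deg + 1, increasing=True) @ coef - np.cos(2 * xt))
    ```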

  17. Topological resolution of gauge theory singularities

    NASA Astrophysics Data System (ADS)

    Saracco, Fabio; Tomasiello, Alessandro; Torroba, Gonzalo

    2013-08-01

    Some gauge theories with Coulomb branches exhibit singularities in perturbation theory, which are usually resolved by nonperturbative physics. In string theory this corresponds to the resolution of timelike singularities near the core of orientifold planes by effects from F or M theory. We propose a new mechanism for resolving Coulomb branch singularities in three-dimensional gauge theories, based on Chern-Simons interactions. This is illustrated in a supersymmetric SU(2) Yang-Mills-Chern-Simons theory. We calculate the one-loop corrections to the Coulomb branch of this theory and find a result that interpolates smoothly between the high-energy metric (that would exhibit the singularity) and a regular singularity-free low-energy result. We suggest possible applications to singularity resolution in string theory and speculate on a relationship to a similar phenomenon for the orientifold six-plane in massive IIA supergravity.

  18. Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.

  19. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
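
    A rough sketch of the comparison, assuming scipy's spline orders 0, 1, and 3 as stand-ins for the nearest-neighbour, tri-linear, and B-spline schemes, a synthetic binary volume, and bone volume fraction (BV/TV) as a simple morphometric index.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(5)
    vol = ndimage.gaussian_filter(rng.normal(size=(40, 40, 40)), 3) > 0

    # Re-sample the volume with a sub-voxel shift (as registration would),
    # re-binarise, and track the change in BV/TV.
    bvtv0 = vol.mean()
    for order, name in [(0, "nearest"), (1, "tri-linear"), (3, "B-spline")]:
        moved = ndimage.shift(vol.astype(float), (0.31, -0.47, 0.22), order=order)
        bvtv = (moved > 0.5).mean()
        print(f"{name:10s}: BV/TV change = {abs(bvtv - bvtv0) / bvtv0:.4%}")
    ```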

  20. Nonuniform transmission line codirectional couplers for hybrid MIMIC and superconductive applications

    NASA Astrophysics Data System (ADS)

    Uysal, Sener; Turner, Charles W.; Watkins, John

    1994-03-01

    A new design approach for thin-film codirectional quadrature couplers and their applications is described. An in-depth analysis and semi-empirical design curves are presented for these couplers. Forward-wave coupling is achieved by making use of the difference between even- and odd-mode phase velocities. Modified nonuniform codirectional couplers with a dummy channel for continuously decreasing or increasing taper and employing wiggly, serpentined and smooth coupled edges have been designed and tested. It is found that a wiggly coupler can achieve a 50% length reduction compared to a smooth-edge coupler. A further 60% length reduction compared to a wiggly coupler is achieved by a serpentine coupler. Coupler performance for wiggly and serpentined configurations is computed by choosing a realizable phase velocity function for a given coupler length. Either a constant +90° or -90° phase shift is possible with these couplers, giving significant design flexibility in some applications. The results for a Ku-band Sigma-Delta Magic-T circuit employing a 0 dB wiggly coupler and a -3 dB smooth-edge coupler are also presented.

  1. Phasic Contractions of the Mouse Vagina and Cervix at Different Phases of the Estrus Cycle and during Late Pregnancy

    PubMed Central

    Gravina, Fernanda S.; van Helden, Dirk F.; Kerr, Karen P.; de Oliveira, Ramatis B.; Jobling, Phillip

    2014-01-01

    Background/Aims The pacemaker mechanisms activating phasic contractions of vaginal and cervical smooth muscle remain poorly understood. Here, we investigate properties of pacemaking in vaginal and cervical tissues by determining whether: 1) functional pacemaking is dependent on the phase of the estrus cycle or pregnancy; 2) pacemaking involves Ca2+ release from sarcoplasmic/endoplasmic reticulum Ca2+-ATPase (SERCA)-dependent intracellular Ca2+ stores; and 3) c-Kit and/or vimentin immunoreactive ICs have a role in pacemaking. Methodology/Principal Findings Vaginal and cervical contractions were measured in vitro, as was the distribution of c-Kit and vimentin positive interstitial cells (ICs). Cervical smooth muscle was spontaneously active in estrus and metestrus but quiescent during proestrus and diestrus. Vaginal smooth muscle was normally quiescent but exhibited phasic contractions in the presence of oxytocin or the K+ channel blocker tetraethylammonium (TEA) chloride. Spontaneous contractions in the cervix and TEA-induced phasic contractions in the vagina persisted in the presence of cyclopiazonic acid (CPA), a blocker of the SERCA that refills intracellular SR Ca2+ stores, but were inhibited in low Ca2+ solution or in the presence of nifedipine, an inhibitor of L-type Ca2+ channels. ICs were found in small numbers in the mouse cervix but not in the vagina. Conclusions/Significance Cervical smooth muscle strips taken from mice in estrus, metestrus or late pregnancy were generally spontaneously active. Vaginal smooth muscle strips were normally quiescent but could be induced to exhibit phasic contractions independent of the phase of the estrus cycle or late pregnancy. Spontaneous cervical or TEA-induced vaginal phasic contractions were not mediated by ICs or intracellular Ca2+ stores. Given that vaginal smooth muscle is normally quiescent, it is likely that increases in hormones such as oxytocin, as might occur through sexual stimulation, enhance the effectiveness of such pacemaking until phasic contractile activity emerges. PMID:25337931

  2. Phasic contractions of the mouse vagina and cervix at different phases of the estrus cycle and during late pregnancy.

    PubMed

    Gravina, Fernanda S; van Helden, Dirk F; Kerr, Karen P; de Oliveira, Ramatis B; Jobling, Phillip

    2014-01-01

    The pacemaker mechanisms activating phasic contractions of vaginal and cervical smooth muscle remain poorly understood. Here, we investigate properties of pacemaking in vaginal and cervical tissues by determining whether: 1) functional pacemaking is dependent on the phase of the estrus cycle or pregnancy; 2) pacemaking involves Ca2+ release from sarcoplasmic/endoplasmic reticulum Ca2+-ATPase (SERCA)-dependent intracellular Ca2+ stores; and 3) c-Kit and/or vimentin immunoreactive ICs have a role in pacemaking. Vaginal and cervical contractions were measured in vitro, as was the distribution of c-Kit and vimentin positive interstitial cells (ICs). Cervical smooth muscle was spontaneously active in estrus and metestrus but quiescent during proestrus and diestrus. Vaginal smooth muscle was normally quiescent but exhibited phasic contractions in the presence of oxytocin or the K+ channel blocker tetraethylammonium (TEA) chloride. Spontaneous contractions in the cervix and TEA-induced phasic contractions in the vagina persisted in the presence of cyclopiazonic acid (CPA), a blocker of the SERCA that refills intracellular SR Ca2+ stores, but were inhibited in low Ca2+ solution or in the presence of nifedipine, an inhibitor of L-type Ca2+ channels. ICs were found in small numbers in the mouse cervix but not in the vagina. Cervical smooth muscle strips taken from mice in estrus, metestrus or late pregnancy were generally spontaneously active. Vaginal smooth muscle strips were normally quiescent but could be induced to exhibit phasic contractions independent of the phase of the estrus cycle or late pregnancy. Spontaneous cervical or TEA-induced vaginal phasic contractions were not mediated by ICs or intracellular Ca2+ stores. Given that vaginal smooth muscle is normally quiescent, it is likely that increases in hormones such as oxytocin, as might occur through sexual stimulation, enhance the effectiveness of such pacemaking until phasic contractile activity emerges.

  3. Improvements to the fastex flutter analysis computer code

    NASA Technical Reports Server (NTRS)

    Taylor, Ronald F.

    1987-01-01

    Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modification of the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, damping, and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.

  4. Super-resolution reconstruction for 4D computed tomography of the lung via the projections onto convex sets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei

    2014-11-01

    Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior–inferior direction. This disadvantage results in an interslice thickness that is much greater than in-plane voxel resolutions. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input “frames” to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different “frames.” Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. The proposed method can generate clearer lung images and enhance image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% versus BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.

  5. High-resolution correlation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.

    2007-09-01

    In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time-lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. There have been many methods proposed to estimate signal delay to more accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques, such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex valued signals.
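
    The zero-crossing idea can be sketched for complex (analytic) signals: locate the correlation magnitude peak, then solve for the lag at which the local correlation phase crosses zero. This is a simplified stand-in for PBDEM, with an invented band-limited test signal.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(6)

    # Band-limited noise and a copy delayed by a non-integer number of
    # samples (the delay is applied exactly in the frequency domain).
    n = 1024
    s = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * (np.arange(n // 2 + 1) < 100), n)
    true_delay = 7.37
    f = np.fft.rfftfreq(n)
    s_d = np.fft.irfft(np.fft.rfft(s) * np.exp(-2j * np.pi * f * true_delay), n)

    # Complex cross-correlation of the analytic signals.
    xc = np.correlate(hilbert(s_d), hilbert(s), mode="full")
    lags = np.arange(-n + 1, n)
    k = np.abs(xc).argmax()                       # integer-lag peak

    # The correlation phase crosses zero at the true delay near the peak.
    ph = np.unwrap(np.angle(xc[k - 2:k + 3]))     # local unwrapped phase
    slope = np.gradient(ph).mean()                # ~ centre frequency (rad/lag)
    delay_est = lags[k] - ph[2] / slope
    print(true_delay, delay_est)                  # sub-sample agreement
    ```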

  6. Composite Pillars with a Tunable Interface for Adhesion to Rough Substrates

    PubMed Central

    2016-01-01

    The benefits of synthetic fibrillar dry adhesives for temporary and reversible attachment to hard objects with smooth surfaces have been successfully demonstrated in previous studies. However, surface roughness induces a dramatic reduction in pull-off stresses and necessarily requires revised design concepts. Toward this aim, we introduce cylindrical two-phase single pillars, which are composed of a mechanically stiff stalk and a soft tip layer. Adhesion to smooth and rough substrates is shown to exceed that of conventional pillar structures. The adhesion characteristics can be tuned by varying the thickness of the soft tip layer, the ratio of the Young’s moduli and the curvature of the interface between the two phases. For rough substrates, adhesion values similar to those obtained on smooth substrates were achieved. Our concept of composite pillars overcomes current practical limitations caused by surface roughness and opens up fields of application where roughness is omnipresent. PMID:27997118

  7. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  8. Accelerated path-integral simulations using ring-polymer interpolation

    NASA Astrophysics Data System (ADS)

    Buxton, Samuel J.; Habershon, Scott

    2017-12-01

    Imaginary-time path-integral (PI) molecular simulations can be used to calculate exact quantum statistical mechanical properties for complex systems containing many interacting atoms and molecules. The limiting computational factor in a PI simulation is typically the evaluation of the potential energy surface (PES) and forces at each ring-polymer "bead"; for an n-bead ring-polymer, the cost of a PI simulation is typically n times that of the corresponding classical simulation. To address the increased computational effort of PI simulations, several approaches have been developed recently, most notably based on the idea of ring-polymer contraction, which exploits either the separation of the PES into short-range and long-range contributions or the availability of a computationally inexpensive PES which can be incorporated to effectively smooth the ring-polymer PES; neither approach is satisfactory in applications to systems modeled by PESs given by on-the-fly ab initio calculations. In this article, we describe a new method, ring-polymer interpolation (RPI), which can be used to accelerate PI simulations without any prior assumptions about the PES. In simulations of liquid water modeled by an empirical PES (or force field) under ambient conditions, where quantum effects are known to play a subtle role in influencing experimental observables such as radial distribution functions, we find that RPI can accurately reproduce the results of fully-converged PI simulations, albeit with far fewer PES evaluations. This approach therefore opens the possibility of large-scale PI simulations using ab initio PESs evaluated on-the-fly without the drawbacks of current methods.

  9. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    PubMed

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
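
    scipy's RBFInterpolator offers a thin-plate-spline kernel, which can illustrate the sparse-to-dense deformation interpolation step; the landmark displacements below are synthetic and the call is a generic illustration, not the TPS-HAMMER implementation.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(9)

    # Sparse landmark correspondences: 2-D positions -> 2-D displacements.
    landmarks = rng.uniform(0, 100, (30, 2))
    displacement = np.column_stack([np.sin(landmarks[:, 0] / 20),
                                    np.cos(landmarks[:, 1] / 20)])

    # Thin-plate splines densify the sparse field smoothly and immediately.
    tps = RBFInterpolator(landmarks, displacement, kernel="thin_plate_spline")
    gy, gx = np.mgrid[0:100:50j, 0:100:50j]
    dense = tps(np.column_stack([gy.ravel(), gx.ravel()])).reshape(50, 50, 2)
    print(dense.shape)   # (50, 50, 2) dense deformation field
    ```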

  10. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
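
    A minimal sketch of the final decimation/reconstruction step, with a hand-rolled bilinear interpolation kernel; the array sizes and decimation factor are illustrative, not the patent's.

    ```python
    import numpy as np

    def bilinear_upsample(img, factor):
        """Reconstruct a decimated image with a bilinear interpolation kernel."""
        h, w = img.shape
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        y0 = np.clip(ys.astype(int), 0, h - 2)
        x0 = np.clip(xs.astype(int), 0, w - 2)
        fy = (ys - y0)[:, None]
        fx = (xs - x0)[None, :]
        tl, tr = img[np.ix_(y0, x0)], img[np.ix_(y0, x0 + 1)]
        bl, br = img[np.ix_(y0 + 1, x0)], img[np.ix_(y0 + 1, x0 + 1)]
        return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
                + bl * fy * (1 - fx) + br * fy * fx)

    page = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    decimated = page[::4, ::4]                    # stored decimated-image file
    restored = bilinear_upsample(decimated, 4)    # reconstructed image file
    print(restored.shape)                         # (32, 32)
    ```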

  11. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    NASA Astrophysics Data System (ADS)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
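
    A single-site peaks-over-threshold fit with scipy illustrates the GP building block of the model (the paper's contribution, a regional conditional mixture with per-subregion shape parameters, is not reproduced here); the data and threshold choice are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Synthetic daily "precipitation" with a heavy-ish tail.
    x = stats.gamma.rvs(0.8, scale=12.0, size=20_000, random_state=rng)

    # Peaks-over-threshold: fit a generalized Pareto to excesses over u.
    u = np.quantile(x, 0.95)
    exc = x[x > u] - u
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    print(f"u = {u:.1f}, GP shape = {shape:.3f}, scale = {scale:.1f}")

    # Return level exceeded on average once per m observations.
    m, p_u = 1000, (x > u).mean()
    level = u + stats.genpareto.ppf(1 - 1 / (m * p_u), shape, scale=scale)
    print(f"{m}-observation return level: {level:.1f}")
    ```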

  12. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  13. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  14. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

    As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well suited to interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.

  15. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) treat interpolation degrees of 2, 4, and 8 times as essentially interchangeable (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
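
    A sketch of the "filter prior to interpolate" recommendation using scipy; the pressure map is synthetic, peak pressure stands in for the paper's extracted parameters, and the Gaussian low-pass is an assumed filter, not the study's exact pipeline.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(8)
    pressure = ndimage.gaussian_filter(rng.random((16, 16)), 2) \
        + 0.05 * rng.normal(size=(16, 16))      # noisy seating-pressure map

    # Recommended order: low-pass filter first, then interpolate to 4x
    # density (order=3 is cubic spline interpolation in ndimage.zoom).
    filt_then_interp = ndimage.zoom(ndimage.gaussian_filter(pressure, 1.0), 4, order=3)
    # Reversed order, with sigma scaled by 4 so the physical cut-off matches.
    interp_then_filt = ndimage.gaussian_filter(ndimage.zoom(pressure, 4, order=3), 4.0)

    # A simple extracted feature (stand-in for the paper's parameters):
    print("peak, filter->interp:", filt_then_interp.max())
    print("peak, interp->filter:", interp_then_filt.max())
    ```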

  16. A second-order accurate kinetic-theory-based method for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, Suresh M.

    1986-01-01

    An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.

  17. LLE Review 98 (January-March 2004)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goncharov, V.N.

    2004-08-10

    This volume of the LLE Review, covering January-March 2004, features ''Performance of 1-THz-Bandwidth, 2-D Smoothing by Spectral Dispersion and Polarization Smoothing of High-Power, Solid-State Laser Beams'', by S. P. Regan, J. A. Marozas, R. S. Craxton, J. H. Kelly, W. R. Donaldson, P. A. Jaanimagi, D. Jacobs-Perkins, R. L. Keck, T. J. Kessler, D. D. Meyerhofer, T. C. Sangster, W. Seka, V.A. Smalyuk, S. Skupsky, and J. D. Zuegel (p. 49). Laser-beam smoothing achieved with 1-THz-bandwidth, two-dimensional smoothing by spectral dispersion and polarization smoothing on the 60-beam, 30-kJ, 351-nm OMEGA laser system is reported. These beam-smoothing techniques are directly applicable to direct-drive ignition target designs for the 192-beam, 1.8-MJ, 351-nm National Ignition Facility. Equivalent-target-plane images for constant-intensity laser pulses of varying duration were used to determine the smoothing. The properties of the phase plates, frequency modulators, and birefringent wedges were simulated and found to be in good agreement with the measurements.

  18. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
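
    A sketch of the head interpolation, assuming a square-celled block-centered grid with heads at cell centres; the function name and grid values are invented for illustration.

    ```python
    import numpy as np

    def head_at(heads, dx, x, y):
        """Bilinear interpolation of heads from cell centres of a regular
        block-centered grid to an arbitrary point (x, y).

        heads : (nrow, ncol) heads at cell centres; dx : cell size;
        cell centre (i, j) sits at ((j + 0.5) * dx, (i + 0.5) * dx).
        """
        cx = x / dx - 0.5                      # fractional column coordinate
        cy = y / dx - 0.5                      # fractional row coordinate
        j0 = int(np.clip(np.floor(cx), 0, heads.shape[1] - 2))
        i0 = int(np.clip(np.floor(cy), 0, heads.shape[0] - 2))
        fx, fy = cx - j0, cy - i0
        h = heads
        return ((1 - fy) * ((1 - fx) * h[i0, j0] + fx * h[i0, j0 + 1]) +
                fy * ((1 - fx) * h[i0 + 1, j0] + fx * h[i0 + 1, j0 + 1]))

    # Large-scale model heads (a smooth regional gradient), 100 m cells.
    big = np.add.outer(np.linspace(50, 40, 20), np.linspace(30, 20, 30))
    # Specified head for a point on the embedded model's perimeter:
    print(head_at(big, 100.0, x=1234.0, y=567.0))
    ```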

  19. Interpolation of Reconnaissance Multibeam and Single-Beam Bathymetry Offshore of Milford, Connecticut

    USGS Publications Warehouse

    Poppe, L.J.; Ackerman, S.D.; McMullen, K.Y.; Schattgen, P.T.; Schaer, J.D.; Doran, E.F.

    2008-01-01

    This report releases echosounder data from the northern part of the National Oceanic and Atmospheric Administration (NOAA) hydrographic survey H11044 in Long Island Sound, off Milford, Connecticut. The data have been interpolated and regridded into a complete-coverage data set and image of the sea floor. The grid produced as a result of the interpolation is at 10-m resolution. These data extend an already published set of reprocessed bathymetric data from the southern part of survey H11044. In Long Island Sound, the U.S. Geological Survey, in cooperation with NOAA and the Connecticut Department of Environmental Protection, is producing detailed maps of the sea floor. Part of the current phase of research involves studies of sea-floor topography and its effect on the distributions of sedimentary environments and benthic habitats. This data set provides a more continuous perspective of the sea floor than was previously available. It helps to define topographic variability and benthic-habitat diversity for the area and improves our understanding of oceanographic processes controlling the distribution of sediments and benthic habitats. Inasmuch as precise information on environmental setting is important for selecting sampling sites and accurately interpreting point measurements, this data set can also serve as a base map for subsequent sedimentological, geochemical, and biological research.

  20. Research of beam smoothing technologies using CPP, SSD, and PS

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Su, Jingqin; Hu, Dongxia; Li, Ping; Yuan, Haoyu; Zhou, Wei; Yuan, Qiang; Wang, Yuancheng; Tian, Xiaocheng; Xu, Dangpeng; Dong, Jun; Zhu, Qihua

    2015-02-01

    Precise physical experiments place strict requirements on target illumination uniformity in Inertial Confinement Fusion. To obtain a smoother focal spot and suppress transverse SBS in large-aperture optics, Multi-FM smoothing by spectral dispersion (SSD) was studied in combination with a continuous phase plate (CPP) and polarization smoothing (PS). New forms of PS are being developed to improve laser irradiation uniformity and address LPI problems in indirect-drive laser fusion. The near-field and far-field properties of beams using polarization smoothing, including a birefringent wedge and a polarization control array, were studied and compared. As more parameters can be manipulated in a combined beam smoothing scheme, quad beam smoothing was also studied. Simulation results indicate that, by adjusting the dispersion directions of the one-dimensional (1-D) SSD beams in a quad, two-dimensional SSD can be obtained. Experiments have been performed on the SG-III laser facility using CPP and Multi-FM SSD. The research provides a theoretical and experimental basis for the application of CPP, SSD, and PS on high-power laser facilities.

  1. A Novel Method for Differentiation of Human Mesenchymal Stem Cells into Smooth Muscle-Like Cells on Clinically Deliverable Thermally Induced Phase Separation Microspheres

    PubMed Central

    Parmar, Nina; Ahmadi, Raheleh

    2015-01-01

    Muscle degeneration is a prevalent disease, particularly in aging societies where it has a huge impact on quality of life and incurs colossal health costs. Suitable donor sources of smooth muscle cells are limited, and minimally invasive therapeutic approaches are sought that will augment muscle volume by delivering cells to damaged or degenerated areas of muscle. For the first time, we report the use of highly porous microcarriers produced using thermally induced phase separation (TIPS) to expand and differentiate adipose-derived mesenchymal stem cells (AdMSCs) into smooth muscle-like cells in a format that requires minimal manipulation before clinical delivery. AdMSCs readily attached to the surface of TIPS microcarriers and proliferated while maintained in suspension culture for 12 days. Switching the incubation medium to a differentiation medium containing 2 ng/mL transforming growth factor beta-1 resulted in a significant increase in both the mRNA and protein expression of cell contractile apparatus components caldesmon, calponin, and myosin heavy chains, indicative of a smooth muscle cell-like phenotype. Growth of smooth muscle cells on the surface of the microcarriers caused no change to the integrity of the polymer microspheres, making them suitable as a cell-delivery vehicle. Our results indicate that TIPS microspheres provide an ideal substrate for the expansion and differentiation of AdMSCs into smooth muscle-like cells as well as a microcarrier delivery vehicle for the attached cells ready for therapeutic applications. PMID:25205072

  2. The effect of food bolus location on jaw movement smoothness and masticatory efficiency.

    PubMed

    Molenaar, W N B; Gezelle Meerburg, P J; Luraschi, J; Whittle, T; Schimmel, M; Lobbezoo, F; Peck, C C; Murray, G M; Minami, I

    2012-09-01

    Masticatory efficiency in individuals with extensive tooth loss has been widely discussed. However, little is known about jaw movement smoothness during chewing and the effect of differences in food bolus location on movement smoothness and masticatory efficiency. The aim of this study was to determine whether experimental differences in food bolus location (anterior versus posterior) had an effect on masticatory efficiency and jaw movement smoothness. Jaw movement smoothness was evaluated by measuring jerk-cost (calculated from acceleration) with an accelerometer that was attached to the skin of the mentum of 10 asymptomatic subjects, and acceleration was recorded during chewing on two-colour chewing gum, which was used to assess masticatory efficiency. Chewing was performed under two conditions: posterior chewing (chewing on molars and premolars only) and anterior chewing (chewing on canine and first premolar teeth only). Jerk-cost and masticatory efficiency (calculated as the ratio of unmixed azure colour to the total area of gum, the unmixed fraction) were compared between anterior and posterior chewing with the Wilcoxon signed rank test (two-tailed). Subjects chewed significantly less efficiently during anterior chewing than during posterior chewing (P = 0·0051). There was no significant difference in jerk-cost between anterior and posterior conditions in the opening phase (P = 0·25), or closing phase (P = 0·42). This is the first characterisation of the effect of food bolus location on jaw movement smoothness at the same time as recording masticatory efficiency. The data suggest that anterior chewing decreases masticatory efficiency, but does not influence jerk-cost. © 2012 Blackwell Publishing Ltd.

  3. Force maintenance and myosin filament assembly regulated by Rho-kinase in airway smooth muscle.

    PubMed

    Lan, Bo; Deng, Linhong; Donovan, Graham M; Chin, Leslie Y M; Syyong, Harley T; Wang, Lu; Zhang, Jenny; Pascoe, Christopher D; Norris, Brandon A; Liu, Jeffrey C-Y; Swyngedouw, Nicholas E; Banaem, Saleha M; Paré, Peter D; Seow, Chun Y

    2015-01-01

    Smooth muscle contraction can be divided into two phases: the initial contraction determines the amount of developed force and the second phase determines how well the force is maintained. The initial phase is primarily due to activation of actomyosin interaction and is relatively well understood, whereas the second phase remains poorly understood. Force maintenance in the sustained phase can be disrupted by strains applied to the muscle; the strain causes actomyosin cross-bridges to detach and also the cytoskeletal structure to disassemble in a process known as fluidization, for which the underlying mechanism is largely unknown. In the present study we investigated the ability of airway smooth muscle to maintain force after the initial phase of contraction. Specifically, we examined the roles of Rho-kinase and protein kinase C (PKC) in force maintenance. We found that, for the same degree of initial force inhibition, inhibition of Rho-kinase substantially reduced the muscle's ability to sustain force under static conditions, whereas inhibition of PKC had a minimal effect on sustaining force. Under oscillatory strain, Rho-kinase inhibition caused further decline in force, but again, PKC inhibition had a minimal effect. We also found that Rho-kinase inhibition led to a decrease in the myosin filament mass in the muscle cells, suggesting that one of the functions of Rho-kinase is to stabilize myosin filaments. The results also suggest that dissolution of myosin filaments may be one of the mechanisms underlying the phenomenon of fluidization. These findings can shed light on the mechanism underlying deep inspiration induced bronchodilation. Copyright © 2015 the American Physiological Society.

  4. Force maintenance and myosin filament assembly regulated by Rho-kinase in airway smooth muscle

    PubMed Central

    Lan, Bo; Deng, Linhong; Donovan, Graham M.; Chin, Leslie Y. M.; Syyong, Harley T.; Wang, Lu; Zhang, Jenny; Pascoe, Christopher D.; Norris, Brandon A.; Liu, Jeffrey C.-Y.; Swyngedouw, Nicholas E.; Banaem, Saleha M.; Paré, Peter D.

    2014-01-01

    Smooth muscle contraction can be divided into two phases: the initial contraction determines the amount of developed force and the second phase determines how well the force is maintained. The initial phase is primarily due to activation of actomyosin interaction and is relatively well understood, whereas the second phase remains poorly understood. Force maintenance in the sustained phase can be disrupted by strains applied to the muscle; the strain causes actomyosin cross-bridges to detach and also the cytoskeletal structure to disassemble in a process known as fluidization, for which the underlying mechanism is largely unknown. In the present study we investigated the ability of airway smooth muscle to maintain force after the initial phase of contraction. Specifically, we examined the roles of Rho-kinase and protein kinase C (PKC) in force maintenance. We found that, for the same degree of initial force inhibition, inhibition of Rho-kinase substantially reduced the muscle's ability to sustain force under static conditions, whereas inhibition of PKC had a minimal effect on sustaining force. Under oscillatory strain, Rho-kinase inhibition caused further decline in force, but again, PKC inhibition had a minimal effect. We also found that Rho-kinase inhibition led to a decrease in the myosin filament mass in the muscle cells, suggesting that one of the functions of Rho-kinase is to stabilize myosin filaments. The results also suggest that dissolution of myosin filaments may be one of the mechanisms underlying the phenomenon of fluidization. These findings can shed light on the mechanism underlying deep inspiration induced bronchodilation. PMID:25305246

  5. Adaptive estimation of a time-varying phase with coherent states: Smoothing can give an unbounded improvement over filtering

    NASA Astrophysics Data System (ADS)

    Laverick, Kiarn T.; Wiseman, Howard M.; Dinani, Hossein T.; Berry, Dominic W.

    2018-04-01

    The problem of measuring a time-varying phase, even when the statistics of the variation is known, is considerably harder than that of measuring a constant phase. In particular, the usual bounds on accuracy, such as the $1/(4\bar{n})$ standard quantum limit with coherent states, do not apply. Here, by restricting to coherent states, we are able to analytically obtain the achievable accuracy, the equivalent of the standard quantum limit, for a wide class of phase variation. In particular, we consider the case where the phase has Gaussian statistics and a power-law spectrum equal to $\kappa^{p-1}/|\omega|^p$ for large $\omega$, for some $p > 1$. For coherent states with mean photon flux $N$, we give the quantum Cramér-Rao bound on the mean-square phase error as $[p \sin(\pi/p)]^{-1} (4N/\kappa)^{-(p-1)/p}$. Next, we consider whether the bound can be achieved by an adaptive homodyne measurement in the limit $N/\kappa \gg 1$, which allows the photocurrent to be linearized. Applying the optimal filtering for the resultant linear Gaussian system, we find the same scaling with $N$, but with a prefactor larger by a factor of $p$. By contrast, if we employ optimal smoothing we can exactly obtain the quantum Cramér-Rao bound. That is, contrary to previously considered ($p = 2$) cases of phase estimation, here the improvement offered by smoothing over filtering is not limited to a factor of 2 but rather can be unbounded by a factor of $p$. We also study numerically the performance of these estimators for an adaptive measurement in the limit where $N/\kappa$ is not large and find a more complicated picture.
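
    Restated in display form (my reconstruction of the abstract's garbled inline math; the notation follows the abstract):

```latex
% Phase with Gaussian statistics and a power-law spectrum (p > 1):
%   S_\phi(\omega) \simeq \kappa^{p-1} / |\omega|^p  for large \omega.
% Quantum Cramer-Rao bound on the mean-square phase error for
% coherent states with mean photon flux N:
\[
  \langle (\check{\phi} - \phi)^2 \rangle \;\ge\;
  \bigl[ p \sin(\pi/p) \bigr]^{-1}
  \left( \frac{4N}{\kappa} \right)^{-(p-1)/p}.
\]
% Optimal filtering achieves the same N-scaling with a prefactor p
% times larger; optimal smoothing attains the bound itself.
```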

  6. Tidal estimation in the Atlantic and Indian Oceans, 3 deg x 3 deg solution

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Rao, Desiraju B.; Steenrod, Stephen D.

    1987-01-01

    An estimation technique was developed to extrapolate tidal amplitudes and phases over entire ocean basins using existing gauge data and the altimetric measurements provided by satellite oceanography. The technique was previously tested. Some results obtained by using a 3 deg by 3 deg grid are presented. The functions used in the interpolation are the eigenfunctions of the velocity (Proudman functions) which are computed numerically from a knowledge of the basin's bottom topography, the horizontal plan form and the necessary boundary conditions. These functions are characteristic of the particular basin. The gravitational normal modes of the basin are computed as part of the investigation; they are used to obtain the theoretical forced solutions for the tidal constituents. The latter can provide the simulated data for the testing of the method and serve as a guide in choosing the most energetic functions for the interpolation.
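
    The expansion-and-fit step can be illustrated generically: given basis functions evaluated at the gauge locations, the modal coefficients follow from least squares. A minimal NumPy sketch (the Proudman functions themselves require the basin's bathymetry and boundary conditions, so a random basis matrix stands in for them here):

```python
import numpy as np

# B[i, k] = value of basis function k at gauge i (assumed given; in
# the paper these are numerically computed Proudman functions).
rng = np.random.default_rng(0)
n_gauges, n_modes = 50, 8
B = rng.standard_normal((n_gauges, n_modes))
obs = rng.standard_normal(n_gauges)        # tidal observations

# Least-squares fit of modal coefficients to the gauge data.
coeffs, *_ = np.linalg.lstsq(B, obs, rcond=None)

# With the coefficients fixed, the tide can be evaluated anywhere
# the basis functions are known, i.e., extrapolated basin-wide.
```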

  7. Solubility relations in the system sodium chloride-ferrous chloride-water between 25 and 70 °C at 1 atm

    USGS Publications Warehouse

    Chou, I.-Ming; Phan, L.D.

    1985-01-01

    Solubility relations in the ternary system NaCl-FeCl2-H2O have been determined by the visual polythermal method at 1 atm from 20 to 85 °C along six composition lines. These six composition lines are defined by mixing FeCl2·4H2O with six aqueous NaCl solutions containing 5, 10, 11, 15, 20, and 25 wt % of NaCl, respectively. The solid phases encountered in these experiments were NaCl and FeCl2·4H2O. The maximum uncertainties in these measurements are ±0.02 wt % NaCl and ±0.15 °C. The data along each composition line were regressed to a smooth curve when only one solid phase was stable. When two solids were stable along a composition line, the data were regressed to two smooth curves, the intersection of which indicated the point where the two solids coexisted. The maximum deviation of the measured solubilities from the smoothed curves is 0.14 wt % FeCl2. Isothermal solubilities of halite and FeCl2·4H2O were calculated from these smoothed curves at 25, 50, and 70 °C.

  8. Persistent gut motor dysfunction in a murine model of T-cell-induced enteropathy.

    PubMed

    Mizutani, T; Akiho, H; Khan, W I; Murao, H; Ogino, H; Kanayama, K; Nakamura, K; Takayanagi, R

    2010-02-01

    Inflammatory bowel disease (IBD) patients in remission often experience irritable bowel syndrome (IBS)-like symptoms. We investigated the mechanism for intestinal muscle hypercontractility seen in T-cell-induced enteropathy in recovery phase. BALB/c mice were treated with an anti-CD3 antibody (100 microg per mouse) and euthanized at varying days post-treatment to investigate the histological changes, longitudinal smooth muscle cell contraction, cytokines (Th1, Th2 cytokines, TNF-alpha) and serotonin (5-HT)-expressing enterochromaffin cell numbers in the small intestine. The role of 5-HT in anti-CD3 antibody-induced intestinal muscle function in recovery phase was assessed by inhibiting 5-HT synthesis using 4-chloro-DL-phenylalanine (PCPA). Small intestinal tissue damage was observed from 24 h after the anti-CD3 antibody injection, but had resolved by day 5. Carbachol-induced smooth muscle cell contractility was significantly increased from 4 h after injection, and this muscle hypercontractility was evident in recovery phase (at day 7). Th2 cytokines (IL-4, IL-13) were significantly increased from 4 h to day 7. 5-HT-expressing cells in the intestine were increased from day 1 to day 7. The 5-HT synthesis inhibitor PCPA decreased the anti-CD3 antibody-induced muscle hypercontractility in recovery phase. Intestinal muscle hypercontractility in remission is maintained at the smooth muscle cell level. Th2 cytokines and 5-HT in the small intestine contribute to the maintenance of the altered muscle function in recovery phase.

  9. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving image quality in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted, and the target interpolated image is obtained through wavelet reconstruction. We perform 3D interpolation on real computed tomography (CT) images; compared with other interpolation methods, the proposed method is verified to be effective and superior.

  10. The Thermodynamic Limit in Mean Field Spin Glass Models

    NASA Astrophysics Data System (ADS)

    Guerra, Francesco; Toninelli, Fabio Lucio

    We present a simple strategy in order to show the existence and uniqueness of the infinite volume limit of thermodynamic quantities, for a large class of mean field disordered models, such as the Sherrington-Kirkpatrick model and the Derrida p-spin model. The main argument is based on a smooth interpolation between a large system, made of N spin sites, and two similar but independent subsystems, made of N1 and N2 sites, respectively, with N1+N2=N. The quenched average of the free energy turns out to be subadditive with respect to the size of the system. This gives immediately convergence of the free energy per site, in the infinite volume limit. Moreover, a simple argument, based on concentration of measure, gives the almost sure convergence, with respect to the external noise. Similar results hold also for the ground state energy per site.
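
    The interpolation argument can be made concrete. The following is a schematic LaTeX reconstruction for the Sherrington-Kirkpatrick case, based on my reading of the abstract rather than on the paper's own equations:

```latex
% Compare a system of N = N_1 + N_2 spins with two independent
% subsystems of N_1 and N_2 spins via the interpolating Hamiltonian
\[
  H_t(\sigma) \;=\; \sqrt{t}\, H_N(\sigma)
  \;+\; \sqrt{1-t}\,\bigl( H_{N_1}(\sigma^{(1)})
  + H_{N_2}(\sigma^{(2)}) \bigr),
  \qquad t \in [0,1].
\]
% Gaussian integration by parts shows that the quenched free energy
% of the interpolated system is monotone in t, which yields the
% subadditivity stated in the abstract,
\[
  N F_N \;\le\; N_1 F_{N_1} + N_2 F_{N_2},
\]
% and Fekete's lemma then gives convergence of F_N as N -> \infty.
```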

  11. Topological resolution of gauge theory singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saracco, Fabio; Tomasiello, Alessandro; Torroba, Gonzalo

    2013-08-21

    Some gauge theories with Coulomb branches exhibit singularities in perturbation theory, which are usually resolved by nonperturbative physics. In string theory this corresponds to the resolution of timelike singularities near the core of orientifold planes by effects from F or M theory. We propose a new mechanism for resolving Coulomb branch singularities in three-dimensional gauge theories, based on Chern-Simons interactions. This is illustrated in a supersymmetric SU(2) Yang-Mills-Chern-Simons theory. We calculate the one-loop corrections to the Coulomb branch of this theory and find a result that interpolates smoothly between the high-energy metric (that would exhibit the singularity) and a regular singularity-free low-energy result. We suggest possible applications to singularity resolution in string theory and speculate a relationship to a similar phenomenon for the orientifold six-plane in massive IIA supergravity.

  12. Evaluation of Mars Entry Reconstructed Trajectories Based on Hypothetical 'Quick-Look' Entry Navigation Data

    NASA Technical Reports Server (NTRS)

    Pastor, P. Rick; Bishop, Robert H.; Striepe, Scott A.

    2000-01-01

    A first-order simulation analysis of the navigation accuracy expected from various Navigation Quick-Look data sets is performed. Here, quick-look navigation data are observations obtained from hypothetical telemetered data transmitted on the fly during a Mars probe's atmospheric entry. In this simulation study, navigation data consist of 3-axis accelerometer sensor and attitude information data. Three entry vehicle guidance types are studied: I. a maneuvering entry vehicle (as with Mars 01 guidance, where angle of attack and bank angle are controlled); II. a zero angle-of-attack controlled entry vehicle (as with Mars 98); and III. a ballistic, or spin-stabilized, entry vehicle (as with Mars Pathfinder). For each type, sensitivity to progressively undersampled navigation data and the inclusion of sensor errors are characterized. Attempts to mitigate the reconstructed trajectory errors, including smoothing, interpolation, and changing integrator characteristics, are also studied.

  13. “Kerrr” black hole: The lord of the string

    NASA Astrophysics Data System (ADS)

    Smailagic, Anais; Spallucci, Euro

    2010-04-01

    Kerrr in the title is not a typo. The third “r” stands for regular, in the sense of pathology-free rotating black hole. We exhibit a long-sought, exact, Kerr-like solution of the Einstein equations with novel features: (i) no curvature ring singularity; (ii) no “anti-gravity” universe with causality-violating time-like closed world-lines; (iii) no “super-luminal” matter disk. The ring singularity is replaced by a classical, circular, rotating string with Planck tension representing the inner engine driving the rotation of all the surrounding matter. The resulting geometry is regular and smoothly interpolates among an inner Minkowski space, a borderline de Sitter region, and an outer Kerr universe. The key ingredient to cure all unphysical features of the ordinary Kerr black hole is the choice of a “non-commutative geometry inspired” matter source as the input for the Einstein equations, in analogy with spherically symmetric black holes described in earlier works.

  14. Video Animation of Ocean Topography From TOPEX/POSEIDON

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Leconte, Denis; Pihos, Greg; Davidson, Roger; Kruizinga, Gerhard; Tapley, Byron

    1993-01-01

    Three video loops showing various aspects of the dynamic ocean topography obtained from the TOPEX/POSEIDON radar altimetry data will be presented. The first shows the temporal change of the global ocean topography during the first year of the mission. The time-averaged mean is removed to reveal the temporal variabilities. Temporal interpolation is performed to create daily maps for the animation. A spatial smoothing is also performed to retain only the large-scale features. Gyre-scale seasonal changes are the main features. The second shows the temporal evolution of the Gulf Stream. The high resolution gravimetric geoid of Rapp is used to obtain the absolute ocean topography. Simulated drifters are used to visualize the flow pattern of the current. Meanders and rings of the current are the main features. The third is an animation of the global ocean topography on a spherical earth. The JGM-2 geoid is used to obtain the ocean topography...

  15. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
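
    As an illustration of the derivative-matrix idea, here is a small NumPy sketch that builds the differentiation matrix for an arbitrary set of collocation points from barycentric weights (a standard construction consistent with, but not taken from, the paper):

```python
import numpy as np

def diff_matrix(x):
    """Collocation differentiation matrix at nodes x, built from
    barycentric weights: D @ f approximates f'(x)."""
    n = len(x)
    # Barycentric weights: w_i = 1 / prod_{j != i} (x_i - x_j)
    w = np.array([1.0 / np.prod(x[i] - np.delete(x, i))
                  for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])   # rows must annihilate constants
    return D

# The approximate derivative is a matrix-times-vector multiply.
x = np.cos(np.pi * np.arange(17) / 16)         # Chebyshev points
f = np.exp(x)
print(np.max(np.abs(diff_matrix(x) @ f - f)))  # spectrally small
```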

  16. A Tracking Analyst for large 3D spatiotemporal data from multiple sources (case study: Tracking volcanic eruptions in the atmosphere)

    NASA Astrophysics Data System (ADS)

    Gad, Mohamed A.; Elshehaly, Mai H.; Gračanin, Denis; Elmongui, Hicham G.

    2018-02-01

    This research presents a novel Trajectory-based Tracking Analyst (TTA) that can track and link spatiotemporally variable data from multiple sources. The proposed technique uses trajectory information to determine the positions of time-enabled and spatially variable scatter data at any given time through a combination of along trajectory adjustment and spatial interpolation. The TTA is applied in this research to track large spatiotemporal data of volcanic eruptions (acquired using multi-sensors) in the unsteady flow field of the atmosphere. The TTA enables tracking injections into the atmospheric flow field, the reconstruction of the spatiotemporally variable data at any desired time, and the spatiotemporal join of attribute data from multiple sources. In addition, we were able to create a smooth animation of the volcanic ash plume at interactive rates. The initial results indicate that the TTA can be applied to a wide range of multiple-source data.
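
    A minimal sketch of the core idea, reconstructing a moving scatter point's position at an arbitrary query time by interpolating along its trajectory (NumPy, with an invented toy trajectory; the TTA's along-trajectory adjustment combined with spatial interpolation across neighboring points is more elaborate):

```python
import numpy as np

def position_at(t_query, t, xyz):
    """Linearly interpolate a trajectory (t[i] -> xyz[i]) at t_query.
    Along-trajectory step only; the TTA also interpolates spatially
    between neighboring trajectories."""
    return np.array([np.interp(t_query, t, xyz[:, k]) for k in range(3)])

t = np.array([0.0, 600.0, 1200.0])        # observation times (s)
xyz = np.array([[0.0, 0.0, 5.0],          # ash-parcel positions (km)
                [3.0, 1.0, 7.0],
                [5.5, 2.5, 9.0]])
print(position_at(900.0, t, xyz))         # reconstructed at t = 900 s
```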

  17. Controlling Surface Morphology and Circumventing Secondary Phase Formation in Non-polar m-GaN by Tuning Nitrogen Activity

    NASA Astrophysics Data System (ADS)

    Chang, C. W.; Wadekar, P. V.; Guo, S. S.; Cheng, Y. J.; Chou, M.; Huang, H. C.; Hsieh, W. C.; Lai, W. C.; Chen, Q. Y.; Tu, L. W.

    2018-01-01

    For the development of non-polar nitride-based optoelectronic devices, high-quality films with smooth surfaces, free of defects or clusters, are critical. In this work, the mechanisms governing the topography and single-phase epitaxy of non-polar m-plane gallium nitride (m-GaN) thin films are studied. The samples were grown using plasma-assisted molecular beam epitaxy on m-plane sapphire substrates. Growth of pure m-GaN thin films with smooth surfaces is possible at low radio frequency powers and high growth temperatures, as judged by high-resolution x-ray diffraction, field emission scanning electron microscopy, and atomic force microscopy measurements. Defect types and densities are quantified using transmission electron microscopy, while Raman spectroscopy was used to analyze the in-plane stress in the thin films, which matches the lattice-mismatch analysis. Energy dispersive spectroscopy and cathodoluminescence support a congruent growth and a dominant near-band-edge emission. From the analysis, a narrow growth window is discovered wherein epitaxial growth of pure m-plane GaN samples, free of secondary phases, with narrow rocking curves and considerably smooth surfaces is successfully demonstrated.

  18. Printability and inspectability of programmed pit defects on the masks in EUV lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, I.-Y.; Seo, H.-S.; Ahn, B.-S.

    2010-03-12

    The printability and inspectability of phase defects in EUVL masks originating from substrate pits were investigated. For this purpose, PDMs with programmed pits on the substrate were fabricated using different ML sources from several suppliers. Simulations with 32-nm HP L/S show that substrate pits below ~20 nm in depth would not be printed on the wafer if they could be smoothed by the ML process down to ~1 nm in depth on the ML surface. Through the investigation of inspectability for programmed pits, the minimum pit sizes detected by KLA6xx, AIT, and M7360 depend on ML smoothing performance. Furthermore, printability results for pit defects also correlate with smoothed pit sizes. AIT results for a patterned mask with 32-nm HP L/S indicate that the minimum printable size of pits could be ~28.3 nm SEVD. In addition, pits became more printable as defocus moved in the (-) direction. Consequently, the printability of phase defects strongly depends on their locations with respect to those of absorber patterns. This indicates that defect compensation by pattern shift could be a key technique to realize zero printable phase defects in EUVL masks.

  19. Interpolate with DIVA and view the products in OceanBrowser : what's up ?

    NASA Astrophysics Data System (ADS)

    Watelet, Sylvain; Barth, Alexander; Beckers, Jean-Marie; Troupin, Charles

    2017-04-01

    The Data-Interpolating Variational Analysis (DIVA) software is a statistical tool designed to reconstruct a continuous field from discrete measurements. This method is based on the numerical implementation of the Variational Inverse Model (VIM), which consists of a minimization of a cost function, allowing the analyzed field to be chosen so that it best fits the data sets without presenting unrealistically strong variations. The problem is solved efficiently using a finite-element method. This method, equivalent to Optimal Interpolation, is particularly suited to deal with irregularly-spaced observations and produces outputs on a regular grid (2D, 3D or 4D). The results are stored in NetCDF files, the most widespread format in the earth sciences community. OceanBrowser is a web service that allows one to visualize gridded fields on-line. Within the SeaDataNet and EMODNET (Chemical lot) projects, several national ocean data centers have created gridded climatologies of different ocean properties using the data analysis software DIVA. In order to provide a common viewing service for those interpolated products, the GHER has developed OceanBrowser, which is based on open standards from the Open Geospatial Consortium (OGC), in particular the Web Map Service (WMS) and Web Feature Service (WFS). These standards define a protocol for describing, requesting and querying two-dimensional maps at a given depth and time. DIVA and OceanBrowser are both software tools that are continuously upgraded and distributed for free through frequent version releases. The development is funded by the EMODnet and SeaDataNet projects and includes many discussions and feedback from the user community. Here, we present two recent major upgrades. First, we have implemented a "customization" of DIVA analyses following the sea bottom, using the bottom depth gradient as a new source of information. The weaker the slope of the ocean bottom, the larger the correlation length. Since the correlation length is associated with the propagation of information, it is harder to interpolate through bottom topographic "barriers" such as the continental slope and easier to do so in the perpendicular direction. Although realistic for most applications, this behaviour can always be disabled by the user. Second, we have added some combined products in OceanBrowser, covering all European seas at once. Based on the analyses performed by the other EMODnet partners using DIVA on five zones (Atlantic, North Sea, Baltic Sea, Black Sea, Mediterranean Sea), we have computed a single European product for five variables: ammonium, chlorophyll-a, dissolved oxygen concentration, phosphate and silicate. At the boundaries, a smoothing filter was used to remove possible discrepancies between regional analyses. Our European combined product is available for all seasons and several depths. This is the first step towards the use of a common reference field for all European seas when running DIVA.

  20. Creating a monthly time series of the potentiometric surface in the Upper Floridan aquifer, Northern Tampa Bay area, Florida, January 2000-December 2009

    USGS Publications Warehouse

    Lee, Terrie M.; Fouad, Geoffrey G.

    2014-01-01

    In Florida’s karst terrain, where groundwater and surface waters interact, a mapping time series of the potentiometric surface in the Upper Floridan aquifer offers a versatile metric for assessing the hydrologic condition of both the aquifer and overlying streams and wetlands. Long-term groundwater monitoring data were used to generate a monthly time series of potentiometric surfaces in the Upper Floridan aquifer over a 573-square-mile area of west-central Florida between January 2000 and December 2009. Recorded groundwater elevations were collated for 260 groundwater monitoring wells in the Northern Tampa Bay area, and a continuous time series of daily observations was created for 197 of the wells by estimating missing daily values through regression relations with other monitoring wells. Kriging was used to interpolate the monthly average potentiometric-surface elevation in the Upper Floridan aquifer over a decade. The mapping time series gives spatial and temporal coherence to groundwater monitoring data collected continuously over the decade by three different organizations, but at various frequencies. Further, the mapping time series describes the potentiometric surface beneath parts of six regionally important stream watersheds and 11 municipal well fields that collectively withdraw about 90 million gallons per day from the Upper Floridan aquifer. Monthly semivariogram models were developed using monthly average groundwater levels at wells. Kriging was used to interpolate the monthly average potentiometric-surface elevations and to quantify the uncertainty in the interpolated elevations. Drawdown of the potentiometric surface within well fields was likely the cause of a characteristic decrease and then increase in the observed semivariance with increasing lag distance. This characteristic made use of the hole effect model appropriate for describing the monthly semivariograms and the interpolated surfaces. Spatial variance reflected in the monthly semivariograms decreased markedly between 2002 and 2003, timing that coincided with decreases in well-field pumping. Cross-validation results suggest that the kriging interpolation may smooth over the drawdown of the potentiometric surface near production wells. The groundwater monitoring network of 197 wells yielded an average kriging error in the potentiometric-surface elevations of 2 feet or less over approximately 70 percent of the map area. Additional data collection within the existing monitoring network of 260 wells and near selected well fields could reduce the error in individual months. Reducing the kriging error in other areas would require adding new monitoring wells. Potentiometric-surface elevations fluctuated by as much as 30 feet over the study period, and the spatially averaged elevation for the entire surface rose by about 2 feet over the decade. Monthly potentiometric-surface elevations describe the lateral groundwater flow patterns in the aquifer and are usable at a variety of spatial scales to describe vertical groundwater recharge and discharge conditions for overlying surface-water features.
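
    For readers who want to experiment with the same workflow, a minimal ordinary-kriging sketch using the open-source PyKrige package is given below (synthetic well data; the report's semivariogram fitting, hole-effect parameters, and monthly processing are not reproduced):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Synthetic monthly-average heads at scattered monitoring wells.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 30, 60), rng.uniform(0, 20, 60)      # miles
head = 40.0 + 0.5 * x - 0.3 * y + rng.normal(0, 1.0, 60)   # feet

# "hole-effect" mirrors the report's variogram choice; PyKrige also
# offers spherical, exponential, gaussian, linear, and power models.
ok = OrdinaryKriging(x, y, head, variogram_model="hole-effect")
gx = np.arange(0.0, 30.0, 1.0)
gy = np.arange(0.0, 20.0, 1.0)
surface, variance = ok.execute("grid", gx, gy)  # surface + kriging error
```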

  1. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development, and rapid development. There are cross effects among 11 clustering groups, and the main research themes converge on spatial interpolation theory, practical applications and case studies of spatial interpolation, and the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research framework; it is strongly interdisciplinary and widely applied in many fields.

  2. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.

    2005-02-01

    A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases, the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and relatively easy to implement.
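
    The fitting step is conceptually simple. Below is a toy sketch of interpolating sampled energies with a feedforward network, using scikit-learn's MLPRegressor with early stopping in place of the paper's custom NN/trajectory machinery (the one-dimensional "surface" and all settings are invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "potential surface": energies sampled along one coordinate.
rng = np.random.default_rng(2)
r = rng.uniform(0.8, 3.0, (2000, 1))                   # bond length
E = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6).ravel()   # Lennard-Jones

# Early stopping holds out a validation split, echoing the paper's
# training procedure; alpha adds L2 regularization.
net = MLPRegressor(hidden_layer_sizes=(40, 40), alpha=1e-5,
                   early_stopping=True, max_iter=5000, random_state=0)
net.fit(r, E)

r_test = np.linspace(0.9, 2.9, 5).reshape(-1, 1)
print(net.predict(r_test))   # interpolated energies off the sample grid
```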

  3. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, the long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved substantial acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value for image interpolation applications.
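
    For orientation, here is a small NumPy sketch of the CPU baseline the paper accelerates, Gaussian RBF interpolation of scattered samples (shape parameter and data invented; the paper's CUDA kernels, coalesced access, and shared-memory tiling are not shown):

```python
import numpy as np

def grbf_interpolate(centers, values, queries, eps=2.0):
    """Gaussian RBF interpolation: solve for weights at the sample
    points, then evaluate at the query points. CPU reference only."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.exp(-(eps ** 2) * d2)            # interpolation matrix
    w = np.linalg.solve(A, values)          # RBF weights
    q2 = ((queries[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-(eps ** 2) * q2) @ w

pts = np.random.default_rng(3).uniform(0, 1, (200, 2))
vals = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])
grid = np.stack(np.meshgrid(np.linspace(0, 1, 32),
                            np.linspace(0, 1, 32)), -1).reshape(-1, 2)
img = grbf_interpolate(pts, vals, grid).reshape(32, 32)
```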

  4. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
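
    In a modern toolchain, the three resampling schemes compared here correspond to spline orders 0, 1, and 3 in scipy.ndimage.map_coordinates; a small sketch (synthetic image, with a sub-pixel shift standing in for the geographic correction):

```python
import numpy as np
from scipy.ndimage import map_coordinates

img = np.random.default_rng(4).uniform(0, 255, (64, 64))

# Target pixel centers mapped back into source coordinates
# (a 0.3-pixel shift stands in for the geometric correction).
rows, cols = np.mgrid[0:64, 0:64].astype(float)
coords = np.array([rows + 0.3, cols + 0.3])

nearest = map_coordinates(img, coords, order=0)   # nearest neighbor
bilinear = map_coordinates(img, coords, order=1)  # bilinear
bicubic = map_coordinates(img, coords, order=3)   # cubic spline, a
# close stand-in for the classic bicubic convolution kernel
```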

  5. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model order and model complexity reductions have been investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular perturbations) are inadequate for use in developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing the approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  6. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    An image interpolation problem is encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology may be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
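
    To make the distinction concrete, here is a toy NumPy contrast between direct (temporal-average) interpolation and motion-compensated interpolation of a mid-frame between two frames of a translating pattern (the motion vector is assumed known; estimating it is the hard part the paper surveys):

```python
import numpy as np

rng = np.random.default_rng(5)
f0 = rng.uniform(0, 1, (8, 64))
f1 = np.roll(f0, 4, axis=1)        # frame 1: pattern moved 4 px right

# Direct interpolation: average the two frames (ghosts moving edges).
direct = 0.5 * (f0 + f1)

# Motion-compensated: shift each frame halfway along the (known)
# motion vector before averaging.
mc = 0.5 * (np.roll(f0, 2, axis=1) + np.roll(f1, -2, axis=1))

true_mid = np.roll(f0, 2, axis=1)
print(np.abs(direct - true_mid).mean(),   # nonzero ghosting error
      np.abs(mc - true_mid).mean())       # exactly zero here
```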

  7. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
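
    Of the methods compared, IDW is the simplest to write down; a minimal NumPy sketch follows, with the power parameter p matching the paper's IDW (power = 1, 2, 3) comparison (the observation data are invented):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2):
    """Inverse distance weighting: each prediction is a weighted
    average of observations, with weights = 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)          # guard against zero distance
    w = 1.0 / d ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pb = np.array([120.0, 85.0, 60.0, 200.0])        # e.g. Pb (mg/kg)
query = np.array([[0.5, 0.5], [0.9, 0.9]])
for p in (1, 2, 3):                              # as in the comparison
    print(p, idw(obs, pb, query, power=p))
```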

  8. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
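
    For orientation, here is the standard (non-adaptive) SPH density summation the scheme builds on, in 1D with a cubic spline kernel; the paper's contribution, an adaptive kernel estimation of the density, replaces the fixed smoothing length h used below:

```python
import numpy as np

def w_cubic(q, h):
    """Standard 1D cubic spline kernel, normalization 2/(3h)."""
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(x, m, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h)  (fixed h; the adaptive
    estimation would instead adjust h from the local particle data)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * w_cubic(q, h)).sum(axis=1)

x = np.linspace(0.0, 1.0, 200)          # equal-mass particles
m = np.full_like(x, 1.0 / len(x))       # total mass 1 on [0, 1]
print(sph_density(x, m, h=2.0 * (x[1] - x[0]))[100])  # ~1 mid-domain
```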

  9. Weighted Non-linear Compact Schemes for the Direct Numerical Simulation of Compressible, Turbulent Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Baeder, James D.

    2014-01-21

    A new class of compact-reconstruction weighted essentially non-oscillatory (CRWENO) schemes was introduced (Ghosh and Baeder in SIAM J Sci Comput 34(3): A1678–A1706, 2012) with high spectral resolution and essentially non-oscillatory behavior across discontinuities. The CRWENO schemes use solution-dependent weights to combine lower-order compact interpolation schemes and yield a high-order compact scheme for smooth solutions and a non-oscillatory compact scheme near discontinuities. The new schemes result in lower absolute errors, and improved resolution of discontinuities and smaller length scales, compared to the weighted essentially non-oscillatory (WENO) scheme of the same order of convergence. Several improvements to the smoothness-dependent weights, proposed in the literature in the context of the WENO schemes, address the drawbacks of the original formulation. This paper explores these improvements in the context of the CRWENO schemes and compares the different formulations of the non-linear weights for flow problems with small length scales as well as discontinuities. Simplified one- and two-dimensional inviscid flow problems are solved to demonstrate the numerical properties of the CRWENO schemes and its different formulations. Canonical turbulent flow problems—the decay of isotropic turbulence and the shock-turbulence interaction—are solved to assess the performance of the schemes for the direct numerical simulation of compressible, turbulent flows.

  10. Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility

    PubMed Central

    Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.

    2009-01-01

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456

  11. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
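
    The ill-conditioning the paper addresses is easy to reproduce; a short NumPy sketch shows the condition number of the Gaussian RBF interpolation matrix blowing up as the shape parameter tends to zero (the flat regime), which is what RBF-Direct runs into and RBF-RA bypasses:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 15)                   # 1D nodes
r2 = (x[:, None] - x[None, :]) ** 2

for eps in (2.0, 1.0, 0.5, 0.25, 0.1):
    A = np.exp(-(eps ** 2) * r2)                 # Gaussian RBF matrix
    print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")
# cond(A) grows without bound as eps -> 0, even though the
# underlying interpolant itself remains well behaved.
```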

  12. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, Digital Elevation Model (DEM) data for Fuyang were combined with the sample data to explore the correlative relationships among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129

  13. Phase gradient imaging for positive contrast generation to superparamagnetic iron oxide nanoparticle-labeled targets in magnetic resonance imaging.

    PubMed

    Zhu, Haitao; Demachi, Kazuyuki; Sekino, Masaki

    2011-09-01

    Positive contrast imaging methods produce enhanced signal at large magnetic field gradients in magnetic resonance imaging. Several postprocessing algorithms, such as susceptibility gradient mapping and phase gradient mapping methods, have been applied for positive contrast generation to detect cells targeted by superparamagnetic iron oxide nanoparticles. In the phase gradient mapping methods, the smoothness condition has to be satisfied to keep the phase gradient unwrapped. Moreover, there has been no discussion of the truncation artifact associated with the differentiation algorithm, which is performed in k-space by multiplication with the frequency value. In this work, phase gradient methods are discussed by considering the wrapping problem when the smoothness condition is not satisfied. A region-growing unwrapping algorithm is used in the phase gradient image to solve the problem. In order to reduce the truncation artifact, a cosine function is applied in k-space to eliminate the abrupt change at the boundaries. Simulation, phantom, and in vivo experimental results demonstrate that the modified phase gradient mapping methods may produce improved positive contrast effects by reducing truncation or wrapping artifacts. Copyright © 2011 Elsevier Inc. All rights reserved.
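
    A common wrap-free way to obtain a phase-gradient map is to compute the phase difference through the product of neighboring complex values rather than differencing wrapped phases directly; a generic NumPy sketch is below (not the paper's region-growing or cosine-windowed variant):

```python
import numpy as np

def phase_gradient(cimg, axis=0):
    """Phase difference between neighboring voxels of a complex
    image, computed as angle(z1 * conj(z0)); the result lies in
    (-pi, pi] regardless of how the raw phase wraps."""
    z0 = np.take(cimg, np.arange(cimg.shape[axis] - 1), axis=axis)
    z1 = np.take(cimg, np.arange(1, cimg.shape[axis]), axis=axis)
    return np.angle(z1 * np.conj(z0))

# Synthetic complex image with a strong linear phase ramp.
yy, xx = np.mgrid[0:64, 0:64]
cimg = np.exp(1j * (0.9 * xx + 0.2 * yy))
gx = phase_gradient(cimg, axis=1)   # ~0.9 rad/voxel everywhere
print(gx.mean())
```

    This trick is valid only while the true per-voxel phase step stays below π in magnitude, which is essentially the smoothness condition the abstract discusses.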

  14. A novel approach to controlling the phase angle of a variable switched reluctance motor for electric vehicle propulsion using the statistic matrix norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holling, G.H.

    1994-12-31

    Variable switched reluctance (VSR) motors are gaining importance for industrial applications. The paper will introduce a novel approach to simplify the computation involved in the control of VSR motors. Results are shown that validate the approach and demonstrate superior performance compared to tabulated control parameters with linear interpolation, which are widely used in implementations.

  15. An analytical method for computing voxel S values for electrons and photons.

    PubMed

    Amato, Ernesto; Minutoli, Fabio; Pacilio, Massimiliano; Campenni, Alfredo; Baldari, Sergio

    2012-11-01

    The use of voxel S values (VSVs) is perhaps the most common approach to radiation dosimetry for nonuniform distributions of activity within organs or tumors. However, VSVs are currently available only for a limited number of voxel sizes and radionuclides. The objective of this study was to develop a general method to evaluate them for any spectrum of electrons and photons in any cubic voxel dimension of practical interest for clinical dosimetry in targeted radionuclide therapy. The authors developed a Monte Carlo simulation in Geant4 in order to evaluate the energy deposited per disintegration (E(dep)) in a voxelized region of soft tissue from monoenergetic electrons (10-2000 keV) or photons (10-1000 keV) homogeneously distributed in the central voxel, considering voxel dimensions ranging from 3 mm to 10 mm. E(dep) was represented as a function of a dimensionless quantity termed the "normalized radius," R(n) = R∕l, where l is the voxel size and R is the distance from the origin. The authors introduced two parametric functions in order to fit the electron and photon results, and they interpolated the parameters to derive VSVs for any energy and voxel side within the ranges mentioned above. In order to validate the results, the authors determined VSV for two radionuclides ((131)I and (89)Sr) and two voxel dimensions and they compared them with reference data. A validation study in a simple sphere model, accounting for tissue inhomogeneities, is presented. The E(dep)(R(n)) for both monoenergetic electrons and photons exhibit a smooth variation with energy and voxel size, implying that VSVs for monoenergetic electrons or photons may be derived by interpolation over the range of energies and dimensions considered. By integration, S values for continuous emission spectra from β(-) decay may be derived as well. The approach allows the determination of VSVs for monoenergetic (Auger or conversion) electrons and (x-ray or gamma-ray) photons by means of two functions whose parameters can be interpolated from tabular data provided. Through integration, it is possible to generalize the method to any continuous (beta) spectrum, allowing to calculate VSVs for any electron and photon emitter in a voxelized structure.
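
    The final step, generalizing monoenergetic results to a continuous spectrum, amounts to a weighted integral over energy; a schematic NumPy/SciPy sketch follows (the tabulated VSV-versus-energy values and the beta spectrum are placeholders, not the paper's data):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

# Placeholder table: S value of one voxel vs. electron energy (keV).
E_tab = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 2000.0])
svals = np.array([3e-4, 1.1e-3, 2.0e-3, 7.5e-3, 1.3e-2, 2.2e-2])
s_of_E = interp1d(E_tab, svals, kind="cubic")

# Placeholder beta spectrum n(E) (probability density per keV).
E = np.linspace(10.0, 600.0, 500)
n = E * (600.0 - E) ** 2
n /= trapezoid(n, E)                    # normalize the spectrum

# S value for the continuous emitter = integral of n(E) * S(E) dE.
S_beta = trapezoid(n * s_of_E(E), E)
print(S_beta)
```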

  16. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
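
    LIP itself is an ANSI C library; for orientation, the simplest of its methods, piecewise bilinear interpolation on a rectangular mesh, can be sketched as follows (illustrative Python, not the LIP API):

      import numpy as np

      def bilinear(xgrid, ygrid, f, x, y):
          """Piecewise bilinear interpolation of gridded f at scalar (x, y)."""
          i = int(np.clip(np.searchsorted(xgrid, x) - 1, 0, len(xgrid) - 2))
          j = int(np.clip(np.searchsorted(ygrid, y) - 1, 0, len(ygrid) - 2))
          tx = (x - xgrid[i]) / (xgrid[i + 1] - xgrid[i])
          ty = (y - ygrid[j]) / (ygrid[j + 1] - ygrid[j])
          return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
                  + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])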

  17. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  18. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as the Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in HIM-operator-based phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
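
    The IFEIF idea, a coarse FFT peak refined by iterative interpolation on two Fourier coefficients evaluated half a bin on either side of the current estimate, can be sketched as follows (a common variant of the estimator, shown for a noiseless complex tone; not the paper's code):

      import numpy as np

      def tone_frequency(x, iterations=2):
          """Single-tone frequency estimate (cycles/sample) by iterative
          interpolation on Fourier coefficients."""
          n = len(x)
          t = np.arange(n)
          k = np.argmax(np.abs(np.fft.fft(x)))       # coarse FFT peak
          delta = 0.0
          for _ in range(iterations):
              xp = np.sum(x * np.exp(-2j * np.pi * (k + delta + 0.5) * t / n))
              xm = np.sum(x * np.exp(-2j * np.pi * (k + delta - 0.5) * t / n))
              delta += 0.5 * np.real((xp + xm) / (xp - xm))
          return (k + delta) / n

      sig = np.exp(2j * np.pi * 0.1234 * np.arange(1024))
      print(tone_frequency(sig))                      # ~0.1234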

  19. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    Three-dimensional modelling of the Earth's surface is a significant tool in many engineering projects and has numerous applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTMs), which are used across the sciences, engineering, design and project administration. A key step in producing a DTM is the interpolation of elevation to create a continuous surface, and the accuracy of the available interpolation methods varies with environmental conditions and input data. In this paper, Artificial Intelligence (AI) techniques, namely Genetic Algorithms (GA) and Artificial Neural Networks (ANN), are applied to sample data to optimise conventional interpolation methods, consisting of polynomial interpolation and the Inverse Distance Weighting (IDW) method, for the production of Digital Elevation Models (DEMs); the overall aim is to evaluate the accuracy of the interpolation methods. Global interpolation over an entire neighbourhood can be extended to larger regions by dividing them into smaller subregions. The elevations obtained by applying GA and ANN individually are compared with those from the conventional interpolation methods. The results show that AI methods have high potential for the interpolation of elevations: both ANN-based interpolation and the IDW method optimised with GA yielded highly precise elevation estimates.
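
    The plain IDW baseline that the GA is used to tune can be sketched in a few lines; the GA would then search over the power parameter (and neighbourhood size) to minimise a cross-validation error:

      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2.0):
          """IDW: z(q) = sum(w_i * z_i) / sum(w_i) with w_i = d_i**(-power)."""
          d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
          d = np.maximum(d, 1e-12)        # guard against division by zero
          w = d ** (-power)
          return (w @ z_known) / w.sum(axis=1)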

  20. Ultracold Realization of Antiferromagnetic Order

    NASA Astrophysics Data System (ADS)

    Shrestha, Uttam

    2011-03-01

    We investigate numerically the experimental feasibility of observing antiferromagnetic (AF) order in bosonic mixtures of rubidium ((87)Rb) and potassium ((41)K) in a two-dimensional optical lattice with an external trapping potential. Within the mean-field approximation we have found the ground states which, for a specific range of parameters such as inter-species interactions and lattice height, interpolate from phase separation to AF order. For moderate lattice heights the coexistence of the Mott and AF phases is possible for the rubidium atoms, while the potassium atoms remain superfluid with an overlapped AF phase. To our knowledge, there has been no study of AF order in two-component systems in which one component remains in the superfluid phase while the other is in the Mott phase; this observation may therefore provide a novel regime for studying quantum magnetism in ultracold systems. This work was supported by the EU Contract EU STREP NAMEQUAM.

  1. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  2. Biphasic force response to iso-velocity stretch in airway smooth muscle.

    PubMed

    Norris, Brandon A; Lan, Bo; Wang, Lu; Pascoe, Christopher D; Swyngedouw, Nicholas E; Paré, Peter D; Seow, Chun Y

    2015-10-01

    Airway smooth muscle (ASM) in vivo is constantly subjected to oscillatory strain due to tidal breathing and deep inspirations. ASM contractility is known to be adversely affected by strains, especially those of large amplitudes. Based on the cross-bridge model of contraction, it is likely that strain impairs force generation by disrupting actomyosin cross-bridge interaction. There is also evidence that strain modulates muscle stiffness and force through induction of cytoskeletal remodeling. However, the molecular mechanism by which strain alters smooth muscle function is not entirely clear. Here, we examine the response of ASM to iso-velocity stretches to probe the components within the muscle preparation that give rise to different features in the force response. We found in ASM that force response to a ramp stretch showed a biphasic feature, with the initial phase associated with greater muscle stiffness compared with that in the later phase, and that the transition between the phases occurred at a critical strain of ∼3.3%. Only strains with amplitudes greater than the critical strain could lead to reduction in force and stiffness of the muscle in the subsequent stretches. The initial-phase stiffness was found to be linearly related to the degree of muscle activation, suggesting that the stiffness stems mainly from attached cross bridges. Both phases were affected by the degree of muscle activation and by inhibitors of myosin light-chain kinase, PKC, and Rho-kinase. Different responses due to different interventions suggest that cross-bridge and cytoskeletal stiffness is regulated differently by the kinases. Copyright © 2015 the American Physiological Society.

  3. Efficient characterization of phase space mapping in axially symmetric optical systems

    NASA Astrophysics Data System (ADS)

    Barbero, Sergio; Portilla, Javier

    2018-01-01

    Phase space mapping, typically between an object and an image plane, characterizes an optical system within a geometrical optics framework. We propose a novel conceptual framework to characterize the phase mapping in axially symmetric optical systems for arbitrary object locations, not restricted to a specific object plane. The idea is based on decomposing the phase mapping into a set of bivariate equations corresponding to different values of the radial coordinate on a specific object surface (most likely the entrance pupil). These equations are then approximated through bivariate Chebyshev interpolation at Chebyshev nodes, which guarantees uniform convergence. Additionally, we propose the use of a new concept (effective object phase space), defined as the set of points of the phase space at the first optical element (typically the entrance pupil) that are effectively mapped onto the image surface. The effective object phase space provides, by means of an inclusion test, a way to avoid tracing rays that do not reach the image surface.
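
    Tensor-product Chebyshev interpolation at Chebyshev nodes is straightforward with numpy's polynomial module; the sketch below fits one such bivariate equation (the sample mapping is a synthetic stand-in, not a real phase mapping):

      import numpy as np
      from numpy.polynomial import chebyshev as C

      n = 12
      # Chebyshev nodes of the first kind on [-1, 1]
      nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

      def cheb_interp_2d(f, deg=n - 1):
          """Tensor-product Chebyshev interpolation of f on the node grid."""
          X, Y = np.meshgrid(nodes, nodes, indexing="ij")
          V = C.chebvander2d(X.ravel(), Y.ravel(), [deg, deg])
          coef, *_ = np.linalg.lstsq(V, f(X, Y).ravel(), rcond=None)
          return coef.reshape(deg + 1, deg + 1)

      # Stand-in for one component of the mapping at a fixed radial coordinate
      coef = cheb_interp_2d(lambda x, y: np.sin(3 * x) * np.cos(2 * y))
      approx = C.chebval2d(0.3, -0.5, coef)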

  4. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).

  5. Carbachol-induced rabbit bladder smooth muscle contraction: roles of protein kinase C and Rho kinase.

    PubMed

    Wang, Tanchun; Kendig, Derek M; Smolock, Elaine M; Moreland, Robert S

    2009-12-01

    Smooth muscle contraction is regulated by phosphorylation of the myosin light chain (MLC) catalyzed by MLC kinase and dephosphorylation catalyzed by MLC phosphatase. Agonist stimulation of smooth muscle results in the inhibition of MLC phosphatase activity and a net increase in MLC phosphorylation and therefore force. The two pathways believed to be primarily important for inhibition of MLC phosphatase activity are protein kinase C (PKC)-catalyzed CPI-17 phosphorylation and Rho kinase (ROCK)-catalyzed myosin phosphatase-targeting subunit (MYPT1) phosphorylation. The goal of this study was to determine the roles of PKC and ROCK and their downstream effectors in regulating MLC phosphorylation levels and force during the phasic and sustained phases of carbachol-stimulated contraction in intact bladder smooth muscle. These studies were performed in the presence and absence of the PKC inhibitor bisindolylmaleimide-1 (Bis) or the ROCK inhibitor H-1152. Phosphorylation levels of Thr(38)-CPI-17 and Thr(696)/Thr(850)-MYPT1 were measured at different times during carbachol stimulation using site-specific antibodies. Thr(38)-CPI-17 phosphorylation increased concurrently with carbachol-stimulated force generation. This increase was reduced by inhibition of PKC during the entire contraction but was only reduced by ROCK inhibition during the sustained phase of contraction. MYPT1 showed high basal phosphorylation levels at both sites; however, only Thr(850) phosphorylation increased with carbachol stimulation; the increase was abolished by the inhibition of either ROCK or PKC. Our results suggest that during agonist stimulation, PKC regulates MLC phosphatase activity through phosphorylation of CPI-17. In contrast, ROCK phosphorylates both Thr(850)-MYPT1 and CPI-17, possibly through cross talk with a PKC pathway, but is only significant during the sustained phase of contraction. Last, our results demonstrate that there is a constitutively active pool of ROCK that phosphorylates MYPT1 in the basal state, which may account for the high resting levels of MLC phosphorylation measured in rabbit bladder smooth muscle.

  6. Universal Scaling and Critical Exponents of the Anisotropic Quantum Rabi Model.

    PubMed

    Liu, Maoxin; Chesi, Stefano; Ying, Zu-Jian; Chen, Xiaosong; Luo, Hong-Gang; Lin, Hai-Qing

    2017-12-01

    We investigate the quantum phase transition of the anisotropic quantum Rabi model, in which the rotating and counterrotating terms are allowed to have different coupling strengths. The model interpolates between two known limits with distinct universal properties. Through a combination of analytic and numerical approaches, we extract the phase diagram, scaling functions, and critical exponents, which determine the universality class at finite anisotropy (identical to the isotropic limit). We also reveal other interesting features, including a superradiance-induced freezing of the effective mass and discontinuous scaling functions in the Jaynes-Cummings limit. Our findings are extended to the few-body quantum phase transitions with N>1 spins, where we expose the same effective parameters, scaling properties, and phase diagram. Thus, a stronger form of universality is established, valid from N=1 up to the thermodynamic limit.

  7. Universal Scaling and Critical Exponents of the Anisotropic Quantum Rabi Model

    NASA Astrophysics Data System (ADS)

    Liu, Maoxin; Chesi, Stefano; Ying, Zu-Jian; Chen, Xiaosong; Luo, Hong-Gang; Lin, Hai-Qing

    2017-12-01

    We investigate the quantum phase transition of the anisotropic quantum Rabi model, in which the rotating and counterrotating terms are allowed to have different coupling strengths. The model interpolates between two known limits with distinct universal properties. Through a combination of analytic and numerical approaches, we extract the phase diagram, scaling functions, and critical exponents, which determine the universality class at finite anisotropy (identical to the isotropic limit). We also reveal other interesting features, including a superradiance-induced freezing of the effective mass and discontinuous scaling functions in the Jaynes-Cummings limit. Our findings are extended to the few-body quantum phase transitions with N >1 spins, where we expose the same effective parameters, scaling properties, and phase diagram. Thus, a stronger form of universality is established, valid from N =1 up to the thermodynamic limit.

  8. Fourier Phase Domain Steganography: Phase Bin Encoding Via Interpolation

    NASA Astrophysics Data System (ADS)

    Rivas, Edward

    2007-04-01

    In recent years there has been an increased interest in audio steganography and watermarking. This is due primarily to two reasons. First, an acute need to improve our national security capabilities in light of terrorist and criminal activity has driven new ideas and experimentation. Second, the explosive proliferation of digital media has forced the music industry to rethink how it will protect its intellectual property. Various techniques have been implemented, but the phase domain remains fertile ground for improvement due to its relative robustness to many types of distortion and its imperceptibility to the human auditory system. A new method for embedding data in the phase domain of the discrete Fourier transform of an audio signal is proposed. Focus is given to robustness and low perceptibility, while maintaining a relatively high capacity rate of up to 172 bits/s.
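
    A toy version of phase-bin encoding, not the author's exact scheme, replaces the phases of selected DFT bins with quantized values while leaving magnitudes untouched (bin indices and angles here are arbitrary choices):

      import numpy as np

      def embed_bits(signal, bits, start_bin=100):
          """Snap selected DFT phases to -pi/2 (bit 0) or +pi/2 (bit 1)."""
          spec = np.fft.rfft(signal)
          for i, b in enumerate(bits):
              k = start_bin + i
              spec[k] = np.abs(spec[k]) * np.exp(1j * (np.pi / 2 if b else -np.pi / 2))
          return np.fft.irfft(spec, n=len(signal))

      def extract_bits(signal, n_bits, start_bin=100):
          spec = np.fft.rfft(signal)
          return [int(np.angle(spec[start_bin + i]) > 0) for i in range(n_bits)]

      carrier = np.random.randn(8192)
      stego = embed_bits(carrier, [1, 0, 1, 1])
      print(extract_bits(stego, 4))                   # [1, 0, 1, 1]

    A practical scheme additionally preserves relative phase between neighbouring bins and frames, since the ear is far more sensitive to phase discontinuities than to absolute phase.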

  9. High Frequency Acoustic Reflection and Transmission in Ocean Sediments

    DTIC Science & Technology

    2005-09-30

    "... the magnitude and phase of the reflection coefficient from a smooth water/sand interface with elastic and poroelastic models", J. Acoust. Soc. Am. ... physical model of high-frequency acoustic interaction with the ocean floor, including penetration through and reflection from smooth and rough water ... and additional laboratory measurements in the ARL:UT sand tank, an improved model of sediment acoustics will be developed that is consistent with ...

  10. Volitional control of anticipatory ocular pursuit responses under stabilised image conditions in humans.

    PubMed

    Barnes, G; Goodbody, S; Collins, S

    1995-01-01

    Ocular pursuit responses have been examined in humans in three experiments in which the pursuit target image has been fully or partially stabilised on the fovea by feeding a recorded eye movement signal back to drive the target motion. The objective was to establish whether subjects could volitionally control smooth eye movement to reproduce trajectories of target motion in the absence of a concurrent target motion stimulus. In experiment 1 subjects were presented with a target moving with a triangular waveform in the horizontal axis with a frequency of 0.325 Hz and velocities of +/- 10-50 degrees/s. The target was illuminated twice per cycle for pulse durations (PD) of 160-640 ms as it passed through the centre position; otherwise subjects were in darkness. Subjects initially tracked the target motion in a conventional closed-loop mode for four cycles. Prior to the next target presentation the target image was stabilised on the fovea, so that any target motion generated resulted solely from volitional eye movement. Subjects continued to make anticipatory smooth eye movements both to the left and the right with a velocity trajectory similar to that observed in the closed-loop phase. Peak velocity in the stabilised-image mode was highly correlated with that in the prior closed-loop phase, but was slightly less (84% on average). In experiment 2 subjects were presented with a continuously illuminated target that was oscillated sinusoidally at frequencies of 0.2-1.34 Hz and amplitudes of +/- 5-20 degrees. After four cycles of closed-loop stimulation the image was stabilised on the fovea at the time of peak target displacement. Subjects continued to generate an oscillatory smooth eye velocity pattern that mimicked the sinusoidal motion of the previous closed-loop phase for at least three further cycles. The peak eye velocity generated ranged from 57-95% of that in the closed-loop phase at frequencies up to 0.8 Hz but decreased significantly at 1.34 Hz. In experiment 3 subjects were presented with a stabilised display throughout and generated smooth eye movements with peak velocity up to 84 degrees/s in the complete absence of any prior external target motion stimulus, by transferring their attention alternately to left and right of the centre of the display. Eye velocity was found to be dependent on the eccentricity of the centre of attention and the frequency of alternation. When the target was partially stabilised on the retina by feeding back only a proportion (Kf = 0.6-0.9) of the eye movement signal to drive the target, subjects were still able to generate smooth movements at will, even though the display did not move as far or as fast as the eye. Peak eye velocity decreased as Kf decreased, suggesting that there was a continuous competitive interaction between the volitional drive and the visual feedback provided by the relative motion of the display with respect to the retina. These results support the evidence for two separate mechanisms of smooth eye movement control in ocular pursuit: reflex control from retinal velocity error feedback and volitional control from an internal source. Arguments are presented to indicate how smooth pursuit may be controlled by matching a voluntarily initiated estimate of the required smooth movement, normally derived from storage of past re-afferent information, against current visual feedback information. Such a mechanism allows preemptive smooth eye movements to be made that can overcome the inherent delays in the visual feedback pathway.

  11. Spatio-temporal patterns of Barmah Forest virus disease in Queensland, Australia.

    PubMed

    Naish, Suchithra; Hu, Wenbiao; Mengersen, Kerrie; Tong, Shilu

    2011-01-01

    Barmah Forest virus (BFV) disease is a common and wide-spread mosquito-borne disease in Australia. This study investigated the spatio-temporal patterns of BFV disease in Queensland, Australia using geographical information system (GIS) tools and geostatistical analysis. We calculated the incidence rates and standardised incidence rates of BFV disease. Moran's I statistic was used to assess the spatial autocorrelation of BFV incidences. Spatial dynamics of BFV disease was examined using semi-variogram analysis. Interpolation techniques were applied to visualise and display the spatial distribution of BFV disease in statistical local areas (SLAs) throughout Queensland. Mapping of BFV disease by SLAs reveals the presence of substantial spatio-temporal variation over time. Statistically significant differences in BFV incidence rates were identified among age groups (χ(2) = 7587, df = 7327, p<0.01). There was a significant positive spatial autocorrelation of BFV incidence for all four periods, with the Moran's I statistic ranging from 0.1506 to 0.2901 (p<0.01). Semi-variogram analysis and smoothed maps created from interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. This is the first study to examine spatial and temporal variation in the incidence rates of BFV disease across Queensland using GIS and geostatistics. The BFV transmission varied with age and gender, which may be due to exposure rates or behavioural risk factors. There are differences in the spatio-temporal patterns of BFV disease which may be related to local socio-ecological and environmental factors. These research findings may have implications in the BFV disease control and prevention programs in Queensland.
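
    For reference, the global Moran's I used above follows a standard closed form (sketch below; the spatial weight matrix, e.g. SLA contiguity, is supplied by the analyst):

      import numpy as np

      def morans_i(values, weights):
          """Global Moran's I: I = (n / S0) * (x' W x) / (x' x), with x the
          mean-centred values and S0 the sum of all spatial weights."""
          x = np.asarray(values, dtype=float)
          x = x - x.mean()
          w = np.asarray(weights, dtype=float)   # n-by-n, zero diagonal
          return (len(x) / w.sum()) * (x @ w @ x) / (x @ x)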

  12. On the wall-normal velocity of the compressible boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1991-01-01

    Numerical methods for the compressible boundary-layer equations are facilitated by transformation from the physical (x,y) plane to a computational (xi,eta) plane in which the evolution of the flow is 'slow' in the time-like xi direction. The commonly used Levy-Lees transformation results in a computationally well-behaved problem for a wide class of non-similar boundary-layer flows, but it complicates interpretation of the solution in physical space. Specifically, the transformation is inherently nonlinear, and the physical wall-normal velocity is transformed out of the problem and is not readily recovered. In light of recent research which shows mean-flow non-parallelism to significantly influence the stability of high-speed compressible flows, the contribution of the wall-normal velocity in the analysis of stability should not be routinely neglected. Conventional methods extract the wall-normal velocity in physical space from the continuity equation, using finite-difference techniques and interpolation procedures. The present spectrally-accurate method extracts the wall-normal velocity directly from the transformation itself, without interpolation, leaving the continuity equation free as a check on the quality of the solution. The present method for recovering wall-normal velocity, when used in conjunction with a highly-accurate spectral collocation method for solving the compressible boundary-layer equations, results in a discrete solution which is extraordinarily smooth and accurate, and which satisfies the continuity equation nearly to machine precision. These qualities make the method well suited to the computation of the non-parallel mean flows needed by spatial direct numerical simulations (DNS) and parabolized stability equation (PSE) approaches to the analysis of stability.

  13. Systematic interpolation method predicts protein chromatographic elution with salt gradients, pH gradients and combined salt/pH gradients.

    PubMed

    Creasy, Arch; Barker, Gregory; Carta, Giorgio

    2017-03-01

    A methodology is presented to predict protein elution behavior from an ion exchange column using either individual or combined pH and salt gradients based on high-throughput batch isotherm data. The buffer compositions are first optimized to generate linear pH gradients from pH 5.5 to 7 with defined concentrations of sodium chloride. Next, high-throughput batch isotherm data are collected for a monoclonal antibody on the cation exchange resin POROS XS over a range of protein concentrations, salt concentrations, and solution pH. Finally, a previously developed empirical interpolation (EI) method is extended to describe protein binding as a function of the protein and salt concentration and solution pH without using an explicit isotherm model. The interpolated isotherm data are then used with a lumped kinetic model to predict the protein elution behavior. Experimental results obtained for laboratory scale columns show excellent agreement with the predicted elution curves for both individual and combined pH and salt gradients at protein loads up to 45 mg/mL of column. Numerical studies show that the model predictions are robust as long as the isotherm data cover the range of mobile phase compositions where the protein actually elutes from the column. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
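
    The empirical interpolation step can be illustrated with a gridded lookup in place of an explicit isotherm model; the grid and the isotherm surface below are synthetic stand-ins for the measured batch data:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      c = np.linspace(0.1, 10.0, 15)      # liquid-phase protein, mg/mL
      s = np.linspace(0.0, 500.0, 11)     # NaCl, mM
      ph = np.linspace(5.5, 7.0, 7)       # solution pH
      C, S, P = np.meshgrid(c, s, ph, indexing="ij")
      q_table = 80 * C / (1 + C) * np.exp(-S / 300.0) * (7.5 - P)  # synthetic q data

      q_interp = RegularGridInterpolator((c, s, ph), q_table)

      # A lumped kinetic column model would query the bound concentration
      # q*(c, salt, pH) at every axial node and time step:
      q_star = q_interp((2.5, 150.0, 6.2))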

  14. Digital micromirror device as amplitude diffuser for multiple-plane phase retrieval

    NASA Astrophysics Data System (ADS)

    Abregana, Timothy Joseph T.; Hermosa, Nathaniel P.; Almoro, Percival F.

    2017-06-01

    Previous implementations of the phase diffuser used in the multiple-plane phase retrieval method included a diffuser glass plate with fixed optical properties or a programmable yet expensive spatial light modulator. Here, a model for phase retrieval based on a digital micromirror device as an amplitude diffuser is presented. The technique offers a programmable, convenient and low-cost amplitude diffuser for non-stagnating iterative phase retrieval. The technique is demonstrated in the reconstruction of smooth object wavefronts.

  15. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…

  16. Certified dual-corrected radiation patterns of phased antenna arrays by offline–online order reduction of finite-element models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de

    2015-10-15

    This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.

  17. Influence of slot number and pole number in fault-tolerant brushless dc motors having unequal tooth widths

    NASA Astrophysics Data System (ADS)

    Ishak, D.; Zhu, Z. Q.; Howe, D.

    2005-05-01

    The electromagnetic performance of fault-tolerant three-phase permanent magnet brushless dc motors, in which the wound teeth are wider than the unwound teeth, the tooth tips span approximately one pole pitch, and the numbers of slots and poles are similar, is investigated. It is shown that they have a more trapezoidal phase back-EMF waveform, a higher torque capability, and a lower torque ripple than similar fault-tolerant machines with equal tooth widths. However, these benefits gradually diminish as the pole number is increased, due to the effect of interpole leakage flux.

  18. Ascorbate transport in pig coronary artery smooth muscle: Na(+) removal and oxidative stress increase loss of accumulated cellular ascorbate.

    PubMed

    Holmes, M E; Samson, S E; Wilson, J X; Dixon, S J; Grover, A K

    2000-01-01

    Pig deendothelialized coronary artery rings and smooth muscle cells cultured from them accumulated ascorbate from medium containing Na(+). The accumulated material was determined to be ascorbate using high-performance liquid chromatography. We further characterized ascorbate uptake in the cultured cells. The data fitted best with a Hill coefficient of 1 for ascorbate (K(asc) = 22 +/- 2 microM) and 2 for Na(+) (K(Na) = 84 +/- 10 mM). The anion transport inhibitors sulfinpyrazone and 4,4'-diisothiocyanatostilbene-2,2'-disulfonate (DIDS) inhibited the uptake. Transferring cultured cells loaded with (14)C-ascorbate into an ascorbate-free solution resulted in a biphasic loss of radioactivity - an initial sulfinpyrazone-insensitive faster phase and a late sulfinpyrazone-sensitive slower phase. Transferring loaded cells into a Na(+)-free medium increased the loss in the initial phase in a sulfinpyrazone-sensitive manner, suggesting that the ascorbate transporter is bidirectional. Including peroxide or superoxide in the solution increased the loss of radioactivity. Thus, ascorbate accumulated in coronary artery smooth muscle cells by a Na(+)-dependent transporter was lost in an ascorbate-free solution, and the loss was increased by removing Na(+) from the medium or by oxidative stress. Copyright 2000 S. Karger AG, Basel

  19. Geometric phase of mixed states for three-level open systems

    NASA Astrophysics Data System (ADS)

    Jiang, Yanyan; Ji, Y. H.; Xu, Hualan; Hu, Li-Yun; Wang, Z. S.; Chen, Z. Q.; Guo, L. P.

    2010-12-01

    The geometric phase of a mixed state for a three-level open system is defined by connecting the density matrix with a nonunit vector ray in a three-dimensional complex Hilbert space. Because the geometric phase depends only on the smooth curve in this space, it is formulated entirely in terms of geometric structures. In the pure-state limit, our approach agrees with the Berry phase, the Pancharatnam phase, and the Aharonov-Anandan phase. Furthermore, we find that the Berry phase of the mixed state is correlated with the population inversions of the three-level open system.

  20. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  1. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, DoFP polarization imaging sensors suffer from large registration errors as well as reduced spatial resolution in the output. These drawbacks can be mitigated by applying proper image interpolation methods to the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods using visual examples and root mean square error (RMSE) comparisons. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene, ranging from dim to bright conditions.

  2. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.

  3. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  4. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
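
    A minimal sketch of the temporal component of such a hybrid scheme, linear gap-filling of a single pixel's composite series (illustrative names; the operational product is more elaborate):

      import numpy as np

      def fill_time_gaps(series):
          """Fill missing (NaN) composites by linear interpolation in time;
          endpoints are held at the nearest valid value."""
          t = np.arange(series.size)
          good = ~np.isnan(series)
          return np.interp(t, t[good], series[good])

      # A hybrid scheme falls back to spatial neighbours (e.g. pixels of the
      # same land cover within a window) when a pixel's whole series is missing.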

  5. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
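
    For reference, trilinear interpolation at a continuous voxel index is a straightforward 8-corner weighting (generic sketch, not the authors' optimized implementation):

      import numpy as np

      def trilinear(volume, point):
          """Trilinear interpolation of a 3-D array at continuous index (x, y, z)."""
          p = np.asarray(point, dtype=float)
          i0 = np.clip(np.floor(p).astype(int), 0, np.array(volume.shape) - 2)
          f = p - i0                        # fractional offsets
          out = 0.0
          for corner in range(8):           # visit the 8 surrounding voxels
              d = np.array([(corner >> 2) & 1, (corner >> 1) & 1, corner & 1])
              w = np.prod(np.where(d == 1, f, 1.0 - f))
              out += w * volume[tuple(i0 + d)]
          return out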

  6. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.

  7. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

    Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. Compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts in sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.

  8. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated how uncertainty arising from the choice of interpolation method propagates into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. The results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly (by a factor of more than 10) depending on the chosen interpolation method. The choice of an interpolation method should therefore be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
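
    Error statistics such as ME, MAE and RMSE are typically obtained by leave-one-out cross-validation, sketched generically below; `predict` is any point interpolator (an IDW or Kriging wrapper, for instance):

      import numpy as np

      def loocv_errors(xy, z, predict):
          """Leave one observation well out, predict it from the rest,
          and accumulate the error statistics."""
          n = len(z)
          preds = np.empty(n)
          for i in range(n):
              mask = np.arange(n) != i
              preds[i] = predict(xy[mask], z[mask], xy[i][None, :])[0]
          err = preds - z
          return {"ME": err.mean(),
                  "MAE": np.abs(err).mean(),
                  "RMSE": np.sqrt((err ** 2).mean()),
                  "PearsonR": np.corrcoef(preds, z)[0, 1]}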

  9. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle area of the alluvial-proluvial fan is relatively higher than at the top and bottom. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking of farmland and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.

  10. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
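
    A minimal sketch of the space-time "extension approach" with a k-d tree; `time_scale` plays the role of the paper's factor weighting the temporal dimension against the spatial ones:

      import numpy as np
      from scipy.spatial import cKDTree

      def st_idw(xy, t, values, query_xyt, time_scale=1.0, k=12, power=2.0):
          """Spatiotemporal IDW: time is appended as a scaled third coordinate
          and a k-d tree returns the k nearest space-time neighbours."""
          pts = np.column_stack([xy, t * time_scale])
          tree = cKDTree(pts)
          q = np.column_stack([query_xyt[:, :2], query_xyt[:, 2] * time_scale])
          d, idx = tree.query(q, k=k)
          w = 1.0 / np.maximum(d, 1e-12) ** power
          return (w * values[idx]).sum(axis=1) / w.sum(axis=1)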

  11. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146

  12. Modified geometrical optics of a smoothly inhomogeneous isotropic medium: the anisotropy, Berry phase, and the optical Magnus effect.

    PubMed

    Bliokh, K Yu; Bliokh, Yu P

    2004-08-01

    We present a modification of the geometrical optics method, which allows one to properly separate the complex amplitude and the phase of the wave solution. Applying this modification to a smoothly inhomogeneous isotropic medium, we show that in the first geometrical optics approximation the medium is weakly anisotropic. The refractive index, being dependent on the direction of the wave vector, contains a correction that is proportional to the Berry geometric phase. Two independent eigenmodes of right-hand and left-hand circular polarizations exist in the medium. Their group velocities and phase velocities differ. The difference in the group velocities results in the shift of the rays of different polarizations (the optical Magnus effect). The difference in the phase velocities causes an increase of the Berry phase along with the interference of two modes leading to the familiar Rytov law about the rotation of the polarization plane of a wave. The theory developed suggests that both the optical Magnus effect and the Berry phase are accompanying nonlocal topological effects. In this paper the Hamilton ray equations giving a unified description for both of these phenomena have been derived and a novel splitting effect for a ray of noncircular polarization has been predicted. Specific examples are also discussed.

  13. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by a Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but an uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show features similar to the results of the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful for achieving spatially varying resolution in regions with heterogeneous ray path distribution.
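
    A minimal sketch of the spatially varying smoothing idea, assuming a simple Gaussian covariance of the form sigma^2 exp(-d^2 / (2 L_i L_j)) with correlation lengths tied to ray density; the paper's exact mapping from ray density to correlation length is not given here, so the linear rule below is purely illustrative.

      import numpy as np

      def correlation_lengths(ray_density, L_min=50.0, L_max=300.0):
          """Map ray-path density at each node to a correlation length (km):
          dense sampling -> short length (more detail allowed),
          sparse sampling -> long length (stronger smoothing).
          The linear mapping is illustrative, not the paper's exact rule."""
          rho = np.asarray(ray_density, dtype=float)
          rho = (rho - rho.min()) / max(np.ptp(rho), 1e-12)   # normalise to [0, 1]
          return L_max - rho * (L_max - L_min)

      def prior_covariance(nodes, lengths, sigma=0.05):
          """Gaussian a priori model covariance with node-dependent lengths.
          C_ij = sigma^2 * exp(-d_ij^2 / (2 L_i L_j)) is one simple symmetric
          choice; production codes may prefer a provably positive-definite
          nonstationary kernel instead."""
          nodes = np.asarray(nodes, dtype=float)
          d2 = ((nodes[:, None, :] - nodes[None, :, :])**2).sum(axis=2)
          LL = np.outer(lengths, lengths)
          return sigma**2 * np.exp(-d2 / (2.0 * LL))

      # toy grid: 5 nodes along a line (km), denser rays near the middle
      nodes = np.array([[0, 0], [100, 0], [200, 0], [300, 0], [400, 0]])
      L = correlation_lengths([2, 8, 20, 8, 2])
      print(prior_covariance(nodes, L).round(4))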

  14. Super-resolution technique for CW lidar using Fourier transform reordering and Richardson-Lucy deconvolution.

    PubMed

    Campbell, Joel F; Lin, Bing; Nehrir, Amin R; Harrison, F Wallace; Obland, Michael D

    2014-12-15

    An interpolation method is described for range measurements of high-precision altimetry with repeating intensity-modulated continuous wave (IM-CW) lidar waveforms using binary phase shift keying (BPSK), where the range profile is determined by means of a cross-correlation between the digital form of the transmitted signal and the digitized return signal collected by the lidar receiver. This method reorders the array elements in the frequency domain to convert a repeating synthetic pulse signal into a single, highly interpolated pulse. Richardson-Lucy deconvolution is then applied to further sharpen the pulse. We show that the sampling resolution and pulse width can be enhanced by about two orders of magnitude using the signal processing algorithms presented, thus breaking the fundamental resolution limit for BPSK modulation of a particular bandwidth and bit rate. We demonstrate the usefulness of this technique for determining cloud and tree canopy thicknesses far beyond this fundamental limit in a lidar not designed for this purpose.
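
    Richardson-Lucy deconvolution itself is a standard iterative scheme, and a plain 1-D version can be sketched in a few lines; the toy pulse below stands in for the interpolated lidar return and is not the authors' data.

      import numpy as np

      def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
          """Plain 1-D Richardson-Lucy deconvolution (illustrative sketch).
          observed: measured pulse profile (non-negative).
          psf: point-spread function, normalised here to sum to 1."""
          psf = np.asarray(psf, dtype=float)
          psf = psf / psf.sum()
          psf_mirror = psf[::-1]
          estimate = np.full_like(np.asarray(observed, dtype=float), 0.5)
          for _ in range(n_iter):
              blurred = np.convolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(blurred, eps)
              estimate *= np.convolve(ratio, psf_mirror, mode="same")
          return estimate

      # toy example: two closely spaced returns blurred by a Gaussian pulse
      truth = np.zeros(200); truth[90] = 1.0; truth[110] = 0.6
      psf = np.exp(-0.5 * (np.arange(-30, 31) / 8.0)**2)
      observed = np.convolve(truth, psf / psf.sum(), mode="same")
      recovered = richardson_lucy(observed, psf, n_iter=200)
      print(recovered.argmax())   # peaks sharpen back toward positions 90 and 110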

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Ying -Hsuan; Shao, Shu -Heng; Wang, Yifan

    We study up to 8-derivative terms in the Coulomb branch effective action of (1,1) little string theory, by collecting results of 4-gluon scattering amplitudes from both perturbative 6D super-Yang-Mills theory up to 4-loop order, and tree-level double scaled little string theory (DSLST). In previous work we have matched the 6-derivative term from the 6D gauge theory to DSLST, indicating that this term is protected on the entire Coulomb branch. The 8-derivative term, on the other hand, is unprotected. In this paper we compute the 8-derivative term by interpolating from the two limits, near the origin and near infinity on the Coulomb branch, numerically from SU(k) SYM and DSLST respectively, for k=2,3,4,5. We discuss the implication of this result on the UV completion of 6D SYM as well as the strong coupling completion of DSLST. As a result, we also comment on analogous interpolating functions in the Coulomb phase of circle-compactified (2,0) little string theory.

  16. Lithospheric layering in the North American craton revealed by including Short Period Constraints in Full Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2017-12-01

    Recent receiver function studies of the North American craton suggest the presence of significant layering within the cratonic lithosphere, with significant lateral variations in the depth of the velocity discontinuities. These structural boundaries have been confirmed recently using a transdimensional Markov chain Monte Carlo approach (TMCMC), inverting surface wave dispersion data and converted phases simultaneously (Calò et al., 2016; Roy and Romanowicz 2017). The lateral resolution of upper mantle structure can be improved with a high density of broadband seismic stations, or with a sparse network using full waveform inversion based on numerical wavefield computation methods such as the Spectral Element Method (SEM). However, inverting for discontinuities with strong topography, such as MLDs or the LAB, presents challenges in an inversion framework, both computationally, due to the short periods required, and from the point of view of the stability of the inversion. To overcome these limitations, and to improve the resolution of layering in the upper mantle, we are developing a methodology that combines full waveform inversion tomography and information provided by short-period seismic observables. We have extended the 30 1D radially anisotropic shear velocity profiles of Calò et al. 2016 to several other stations, for which we used a recent shear velocity model (Clouzet et al., 2017) as a constraint in the modeling. These 1D profiles, including both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth), are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built after 1) homogenization of the layered 1D models and 2) interpolation between the 1D smooth profiles and the model of Clouzet et al. 2017, resulting in a smooth 3D starting model. Waveforms used in the inversion are filtered at periods longer than 30 s. We use the SEM code "RegSEM" for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. The resulting volumetric velocity perturbations around the homogenized starting model are then added to the discontinuous 3D starting model by dehomogenizing the model. We present here the first results of such an approach for refining structure in the North American continent.

  17. MRI Features of Hepatocellular Carcinoma Related to Biologic Behavior

    PubMed Central

    Cho, Eun-Suk

    2015-01-01

    Imaging studies including magnetic resonance imaging (MRI) play a crucial role in the diagnosis and staging of hepatocellular carcinoma (HCC). Several recent studies reveal a large number of MRI features related to the prognosis of HCC. In this review, we discuss various MRI features of HCC and their implications for the diagnosis and prognosis as imaging biomarkers. As a whole, the favorable MRI findings of HCC are small size, encapsulation, intralesional fat, high apparent diffusion coefficient (ADC) value, and smooth margins or hyperintensity on the hepatobiliary phase of gadoxetic acid-enhanced MRI. Unfavorable findings include large size, multifocality, low ADC value, non-smooth margins or hypointensity on hepatobiliary phase images. MRI findings are potential imaging biomarkers in patients with HCC. PMID:25995679

  18. The OpenCalphad thermodynamic software interface.

    PubMed

    Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G

    2016-12-01

    Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times, thermodynamic data are frequently pre-calculated into "lookup tables" to speed up calculations. This creates additional uncertainties, as data must be interpolated or extrapolated and conditions may differ from those assumed for creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the OpenCalphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics, introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility.

  19. The OpenCalphad thermodynamic software interface

    PubMed Central

    Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G

    2017-01-01

    Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times, thermodynamic data are frequently pre-calculated into "lookup tables" to speed up calculations. This creates additional uncertainties, as data must be interpolated or extrapolated and conditions may differ from those assumed for creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the OpenCalphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics, introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility. PMID:28260838

  20. Single-shot three-dimensional reconstruction based on structured light line pattern

    NASA Astrophysics Data System (ADS)

    Wang, ZhenZhou; Yang, YongMing

    2018-07-01

    Single-shot reconstruction of an object is of great importance in many applications in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from a distorted line pattern. This technique makes use of image processing techniques to segment and cluster the projected structured light line pattern from one single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix, which is then transformed to a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method was able to reconstruct the three-dimensional shape of the object robustly from a single image.
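
    The low-to-full-resolution step can be illustrated with an off-the-shelf bivariate spline; the synthetic phase matrix below merely stands in for the coordinates extracted from the clustered lines.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      # hypothetical low-resolution phase matrix sampled at the clustered
      # line coordinates (a smooth synthetic surface stands in for it here)
      rows = np.linspace(0, 1, 12)          # normalised line coordinates
      cols = np.linspace(0, 1, 16)
      phase_lr = np.sin(2 * np.pi * rows)[:, None] + 0.5 * np.cos(2 * np.pi * cols)

      # cubic spline interpolation up to the full camera resolution
      spline = RectBivariateSpline(rows, cols, phase_lr, kx=3, ky=3)
      rows_hr = np.linspace(0, 1, 480)
      cols_hr = np.linspace(0, 1, 640)
      phase_hr = spline(rows_hr, cols_hr)   # full-resolution phase map
      print(phase_hr.shape)                 # (480, 640)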

  1. Method for utilizing properties of the sinc(x) function for phase retrieval on Nyquist-under-sampled data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor); Smith, Jeffrey Scott (Inventor); Aronstein, David L. (Inventor)

    2012-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for simulating propagation of an electromagnetic field, performing phase retrieval, or sampling a band-limited function. A system practicing the method generates transformed data using a discrete Fourier transform which samples a band-limited function f(x) without interpolating or modifying received data associated with the function f(x), wherein an interval between repeated copies in a periodic extension of the function f(x) obtained from the discrete Fourier transform is associated with a sampling ratio Q, defined as a ratio of a sampling frequency to a band-limited frequency, and wherein Q is assigned a value between 1 and 2 such that substantially no aliasing occurs in the transformed data, and retrieves a phase in the received data based on the transformed data, wherein the phase is used as feedback to an optical system.

  2. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs, as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
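
    For a rough feel of how sampling density drives interpolation accuracy, the sketch below interpolates a synthetic depth surface from a 5% sample using SciPy's griddata; natural neighbor and kriging are not available there, so nearest, linear and cubic stand in for the methods compared in the study.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(0)

      # synthetic "true" depth surface over a 30 m x 7 m reach (arbitrary units)
      gx, gy = np.meshgrid(np.linspace(0, 30, 120), np.linspace(0, 7, 28))
      true_depth = 0.4 + 0.2 * np.sin(gx / 5.0) + 0.05 * gy

      # sample ~5% of the cells, as in a sparse habitat survey
      n = true_depth.size
      idx = rng.choice(n, size=int(0.05 * n), replace=False)
      pts = np.column_stack([gx.ravel()[idx], gy.ravel()[idx]])
      vals = true_depth.ravel()[idx]

      for method in ("nearest", "linear", "cubic"):
          est = griddata(pts, vals, (gx, gy), method=method)
          ok = ~np.isnan(est)     # linear/cubic leave NaNs outside the convex hull
          rmse = np.sqrt(np.mean((est[ok] - true_depth[ok])**2))
          print(f"{method:8s} RMSE = {rmse:.4f}")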

  3. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross-correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the interpolation methods did not produce statistically different relative peak areas from each other; even so, their performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data.
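
    Gaussian-fitting interpolation reduces to fitting a peak model to the few first-dimension samples and evaluating it on a fine grid; the sketch below shows the idea with scipy.optimize.curve_fit on a synthetic peak (all values illustrative).

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(t, a, t0, sigma, baseline):
          return a * np.exp(-0.5 * ((t - t0) / sigma)**2) + baseline

      # a chromatographic peak sampled coarsely in the first dimension
      t = np.arange(0.0, 10.0, 1.0)                  # one sample per modulation
      signal = gaussian(t, a=1.0, t0=4.3, sigma=0.9, baseline=0.02)

      # fit a Gaussian to the few available points, then evaluate it on a
      # fine grid -- this is the "Gaussian fitting" style of interpolation
      popt, _ = curve_fit(gaussian, t, signal,
                          p0=(signal.max(), t[signal.argmax()], 1.0, 0.0))
      t_fine = np.arange(0.0, 10.0, 0.01)
      reconstructed = gaussian(t_fine, *popt)
      print(f"estimated retention time: {popt[1]:.3f} (true 4.300)")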

  4. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
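
    For reference, HEVC produces luma half-sample positions with an 8-tap filter; the sketch below applies what is, to the best of our knowledge, the standard half-pel kernel along one row, with simplified edge handling (a real encoder stages horizontal and vertical passes and intermediate bit depths).

      import numpy as np

      # 8-tap HEVC luma half-sample interpolation kernel (taps sum to 64)
      HEVC_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

      def half_pel_row(pixels):
          """Interpolate half-sample positions along one row of luma pixels.
          Edge handling (edge replication here) is simplified relative to a
          real encoder."""
          padded = np.pad(pixels.astype(np.int32), (3, 4), mode="edge")
          out = np.empty(len(pixels), dtype=np.int32)
          for i in range(len(pixels)):
              out[i] = (padded[i:i + 8] * HEVC_HALF).sum()
          # normalise with rounding and clip to the 8-bit range
          return np.clip((out + 32) >> 6, 0, 255)

      row = np.array([100, 102, 110, 130, 160, 170, 168, 165], dtype=np.uint8)
      print(half_pel_row(row))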

  5. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Techniques for selecting the interpolation points are also discussed.

  6. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media

    DTIC Science & Technology

    2015-09-24

    TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.

  7. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of the tensors represented by quaternions. Then we revised the size and direction of the tensor separately according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
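
    The Log-Euclidean baseline that the improved method is compared against is easy to sketch: interpolate the matrix logarithms linearly and exponentiate, which keeps the tensor positive-definite. The code below is that baseline, not the paper's spectral-quaternion scheme.

      import numpy as np

      def spd_log(D):
          """Matrix logarithm of a symmetric positive-definite tensor."""
          w, V = np.linalg.eigh(D)
          return V @ np.diag(np.log(w)) @ V.T

      def spd_exp(S):
          """Matrix exponential of a symmetric tensor."""
          w, V = np.linalg.eigh(S)
          return V @ np.diag(np.exp(w)) @ V.T

      def log_euclidean_interp(D1, D2, t):
          """Log-Euclidean interpolation: exp((1-t) log D1 + t log D2)
          stays positive-definite for t in [0, 1]."""
          return spd_exp((1.0 - t) * spd_log(D1) + t * spd_log(D2))

      # two synthetic prolate diffusion tensors with rotated principal axes
      D1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
      R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      D2 = R @ D1 @ R.T
      Dmid = log_euclidean_interp(D1, D2, 0.5)
      print(np.linalg.eigvalsh(Dmid))   # eigenvalues remain positive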

  8. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
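
    The kernel-as-membership-function idea can be sketched on a unit square: interpolating from the four corner values with a conjunction (product) of per-axis triangular memberships reproduces bilinear interpolation, and swapping the kernel changes the interpolant. Names and data below are illustrative, not from the paper.

      import numpy as np

      def triangular_membership(u):
          """Linear-kernel membership: 1 at the node, 0 at distance 1.
          Choosing another kernel (cubic, Lanczos, ...) changes the
          resulting interpolant."""
          return np.maximum(0.0, 1.0 - np.abs(u))

      def flhi_like_interp(u, v, corner_vals):
          """Interpolate inside a unit square from its 4 corner values using
          a conjunction (product) of per-axis membership functions.  With
          the triangular kernel this reproduces bilinear interpolation."""
          total, weight_sum = 0.0, 0.0
          for (cu, cv), val in corner_vals.items():
              w = triangular_membership(u - cu) * triangular_membership(v - cv)
              total += w * val
              weight_sum += w
          return total / weight_sum

      corners = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 2.0, (1, 1): 6.0}
      print(flhi_like_interp(0.25, 0.5, corners))   # matches bilinear: 2.25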

  9. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) regression for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
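
    As a much-simplified stand-in for LW-PLS, the sketch below fits a locally weighted straight line to the RRIs observed around the gap; the Gaussian time weighting is the "just-in-time" ingredient, while the full method builds a local PLS model instead.

      import numpy as np

      def lw_interpolate(t_missing, t_obs, rri_obs, bandwidth=5.0):
          """Locally weighted linear regression to estimate a missing R-R
          interval at time t_missing: weights are Gaussian in time, and a
          weighted straight line is fitted to the nearby observed RRIs."""
          w = np.exp(-0.5 * ((t_obs - t_missing) / bandwidth)**2)
          X = np.column_stack([np.ones_like(t_obs), t_obs])
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rri_obs)
          return beta[0] + beta[1] * t_missing

      # observed beat times (s) and RRIs (ms); beats around t = 12 s were lost
      t_obs = np.array([0, 1, 2, 3, 4, 8, 9, 10, 11, 14, 15, 16], dtype=float)
      rri = 800 + 30 * np.sin(t_obs / 3.0)
      print(f"interpolated RRI at t=12 s: {lw_interpolate(12.0, t_obs, rri):.1f} ms")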

  10. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  11. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the view of the entire 360 degrees around the optical axis onto an annular plane based on flat cylinder perspective. Due to its infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays an important role in applications such as robot vision, surveillance and virtual reality. An annular image needs to be unwrapped to a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images, considering the characteristics of the panoramic image. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% relative to cubic spline interpolation.
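
    A bicubic Bezier patch can be evaluated with the de Casteljau scheme, which is the numerical core of such an interpolation; the control values below are illustrative grey levels, not the paper's optimized formulation.

      import numpy as np

      def de_casteljau(ctrl, t):
          """Evaluate a 1-D Bezier curve with control points `ctrl` at t in [0, 1]."""
          pts = np.asarray(ctrl, dtype=float)
          while len(pts) > 1:
              pts = (1.0 - t) * pts[:-1] + t * pts[1:]
          return pts[0]

      def bezier_patch(ctrl_grid, u, v):
          """Evaluate a bicubic Bezier patch: collapse each row at u, then
          collapse the resulting column at v."""
          row_pts = np.array([de_casteljau(row, u) for row in ctrl_grid])
          return de_casteljau(row_pts, v)

      # 4x4 grid of control values (e.g. grey levels around the target pixel)
      ctrl = np.array([[10, 12, 14, 15],
                       [11, 13, 16, 18],
                       [12, 15, 19, 22],
                       [13, 17, 21, 25]], dtype=float)
      print(bezier_patch(ctrl, 0.35, 0.6))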

  12. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation method, widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, which share some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as an interpolation basis.

  13. Experimental demonstration of laser imprint reduction using underdense foams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delorme, B.; Casner, A.; CELIA, University of Bordeaux-CNRS-CEA, F-33400 Talence

    2016-04-15

    Reducing the detrimental effect of the Rayleigh-Taylor (RT) instability on target performance is a critical challenge. For this purpose, the use of targets coated with low-density foams is a promising approach to reduce the laser imprint. This article presents results of ablative RT instability growth measurements, performed on the OMEGA laser facility in direct drive, for plastic foils coated with underdense foams. The laser beam smoothing is explained by the parametric instabilities developing in the foam and reducing the laser imprint on the plastic (CH) foil. The initial perturbation pre-imposed by means of a specific phase plate was shown to be smoothed using different foam characteristics. Numerical simulations of the laser beam smoothing in the foam and of the RT growth are performed with a suite of paraxial electromagnetic and radiation hydrodynamic codes. They confirmed the foam smoothing effect under the experimental conditions.

  14. THE EFFECT OF SMOOTH MUSCLE ON THE INTERCELLULAR SPACES IN TOAD URINARY BLADDER

    PubMed Central

    DiBona, Donald R.; Civan, Mortimer M.

    1970-01-01

    Phase microscopy of toad urinary bladder has demonstrated that vasopressin can cause an enlargement of the epithelial intercellular spaces under conditions of no net transfer of water or sodium. The suggestion that this phenomenon is linked to the hormone's action as a smooth muscle relaxant has been tested and verified with the use of other agents affecting smooth muscle: atropine and adenine compounds (relaxants), K+ and acetylcholine (contractants). Furthermore, it was possible to reduce the size and number of intercellular spaces, relative to a control, while increasing the rate of osmotic water flow. A method for quantifying these results has been developed and shows that they are, indeed, significant. It is concluded, therefore, that the configuration of intercellular spaces is not a reliable index of water flow across this epithelium and that such a morphologic-physiologic relationship is tenuous in any epithelium supported by a submucosa rich in smooth muscle. PMID:4915450

  15. Performance Simulation & Engineering Analysis/Design and Verification of a Shock Mitigation System for a Rover Landing on Mars

    NASA Astrophysics Data System (ADS)

    Ullio, Roberto; Gily, Alessandro; Jones, Howard; Geelen, Kelly; Larranaga, Jonan

    2014-06-01

    In the frame of the ESA Mars Robotic Exploration Preparation (MREP) programme, and within its Technology Development Plan [1], the activity "E913-007MM Shock Mitigation Operating Only at Touchdown by use of minimalist/dispensable Hardware" (SMOOTH) was conducted under the framework of Rover technologies and in support of the ESA MREP Mars Precision Lander (MPL) Phase A system study, with the objectives to: (1) study the behaviour of the Sample Fetching Rover (SFR) landing on Mars on its wheels; (2) investigate and implement into the design of the SFR Locomotion Sub-System (LSS) an impact energy absorption system (SMOOTH); and (3) verify by simulation the performance of SMOOTH. The main purpose of this paper is to present the numerical simulation results obtained and to explain how these results have been utilized, first to iterate on the design of the SMOOTH concept and then to validate its performance.

  16. Real-space Berry phases: Skyrmion soccer (invited)

    NASA Astrophysics Data System (ADS)

    Everschor-Sitte, Karin; Sitte, Matthias

    2014-05-01

    Berry phases occur when a system adiabatically evolves along a closed curve in parameter space. This tutorial-like article focuses on Berry phases accumulated in real space. In particular, we consider the situation where an electron traverses a smooth magnetic structure, while its magnetic moment adjusts to the local magnetization direction. Mapping the adiabatic physics to an effective problem in terms of emergent fields reveals that certain magnetic textures, skyrmions, are tailor-made to study these Berry phase effects.

  17. Real-space Berry phases: Skyrmion soccer (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Everschor-Sitte, Karin, E-mail: karin@physics.utexas.edu; Sitte, Matthias

    Berry phases occur when a system adiabatically evolves along a closed curve in parameter space. This tutorial-like article focuses on Berry phases accumulated in real space. In particular, we consider the situation where an electron traverses a smooth magnetic structure, while its magnetic moment adjusts to the local magnetization direction. Mapping the adiabatic physics to an effective problem in terms of emergent fields reveals that certain magnetic textures, skyrmions, are tailor-made to study these Berry phase effects.

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. This new method also automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating surface measurement noise or measurement errors so that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation scheme is chosen, e.g. "nearest," "linear," "cubic," or "spline" fitting in Matlab. The conventional FFT-based spatial filtering method used to eliminate surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low-spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
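
    The Zernike part of the scheme can be sketched as an ordinary least-squares fit of a few low-order polynomials over the unit disk (the PSD-based mid/high-frequency part is omitted); the basis below uses one common normalization convention, and the data are synthetic.

      import numpy as np

      def zernike_basis(rho, theta):
          """First few Zernike polynomials (piston, tilts, defocus,
          astigmatism) on the unit disk -- enough for a low-order fit."""
          return np.column_stack([
              np.ones_like(rho),                        # Z0: piston
              2 * rho * np.cos(theta),                  # Z1: tilt x
              2 * rho * np.sin(theta),                  # Z2: tilt y
              np.sqrt(3) * (2 * rho**2 - 1),            # Z3: defocus
              np.sqrt(6) * rho**2 * np.cos(2 * theta),  # Z4: astigmatism
          ])

      # measured surface map on a coarse grid (synthetic: defocus + noise)
      n = 64
      y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
      rho, theta = np.hypot(x, y), np.arctan2(y, x)
      inside = rho <= 1.0
      surface = (np.sqrt(3) * (2 * rho**2 - 1)
                 + 0.01 * np.random.default_rng(0).normal(size=(n, n)))

      # a least-squares Zernike fit captures the low-frequency shape; the fit
      # can then be evaluated analytically on any new grid (no resampling)
      B = zernike_basis(rho[inside], theta[inside])
      coeffs, *_ = np.linalg.lstsq(B, surface[inside], rcond=None)
      print(coeffs.round(3))   # defocus coefficient ~1, others ~0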

  19. Butylated Hydroxyanisole Stimulates Heme Oxygenase-1 Gene Expression and Inhibits Neointima Formation in Rat Arteries

    PubMed Central

    Liu, Xiao-ming; Azam, Mohammed A.; Peyton, Kelly J.; Ensenat, Diana; Keswani, Amit N.; Wang, Hong; Durante, William

    2007-01-01

    Objective Butylated hydroxyanisole (BHA) is a synthetic phenolic compound that is a potent inducer of phase II genes. Since heme oxygenase-1 (HO-1) is a vasoprotective protein that is upregulated by phase II inducers, the present study examined the effects of BHA on HO-1 gene expression and vascular smooth muscle cell proliferation. Methods The regulation of HO-1 gene expression and vascular cell growth by BHA was studied in cultured rat aortic smooth muscle cells and in balloon injured rat carotid arteries. Results Treatment of cultured smooth muscle cells with BHA stimulated the expression of HO-1 protein, mRNA and promoter activity in a time- and concentration-dependent manner. BHA-mediated HO-1 expression was dependent on the activation of NF-E2-related factor-2 by p38 mitogen-activated protein kinase. BHA also inhibited cell cycle progression and DNA synthesis in a HO-1-dependent manner. In addition, the local perivascular delivery of BHA immediately after arterial injury of rat carotid arteries induced HO-1 protein expression and markedly attenuated neointima formation. Conclusions These studies demonstrate that BHA stimulates HO-1 gene expression in vascular smooth muscle cells, and that the induction of HO-1 contributes to the antiproliferative actions of this phenolic antioxidant. BHA represents a potentially novel therapeutic agent in treating or preventing vasculoproliferative disease. PMID:17320844

  20. The Laguerre finite difference one-way equation solver

    NASA Astrophysics Data System (ADS)

    Terekhov, Andrew V.

    2017-05-01

    This paper presents a new finite difference algorithm for solving the 2D one-way wave equation with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to existing approaches, the integral Laguerre transform is used instead of the Fourier transform. After carrying out the approximation over the spatial variables it is possible to obtain systems of linear algebraic equations with better computational properties and to reduce the computational cost of their solution. High accuracy is attained by employing higher-order finite difference approximations based on the dispersion-relationship-preserving method and Richardson extrapolation in the downward continuation direction. Numerical experiments have verified that, as compared to the spectral difference method based on the Fourier transform, the new algorithm allows one to calculate wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts, including for non-smooth velocity models. In the context of the geophysical problem, post-stack migration for velocity models of the Syncline and Sigsbee2A types has been carried out. It is shown that the images obtained contain less noise and are considerably better focused than those obtained by the well-known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches do not allow carrying out the seismic migration procedure with sufficient accuracy; however, the results obtained disprove this statement. For the supercomputer implementation it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.

  1. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  2. CAD-centric Computation Management System for a Virtual TBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase I activity.

  3. Comparison of method using phase-sensitive motion estimator with speckle tracking method and application to measurement of arterial wall motion

    NASA Astrophysics Data System (ADS)

    Miyajo, Akira; Hasegawa, Hideyuki

    2018-07-01

    At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires high-level interpolation of the function that evaluates the similarity between ultrasonic echo signals in two frames in order to estimate small subsample displacements in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was previously proposed by our group. In this study, we compared the accuracy of the speckle tracking method with that of our 2D motion estimator, and applied the proposed method to the measurement of the motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, significantly better than those (-0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
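
    The phase-sensitive idea, i.e. that a sub-sample 2-D shift shows up as a linear phase ramp in the cross-spectrum and therefore needs no similarity-surface interpolation, can be sketched as a least-squares fit of that ramp over low frequencies. This is only the generic idea, not the authors' exact estimator.

      import numpy as np

      def subsample_shift_2d(frame1, frame2, fmax=0.15):
          """Estimate a small 2-D displacement of frame2 relative to frame1
          from the phase of the cross-spectrum, fitted by least squares over
          low frequencies (valid for sub-sample shifts)."""
          F1, F2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
          FY, FX = np.meshgrid(np.fft.fftfreq(frame1.shape[0]),
                               np.fft.fftfreq(frame1.shape[1]), indexing="ij")
          mask = (np.abs(FX) < fmax) & (np.abs(FY) < fmax)
          phase = np.angle(F1 * np.conj(F2))[mask]   # = 2*pi*(fx*dx + fy*dy)
          A = 2 * np.pi * np.column_stack([FX[mask], FY[mask]])
          (dx, dy), *_ = np.linalg.lstsq(A, phase, rcond=None)
          return dx, dy

      # toy test: shift a random pattern by (0.30, -0.20) samples exactly,
      # by applying a linear phase ramp in the Fourier domain
      rng = np.random.default_rng(1)
      base = rng.normal(size=(64, 64))
      FY, FX = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
      shifted = np.fft.ifft2(np.fft.fft2(base)
                             * np.exp(-2j * np.pi * (0.30 * FX - 0.20 * FY))).real
      print(subsample_shift_2d(base, shifted))   # approximately (0.30, -0.20)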

  4. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests.

    PubMed

    Yorozu, Ayanori; Moriguchi, Toshiki; Takahashi, Masaki

    2015-09-04

    Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate elderly individuals' ambulatory ability in many clinical institutions and local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests such as the TUG. The system tracks both legs and measures the trajectory of each. However, the legs may be close to each other, and one leg may be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and to deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull-Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chance of false tracking. In addition, we verify the measurement accuracy of the leg trajectory against a three-dimensional motion analysis system (VICON).
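
    A uniform Catmull-Rom segment needs just the last two positions before and the first two after the occlusion; the sketch below fills a turning-phase gap with the standard basis, using made-up leg coordinates.

      import numpy as np

      def catmull_rom(p0, p1, p2, p3, t):
          """Uniform Catmull-Rom spline point between p1 and p2, t in [0, 1]."""
          t2, t3 = t * t, t * t * t
          return 0.5 * ((2 * p1) +
                        (-p0 + p2) * t +
                        (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                        (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

      # leg positions (x, y in metres) before and after an occlusion while turning
      p0, p1 = np.array([0.0, 0.0]), np.array([0.3, 0.1])
      p2, p3 = np.array([0.5, 0.4]), np.array([0.6, 0.8])
      # fill the gap between the last pre-occlusion (p1) and the first
      # post-occlusion (p2) observations
      for t in (0.25, 0.5, 0.75):
          print(t, catmull_rom(p0, p1, p2, p3, t))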

  5. On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians

    NASA Astrophysics Data System (ADS)

    Valverde, Clodoaldo; Baseia, Basílio

    2018-01-01

    We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.

  6. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
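
    The per-triangle lookup-and-interpolate step amounts to barycentric linear interpolation; the sketch below shows it for one triangle, with illustrative equation-of-state-style node values (unit conversion and table mixing are omitted).

      import numpy as np

      def triangle_interpolate(p, verts, vals):
          """Linear interpolation inside a triangle via barycentric
          coordinates.  verts: 3x2 array of vertices; vals: node values.
          Returns None if p lies outside the triangle (a full table
          implementation would move on to another triangle)."""
          a, b, c = np.asarray(verts, dtype=float)
          T = np.column_stack([b - a, c - a])
          l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
          l0 = 1.0 - l1 - l2
          if min(l0, l1, l2) < -1e-12:
              return None
          return l0 * vals[0] + l1 * vals[1] + l2 * vals[2]

      # e.g. pressure tabulated at the nodes of one triangle in (density, T) space
      verts = [(1.0, 300.0), (1.2, 300.0), (1.0, 400.0)]
      vals = np.array([101.0, 125.0, 135.0])
      print(triangle_interpolate((1.05, 330.0), verts, vals))   # 117.2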

  7. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. When the Newton formula is used, this convergence result holds only at the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a suitable order, and (2) the size of the interval is 4.
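
    The Newton form itself is compact enough to show directly: a triangular divided-difference scheme plus Horner evaluation, here at Chebyshev nodes on an interval of length 4 as the abstract highlights. The natural node order is used for brevity, although the abstract notes that a reordering is needed for stability in general.

      import numpy as np

      def divided_differences(x, y):
          """Newton divided-difference coefficients (in-place triangular scheme)."""
          c = np.array(y, dtype=float)
          for j in range(1, len(x)):
              c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
          return c

      def newton_eval(c, x_nodes, t):
          """Evaluate the Newton-form interpolant at t by Horner's rule."""
          p = c[-1]
          for k in range(len(c) - 2, -1, -1):
              p = p * (t - x_nodes[k]) + c[k]
          return p

      # Chebyshev nodes on [-2, 2] (interval length 4, as the abstract notes)
      n = 20
      k = np.arange(n)
      x = 2.0 * np.cos((2 * k + 1) * np.pi / (2 * n))
      y = np.exp(x)

      c = divided_differences(x, y)
      t = np.linspace(-2, 2, 5)
      print(np.abs(newton_eval(c, x, t) - np.exp(t)).max())   # tiny residual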

  8. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework.

  9. Liquid-liquid phase transformations and the shape of the melting curve.

    PubMed

    Makov, G; Yahel, E

    2011-05-28

    The phase diagram of elemental liquids has been found to be surprisingly rich, including variations in the melting curve and transitions in the liquid phase. The effect of these transitions in the liquid state on the shape of the melting curve is analyzed. First-order phase transitions intersecting the melting curve imply piecewise continuous melting curves, with solid-solid transitions generating upward kinks or minima and liquid-liquid transitions generating downward kinks or maxima. For the liquid-liquid phase transitions proposed for carbon, phosphorus, selenium, and possibly nitrogen, we find that the melting curve exhibits a kink. Continuous transitions imply smooth extrema in the melting curve, the curvature of which is described by an exact thermodynamic relation. This expression indicates that a minimum in the melting curve requires the solid compressibility to be greater than that of the liquid, a very unusual situation. The relation is employed to predict the loci of smooth maxima at negative pressures for liquids with anomalous melting curves. The relation between the location of the melting curve maximum and the two-state model of continuous liquid-liquid transitions is discussed and illustrated by the case of tellurium.

  10. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, IMRT quality assurance (QA) beams were generated by the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distances from 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps with the same resolution, the cubic spline and bicubic interpolations are almost equally the best interpolation methods, while nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
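
    An analogous experiment is easy to run in Python (the study used MATLAB): resample a coarse synthetic fluence map with nearest-neighbour, bilinear and cubic-spline orders and compare. The map below is illustrative only.

      import numpy as np
      from scipy.ndimage import zoom

      # synthetic coarse fluence map (e.g. 5 mm detector spacing)
      y, x = np.mgrid[0:16, 0:16]
      coarse = np.exp(-((x - 8)**2 + (y - 8)**2) / 20.0)

      # resample to a 5x finer grid with the interpolation orders compared
      # in the study: 0 = nearest neighbour, 1 = bilinear, 3 = cubic spline
      for order, name in [(0, "nearest"), (1, "bilinear"), (3, "cubic spline")]:
          fine = zoom(coarse, 5, order=order)
          print(name, fine.shape, f"max={fine.max():.4f}")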

  11. Clinical Feasibility of Free-Breathing Dynamic T1-Weighted Imaging With Gadoxetic Acid-Enhanced Liver Magnetic Resonance Imaging Using a Combination of Variable Density Sampling and Compressed Sensing.

    PubMed

    Yoon, Jeong Hee; Yu, Mi Hye; Chang, Won; Park, Jin-Young; Nickel, Marcel Dominik; Son, Yohan; Kiefer, Berthold; Lee, Jeong Min

    2017-10-01

    The purpose of the study was to investigate the clinical feasibility of free-breathing dynamic T1-weighted imaging (T1WI) using Cartesian sampling, compressed sensing, and iterative reconstruction in gadoxetic acid-enhanced liver magnetic resonance imaging (MRI). This retrospective study was approved by our institutional review board, and the requirement for informed consent was waived. A total of 51 patients at high risk of breath-holding failure underwent dynamic T1WI in a free-breathing manner using volumetric interpolated breath-hold (BH) examination with compressed sensing reconstruction (CS-VIBE) and hard gating. Timing, motion artifacts, and image quality were evaluated by 4 radiologists on a 4-point scale. For patients with low image quality scores (<3) on the late arterial phase, respiratory motion-resolved (extradimension [XD]) reconstruction was additionally performed and reviewed in the same manner. In addition, in 68.6% (35/51) patients who had previously undergone liver MRI, image quality and motion artifacts on dynamic phases using CS-VIBE were compared with previous BH-T1WIs. In all patients, adequate arterial-phase timing was obtained at least once. Overall image quality of free-breathing T1WI was 3.30 ± 0.59 on precontrast and 2.68 ± 0.70, 2.93 ± 0.65, and 3.30 ± 0.49 on early arterial, late arterial, and portal venous phases, respectively. In 13 patients with lower than average image quality (<3) on the late arterial phase, motion-resolved reconstructed T1WI (XD-reconstructed CS-VIBE) significantly reduced motion artifacts (P < 0.002-0.021) and improved image quality (P < 0.0001-0.002). In comparison with previous BH-T1WI, CS-VIBE with hard gating or XD reconstruction showed less motion artifacts and better image quality on precontrast, arterial, and portal venous phases (P < 0.0001-0.013). Volumetric interpolated breath-hold examination with compressed sensing has the potential to provide consistent, motion-corrected free-breathing dynamic T1WI for liver MRI in patients at high risk of breath-holding failure.

  12. Reconstruction of Arctic surface temperature in past 100 years using DINEOF

    NASA Astrophysics Data System (ADS)

    Zhang, Qiyi; Huang, Jianbin; Luo, Yong

    2015-04-01

    Global annual mean surface temperature has not risen appreciably since 1998, which has been described as a global warming hiatus in recent years. However, measuring temperature variability in the Arctic is difficult because of large gaps in the coverage of the Arctic region in most observed gridded datasets. Since the Arctic has experienced rapid temperature change in recent years, known as polar amplification, and the temperature rise in the Arctic is faster than the global mean, unobserved temperatures in the central Arctic result in a cold bias in both global and Arctic temperature measurements compared with model simulations and reanalysis datasets. Moreover, some datasets that have complete coverage of the Arctic but a short temporal extent cannot show Arctic temperature variability over long periods. The Data Interpolating Empirical Orthogonal Functions (DINEOF) method was applied to fill the coverage gap of NASA's Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP 250 km smoothing) product in the Arctic, using the IABP dataset, which covers the entire Arctic region between 1979 and 1998, and to reconstruct Arctic temperature over 1900-2012. This method provided a temperature reconstruction for the central Arctic and a precise estimation of both global and Arctic temperature variability over a long temporal scale. Results have been verified against additional independent station records in the Arctic by statistical analysis, such as variance and standard deviation. The reconstruction shows a significant warming trend in the Arctic over the recent 30 years: the temperature trend in the Arctic since 1997 is 0.76°C per decade, compared with 0.48°C and 0.67°C per decade from the 250 km and 1200 km smoothing versions of GISTEMP, and the global temperature trend is two times greater after using DINEOF. These discrepancies stress the importance of fully accounting for temperature variance in the Arctic, because gaps in Arctic coverage cause an apparent cold bias in temperature estimation. The global surface temperature result also shows that global warming in recent years is not as slow as previously thought.
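
    The core DINEOF loop, i.e. initialise the gaps, reconstruct with a truncated SVD, update the gaps and iterate, can be sketched briefly; real DINEOF additionally cross-validates the number of retained modes, which is skipped here.

      import numpy as np

      def dineof(field, n_modes=3, n_iter=100, tol=1e-6):
          """Minimal DINEOF-style gap filling: initialise gaps with the
          mean, then iterate truncated-SVD reconstruction until the
          filled values converge."""
          X = np.array(field, dtype=float)
          gaps = np.isnan(X)
          X[gaps] = np.nanmean(field)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(X, full_matrices=False)
              recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
              change = np.max(np.abs(recon[gaps] - X[gaps]))
              X[gaps] = recon[gaps]
              if change < tol:
                  break
          return X

      # toy space-time anomaly matrix (stations x years) with missing entries
      rng = np.random.default_rng(0)
      t = np.linspace(0, 4 * np.pi, 60)
      truth = (np.outer(np.linspace(0.5, 2.0, 20), np.sin(t))
               + 0.1 * rng.normal(size=(20, 60)))
      obs = truth.copy()
      obs[rng.random(obs.shape) < 0.3] = np.nan        # 30% gaps
      filled = dineof(obs, n_modes=2)
      print(np.sqrt(np.mean((filled - truth)**2)))     # small reconstruction RMSE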

  13. A 156 kyr smoothed history of the atmospheric greenhouse gases CO2, CH4, and N2O and their radiative forcing

    NASA Astrophysics Data System (ADS)

    Köhler, Peter; Nehrbass-Ahles, Christoph; Schmitt, Jochen; Stocker, Thomas F.; Fischer, Hubertus

    2017-06-01

    Continuous records of the atmospheric greenhouse gases (GHGs) CO2, CH4, and N2O are necessary input data for transient climate simulations, and their associated radiative forcing represents important components in analyses of climate sensitivity and feedbacks. Since the available data from ice cores are discontinuous and partly ambiguous, a well-documented decision process during data compilation followed by some interpolating post-processing is necessary to obtain those desired time series. Here, we document our best possible data compilation of published ice core records and recent measurements on firn air and atmospheric samples spanning the interval from the penultimate glacial maximum (~156 kyr BP) to the beginning of the year 2016 CE. We use the most recent age scales for the ice core data and apply a smoothing spline method to translate the discrete and irregularly spaced data points into continuous time series. These splines are then used to compute the radiative forcing for each GHG using well-established, simple formulations. We compile only a Southern Hemisphere record of CH4 and discuss how much larger a Northern Hemisphere or global CH4 record might have been due to its interpolar difference. The uncertainties of the individual data points are considered in the spline procedure. Based on the given data resolution, time-dependent cutoff periods of the spline, defining the degree of smoothing, are prescribed, ranging from 5000 years for the less resolved older parts of the records to 4 years for the densely sampled recent years. The computed splines seamlessly describe the GHG evolution on orbital and millennial timescales for glacial and glacial-interglacial variations and on centennial and decadal timescales for anthropogenic times. Data connected with this paper, including raw data and final splines, are available at doi:10.1594/PANGAEA.871273.
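
    A minimal sketch of the spline-plus-forcing pipeline, assuming a single smoothing factor instead of the paper's time-dependent cutoff periods and using the well-established simplified expression RF = 5.35 ln(C/C0) W m^-2 for CO2 (Myhre et al., 1998); the data points below are synthetic stand-ins for the compiled ice-core record.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # irregularly spaced ice-core style CO2 points (ppm) -- synthetic values,
      # ages in years relative to 1950 CE
      age = np.array([-150000, -120000, -90000, -60000, -30000, -15000,
                      -8000, -2000, -500, 0, 50, 66], dtype=float)
      co2 = np.array([195, 230, 210, 200, 190, 240,
                      260, 275, 282, 311, 326, 400], dtype=float)

      # smoothing spline; the smoothing factor s plays the role of the
      # cutoff period (a single value is used here for brevity)
      spline = UnivariateSpline(age, co2, k=3, s=200.0)
      grid = np.linspace(age.min(), age.max(), 500)
      co2_smooth = spline(grid)

      # simplified CO2 radiative forcing relative to a pre-industrial
      # reference concentration C0
      C0 = 278.0                                    # ppm
      rf = 5.35 * np.log(co2_smooth / C0)
      print(f"RF range: {rf.min():.2f} to {rf.max():.2f} W m^-2")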

  14. Buried Craters In Isidis Planitia, Mars

    NASA Astrophysics Data System (ADS)

    Seabrook, A. M.; Rothery, D. A.; Wallis, D.; Bridges, J. C.; Wright, I. P.

    We have produced a topographic map of Isidis Planitia, which includes the Beagle 2 landing site, using interpolated Mars Orbiter Laser Altimeter (MOLA) data from the Mars Global Surveyor (MGS) spacecraft currently orbiting Mars. MOLA data have a vertical precision of 37.5 cm, a footprint size of 130 m, an along-track shot spacing of 330 m, and an across-track spacing that is variable and may be several kilometres. This has revealed subtle topographic detail within the relatively smooth basin of Isidis Planitia. Analysis of this map shows apparent wrinkle ridges, which may be expressions of the volcanic basement of the basin, and also several circular depressions with diameters of several to tens of kilometres, which we interpret as buried impact craters, comparable to the so-called stealth craters recognised elsewhere in the northern lowlands of Mars [1]. Stealth craters are considered to be impact craters subjected to erosion and/or burial. Some of these features in Isidis have floors that are on the order of tens of metres lower than their rims and are very smooth, and so are often not visible in MGS Mars Orbiter Camera (MOC) or Viking images of the basin. The Isidis stealth craters are not restricted to the Hesperian Vastitas Borealis formations like those detected elsewhere in the northern lowlands by Kreslavsky and Head [1], but are also found in a younger Amazonian smooth plains unit. It is generally believed that Isidis Planitia has undergone one or more episodes of sediment deposition, so these buried craters most likely lie on an earlier surface, which could be the postulated volcanic basement of the basin. Analysis of the buried craters may give some understanding of the thickness, frequency and ages of sedimentation episodes within the basin. This information will be important in developing a context in which information from the Beagle 2 lander can be analysed when it arrives on Mars in December 2003. [1] Kreslavsky M. A. and Head J. W. (2001) LPS XXXII

  15. Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

    DTIC Science & Technology

    2008-02-01

    Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...(congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates

  16. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
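
    The full CDM pipeline is beyond the scope of an abstract, but the underlying decision, interpolating along the dominant edge direction for the diagonal pixels and isotropically otherwise, can be sketched in a drastically simplified form (this is a generic edge-directed rule, not the paper's method):

```python
import numpy as np

def interp_diagonal(img, tie_tol=1.0):
    """Stage-1-style estimate of the diagonal pixels in 2x upscaling:
    average along whichever diagonal (45 or 135 degrees) varies less,
    falling back to isotropic averaging when neither dominates."""
    a = img[:-1, :-1].astype(float)   # top-left neighbours
    b = img[:-1, 1:].astype(float)    # top-right
    c = img[1:, :-1].astype(float)    # bottom-left
    d = img[1:, 1:].astype(float)     # bottom-right
    d135 = np.abs(a - d)              # variation along the 135-degree diagonal
    d45 = np.abs(b - c)               # variation along the 45-degree diagonal
    along135 = (a + d) / 2.0
    along45 = (b + c) / 2.0
    iso = (a + b + c + d) / 4.0       # directionless fallback
    return np.where(d135 + tie_tol < d45, along135,
                    np.where(d45 + tie_tol < d135, along45, iso))
```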

  17. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window, with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4) while increasing the uncertainty only slightly (about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then employed as an example for verifying the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and short data requirements; the calculation results from the actual balance FRF data are consistent with the simulation. Thus, the new dual-cosine window is effective and practical for FRF estimation.
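
    The abstract does not give the window's coefficients; a generic cosine-sum window whose DFT is non-zero only at bins 0, ±1 and ±3, with hypothetical coefficients chosen here so that the front-end value w[0] is exactly zero, can be written as:

```python
import numpy as np

def dual_cosine_window(N, a0=0.375, a1=0.5, a3=0.125):
    """Cosine-sum window with DFT energy only at bins 0, +/-1, +/-3.
    Coefficients are illustrative; because a0 - a1 + a3 = 0, the
    front-end value w[0] is zero, which suppresses transient error."""
    n = np.arange(N)
    return a0 - a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(6 * np.pi * n / N)

w = dual_cosine_window(1024)
W = np.fft.rfft(w)
print(np.round(np.abs(W[:5]) / len(w), 4))   # non-zero only at bins 0, 1, 3
```

    Any such window can be rescaled to unit peak; the bin structure, not the normalization, is what distinguishes it from the Hanning window (bins 0, ±1 only).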

  18. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all distinct and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of distinct points in some R^d for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
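
    The basic problem studied here, solving the symmetric linear system built from φ(‖x_i − x_j‖), is compact enough to state in code; the multiquadric basis and shape parameter below are standard choices, not the dissertation's notation:

```python
import numpy as np

def rbf_interpolant(centers, values, c=0.5):
    """Multiquadric RBF interpolation: solve A @ lam = f with
    A[i, j] = sqrt(||x_i - x_j||^2 + c^2); nonsingular for distinct
    points. Large c pushes the interpolant toward sinc-like behaviour."""
    diff = centers[:, None, :] - centers[None, :, :]
    A = np.sqrt((diff ** 2).sum(-1) + c ** 2)
    lam = np.linalg.solve(A, values)

    def evaluate(x):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2 + c ** 2) @ lam
    return evaluate

# Distinct points in R^2: the interpolant reproduces the data at the sites
pts = np.random.default_rng(1).random((50, 2))
f = np.sin(pts[:, 0] * 6) * np.cos(pts[:, 1] * 6)
interp = rbf_interpolant(pts, f)
print(np.abs(interp(pts) - f).max())   # ~0 up to conditioning
```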

  19. Aerodynamics of a translating comb-like plate inspired by a fairyfly wing

    NASA Astrophysics Data System (ADS)

    Lee, Seung Hun; Kim, Daegyoum

    2017-08-01

    Unlike the smooth wings of common insects or birds, micro-scale insects such as the fairyfly have a distinctive wing geometry, comprising a frame with several bristles. Motivated by this peculiar wing geometry, we experimentally investigated the flow structure of a translating comb-like wing for a wide range of gap size, angle of attack, and Reynolds number, Re = O(10)-O(10³), and the correlation of these parameters with aerodynamic performance. The flow structures of a smooth plate without a gap and a comb-like plate are significantly different at high Reynolds number, while little difference was observed at the low Reynolds number of O(10). At low Reynolds number, shear layers that were generated at the edges of the tooth of the comb-like plate strongly diffuse and eventually block a gap. This gap blockage increases the effective surface area of the plate and alters the formation of leading-edge and trailing-edge vortices. As a result, the comb-like plate generates larger aerodynamic force per unit area than the smooth plate. In addition to a quasi-steady phase after the comb-like plate travels several chords, we also studied a starting phase of the shear layer development when the comb-like plate begins to translate from rest. While a plate with small gap size can generate aerodynamic force at the starting phase as effectively as at the quasi-steady phase, the aerodynamic force drops noticeably for a plate with a large gap because the diffusion of the developing shear layers is not enough to block the gap.

  20. Optoelectronic imaging of speckle using image processing method

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing procedure for laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods are combined to handle the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be applied in tire inspection.

  1. Spectral Re-Growth Reduction for CCSDS 8-D 8-PSK TCM

    NASA Technical Reports Server (NTRS)

    Borah, Deva K.

    2002-01-01

    This report presents a study on the CCSDS recommended 8-dimensional 8 PSK Trellis Coded Modulation (TCM) scheme. The important steps of the CCSDS scheme include: conversion of serial data into parallel form, differential encoding, convolutional encoding, constellation mapping, and filtering the 8-PSK symbols using the square root raised cosine (SRRC) pulses. The last step, namely the filtering of the 8 PSK symbols using SRRC pulses, significantly affects the bandwidth of the signal. If a nonlinear power amplifier is used, the SRRC filtered signal creates spectral regrowth. The purpose of this report is to investigate a technique, called the smooth phase interpolated keying (SPIK), that can provide an alternative to SRRC filtering so that good spectral as well as power efficiencies can be obtained with the CCSDS encoder. The results of this study show that the CCSDS encoder does not affect the spectral shape of the SRRC filtered signal or the SPIK signal. When a nonlinear traveling wave tube amplifier (TWTA) is used, the spectral performance of the SRRC signal degrades significantly while the spectral performance of SPIK remains unaffected. The degrading effect of a nonlinear solid state power amplifier (SSPA) on SRRC is found to be less than that due to a nonlinear TWTA. However, in both cases, the spectral performance of the SRRC modulated signal is worse than that of the SPIK signal. The bit error rate (BER) performance of the SRRC signal in a linear amplifier environment is about 2.5 dB better than that of the SPIK signal when both the receivers use algorithms of similar complexity. In a nonlinear TWTA environment, the SRRC signal requires accurate phase tracking since the TWTA introduces additional phase distortion. This problem does not arise with SPIK signal due to its constant envelope property. When a nonlinear amplifier is used, the SRRC method loses nearly 1 dB in the bit error rate performance. The SPIK signal does not lose any performance. Thus the performance gap between SRRC and SPIK reduces. The BER performance of SPIK can be improved even further by using a more optimal receiver. A similar optimal receiver for SRRC is quite complex since the amplifier distorts the pulse shape. However, this requires further investigation and is not covered in this report.
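
    The CCSDS SPIK waveform is defined precisely in the report; purely to illustrate the abstract's key point, that a smoothly interpolated phase trajectory yields a constant-envelope signal unaffected by a memoryless amplifier nonlinearity, a generic (non-CCSDS) sketch with an arbitrary choice of smoothing interpolant is:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
sym = rng.integers(0, 8, 64)                 # 8-PSK symbol indices (illustrative)
phase = np.unwrap(2 * np.pi * sym / 8)       # target phases at the symbol instants
t_sym = np.arange(sym.size)
t = np.linspace(0, sym.size - 1, sym.size * 16)

phi = CubicSpline(t_sym, phase)(t)           # smooth phase trajectory between symbols
s = np.exp(1j * phi)                         # phase-modulated signal

# Constant envelope: an AM/AM nonlinearity cannot reshape the spectrum
print(np.allclose(np.abs(s), 1.0))           # True
```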

  2. Chloride channel blockers promote relaxation of TEA-induced contraction in airway smooth muscle.

    PubMed

    Yim, Peter D; Gallos, George; Perez-Zoghbi, Jose F; Trice, Jacquelyn; Zhang, Yi; Siviski, Matthew; Sonett, Joshua; Emala, Charles W

    2013-01-01

    Enhanced airway smooth muscle (ASM) contraction is an important component in the pathophysiology of asthma. We have shown that ligand-gated chloride channels modulate ASM contractile tone during the maintenance phase of an induced contraction; however, the role of chloride flux in depolarization-induced contraction remains incompletely understood. To better understand the role of chloride flux under these conditions, muscle force (human ASM, guinea pig ASM), peripheral small airway luminal area (rat ASM) and airway smooth muscle plasma membrane electrical potentials (cultured human ASM) were measured. We found that ex vivo guinea pig airway rings, human ASM strips and small peripheral airways in rat lung slices relaxed in response to niflumic acid following depolarization-induced contraction induced by K(+) channel blockade with tetraethylammonium chloride (TEA). In isolated human airway smooth muscle cells, TEA induced depolarization, as measured by a fluorescent indicator or whole-cell patch clamp, and this depolarization was reversed by niflumic acid. These findings demonstrate that ASM depolarization-induced contraction is dependent on chloride channel activity. Targeting of chloride channels may be a novel approach to relax hypercontractile airway smooth muscle in bronchoconstrictive disorders.

  3. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexahedral finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which distance to the nearest surface is computed has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
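
    Transfinite interpolation of boundary deflections into an interior block is standard; a 2-D sketch of the Coons-patch blending formula used for this kind of redistribution (function and array names are hypothetical, and matching corner values are assumed) is:

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """Coons-patch transfinite interpolation: blend four boundary curves,
    bottom/top of shape (n, 2) and left/right of shape (m, 2), into an
    (m, n, 2) interior field. Assumes the corner points agree."""
    m, n = left.shape[0], bottom.shape[0]
    u = np.linspace(0.0, 1.0, n)[None, :, None]
    v = np.linspace(0.0, 1.0, m)[:, None, None]
    return ((1 - v) * bottom[None, :, :] + v * top[None, :, :]
            + (1 - u) * left[:, None, :] + u * right[:, None, :]
            - ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1]))
```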

  4. The P-factor and atomic mass systematics: Application to medium mass nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brenner, D.S.; Haustein, P.E.; Casten, R.F.

    1988-01-01

    The P formalism was applied to atomic mass systematics for medium and heavy nuclei. The P-factor linearizes the structure-dependent part of the nuclear mass in those regions which are free from subshell effects, indicating that the attractive quadrupole p-n force plays an important role in determining the binding of valence nucleons. Where marked non-linearities occur, the P-factor provides a means for recognizing subshell closures and/or other structural features not embodied in the simple assumptions of abrupt shell or subshell changes. These are thought to be regions where the monopole part of the p-n interaction is highly orbit dependent and alters the underlying single-particle structure as a function of A, N or Z. Finally, in those regions where the systematics are smooth and subshells are absent, the P-factor provides a means for predicting masses of some nuclei far-from-stability by interpolation rather than by extrapolation. 5 figs.

  5. Canny edge-based deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-01

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.

  6. An interactive grid generation procedure for axial and radial flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Beach, Timothy A.

    1989-01-01

    A combination algebraic/elliptic technique is presented for the generation of three-dimensional grids about turbomachinery blade rows for both axial and radial flow machinery. The technique is built around use of an advanced engineering workstation to construct several two-dimensional grids interactively on predetermined blade-to-blade surfaces. A three-dimensional grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade to control orthogonality within the boundary layer region and elliptic techniques in the mid-passage to achieve smoothness. The interactive definition of Bézier curves as internal boundaries is the key to simple construction. This procedure lends itself well to zonal grid construction, an important example being the tip clearance region. Calculations done to date include a space shuttle main engine turbopump blade, a radial inflow turbine blade, and the first stator of the United Technologies Research Center large-scale rotating rig. A Navier-Stokes flow solver was used in each case.

  7. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  8. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2013-11-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  9. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 3: Advanced Fan Section Grid Generator Final Report and Computer Program User's Manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1991-01-01

    A procedure is studied for generating three-dimensional grids for advanced turbofan engine fan section geometries. The procedure constructs a discrete mesh about engine sections containing the fan stage, an arbitrary number of axisymmetric radial flow splitters, a booster stage, and a bifurcated core/bypass flow duct with guide vanes. The mesh is an h-type grid system, the points being distributed with a transfinite interpolation scheme with axial and radial spacing being user specified. Elliptic smoothing of the grid in the meridional plane is a post-process option. The grid generation scheme is consistent with aerodynamic analyses utilizing the average-passage equation system developed by Dr. John Adamczyk of NASA Lewis. This flow solution scheme requires a series of blade specific grids each having a common axisymmetric mesh, but varying in the circumferential direction according to the geometry of the specific blade row.

  10. Effect of link oriented self-healing on resilience of networks

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    Many real, complex systems, such as the human brain and skin with their biological networks or intelligent material systems consisting of composite functional liquids, exhibit a noticeable capability of self-healing. Here, we study a network model with arbitrary degree distributions possessing natural link-oriented recovery mechanisms, whereby a failed link can be recovered if its two end nodes maintain a sufficient proportion of functional links. These mechanisms are pertinent to many spontaneous healing and manual repair phenomena, interpolating smoothly between the complete-healing and no-healing scenarios. We show that the self-healing strategies have a profound impact on the resilience of homogeneous and heterogeneous networks, as measured by the percolation threshold, the fraction of nodes in the giant cluster, and a link robustness index. The self-healing effect induces distinct resilience characteristics for scale-free networks under random failures and intentional attacks, and a resilience crossover is observed at a certain level of self-healing. Our work highlights the significance of understanding the competition between healing and collapse in the resilience of complex networks.
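
    The stated recovery rule, restoring a failed link when both end nodes keep a sufficient fraction of their original links, can be prototyped on any graph; the threshold, graph model and damage fraction below are arbitrary choices, not the paper's parameters:

```python
import random
import networkx as nx

def heal(G0, removed, theta=0.5):
    """Restore failed links whose two end nodes both retain at least a
    fraction `theta` of their original degree; iterate to a fixed point."""
    G = G0.copy()
    G.remove_edges_from(removed)
    pending = set(removed)
    changed = True
    while changed:
        changed = False
        for u, v in list(pending):
            if (G.degree(u) >= theta * G0.degree(u)
                    and G.degree(v) >= theta * G0.degree(v)):
                G.add_edge(u, v)          # link recovered
                pending.discard((u, v))
                changed = True
    return G

G0 = nx.gnp_random_graph(1000, 0.01, seed=3)
removed = random.Random(3).sample(list(G0.edges()),
                                  k=int(0.6 * G0.number_of_edges()))
G = heal(G0, removed, theta=0.5)
giant = max(nx.connected_components(G), key=len)
print(len(giant) / G0.number_of_nodes())  # giant-cluster fraction after healing
```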

  11. Use of MAGSAT anomaly data for crustal structure and mineral resources in the US Midcontinent

    NASA Technical Reports Server (NTRS)

    Carmichael, R. S. (Principal Investigator)

    1981-01-01

    The analysis and preliminary interpretation of investigator-B MAGSAT data are addressed. The data processing included: (1) removal of spurious data points; (2) statistical smoothing along individual data tracks, to reduce the effect of geomagnetic transient disturbances; (3) comparison of data profiles spatially coincident in track location but acquired at different times; (4) reduction of data by weighted averaging to a grid with 1° × 1° latitude/longitude spacing, and with elevations interpolated and weighted to a common datum of 400 km; (5) wavelength filtering; and (6) reduction of the anomaly map to the magnetic pole. Agreement was found between a magnitude data anomaly map and a reduced-to-the-pole map, supporting the general assumption that, on a large scale (long wavelength), it is induced crustal magnetization which is responsible for major anomalies. Anomalous features are identified and explanations are suggested with regard to crustal structure, petrologic characteristics, and Curie temperature isotherms.

  12. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  13. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R² = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
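
    The gridding step described, IDW with the local support of the five closest neighbours, is straightforward to reproduce; a sketch with SciPy follows (the power parameter of 2 is an assumption, not stated in the abstract):

```python
import numpy as np
from scipy.spatial import cKDTree

def idw(xy_known, z_known, xy_query, k=5, power=2.0):
    """Inverse distance weighting with the local support of the k
    closest neighbours, as used for infilling DEM gaps."""
    dist, idx = cKDTree(xy_known).query(xy_query, k=k)
    dist = np.maximum(dist, 1e-12)          # guard against zero distance
    w = 1.0 / dist ** power
    return (w * z_known[idx]).sum(axis=1) / w.sum(axis=1)
```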

  14. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. The results of the laboratories with advanced angle metrology capabilities are presented which were acquired by the use of four different high precision angle encoders/interpolators/rotary tables. State of the art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5 in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  15. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
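
    The exact implementation is in the freely available source code; the priority rule the abstract describes (fill the points with the most indexed neighbours first, excluding low band-contrast areas) reduces to something like this simplified grid version, with all thresholds illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # 8-neighbourhood

def fill_ebsd(values, indexed, bc, bc_min=0.3, min_neighbors=3):
    """Iteratively fill non-indexed points, most-neighboured first, only
    where band contrast (bc) is high enough. Simplified sketch: real EBSD
    data carry orientations, not scalars, and BC thresholds are relaxed
    over further rounds."""
    values, indexed = values.copy(), indexed.copy()
    for need in range(8, min_neighbors - 1, -1):   # require many neighbours first
        while True:
            counts = convolve(indexed.astype(float), KERNEL, mode='constant')
            sums = convolve(np.where(indexed, values, 0.0), KERNEL, mode='constant')
            target = (~indexed) & (bc >= bc_min) & (counts >= need)
            if not target.any():
                break
            values[target] = sums[target] / counts[target]  # neighbour mean
            indexed[target] = True
    return values, indexed
```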

  16. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  17. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
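
    The recommended metric, the Fourier transform of the interpolation kernel, is easy to compute for the simplest member of each family: the nearest-neighbour rectangle transforms to a sinc, the linear-interpolation triangle to sinc², and the cubic B-spline to sinc⁴ (np.sinc is the normalised sinc):

```python
import numpy as np

f = np.linspace(0.0, 3.0, 601)       # frequency in units of the sample rate
H_nearest = np.abs(np.sinc(f))       # rectangle kernel  -> |sinc|
H_linear = np.sinc(f) ** 2           # triangle kernel   -> sinc^2
H_bspline3 = np.sinc(f) ** 4         # cubic B-spline    -> sinc^4

# Rejection beyond the Nyquist band (f > 1/2): higher-order kernels fall
# off much faster, which is why the B-spline family performs so well.
for name, H in [('nearest', H_nearest), ('linear', H_linear),
                ('cubic B-spline', H_bspline3)]:
    print(name, f'max |H| for f > 1/2: {np.abs(H[f > 0.5]).max():.3f}')
```

    The B-spline kernel is not itself interpolating, which is exactly the pre-filtering cost the abstract mentions: the samples must first be converted to B-spline coefficients.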

  18. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

    This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for INEQR quantum images is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm in the form of circuits of the NNV interpolation for INEQR is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation experiments involving different classical (i.e., conventional or non-quantum) images and scaling ratios are carried out with MATLAB 2014b on a classical computer, demonstrating that the proposed interpolation method achieves higher performance in terms of resolution than nearest neighbor and bilinear interpolation.

  19. Producing a scale-invariant spectrum of perturbations in a Hagedorn phase of string cosmology.

    PubMed

    Nayeri, Ali; Brandenberger, Robert H; Vafa, Cumrun

    2006-07-14

    We study the generation of cosmological perturbations during the Hagedorn phase of string gas cosmology. Using tools of string thermodynamics we provide indications that it may be possible to obtain a nearly scale-invariant spectrum of cosmological fluctuations on scales which are of cosmological interest today. In our cosmological scenario, the early Hagedorn phase of string gas cosmology goes over smoothly into the radiation-dominated phase of standard cosmology, without having a period of cosmological inflation.

  20. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates owing to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, with the relative weights obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ in Visual Studio 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
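
    The transformation described, resampling a polar pixel lattice onto a Cartesian display grid with sub-pixel (bilinear) interpolation, can be prototyped as below; the rings-by-spokes array layout is an assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_to_cartesian(polar_img, out_size=512):
    """polar_img[r, theta]: rings x spokes. Resample onto a Cartesian grid;
    order=1 gives bilinear (sub-pixel) interpolation."""
    n_r, n_t = polar_img.shape
    # Duplicate the first spoke to close the 2*pi seam for interpolation
    padded = np.concatenate([polar_img, polar_img[:, :1]], axis=1)
    y, x = np.mgrid[0:out_size, 0:out_size] - (out_size - 1) / 2.0
    r = np.hypot(x, y) * (n_r - 1) / ((out_size - 1) / 2.0)
    theta = np.mod(np.arctan2(y, x), 2 * np.pi) / (2 * np.pi) * n_t
    return map_coordinates(padded, [r, theta], order=1,
                           mode='constant', cval=0.0)  # corners -> background
```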

  1. Evaluation of an auditory model for echo delay accuracy in wideband biosonar.

    PubMed

    Sanderson, Mark I; Neretti, Nicola; Intrator, Nathan; Simmons, James A

    2003-09-01

    In a psychophysical task with echoes that jitter in delay, big brown bats can detect changes as small as 10-20 ns at an echo signal-to-noise ratio of approximately 49 dB and 40 ns at approximately 36 dB. This performance is possible to achieve with ideal coherent processing of the wideband echoes, but it is widely assumed that the bat's peripheral auditory system is incapable of encoding signal waveforms to represent delay with the requisite precision or phase at ultrasonic frequencies. This assumption was examined by modeling inner-ear transduction with a bank of parallel bandpass filters followed by low-pass smoothing. Several versions of the filterbank model were tested to learn how the smoothing filters, which are the most critical parameter for controlling the coherence of the representation, affect replication of the bat's performance. When tested at a signal-to-noise ratio of 36 dB, the model achieved a delay acuity of 83 ns using a second-order smoothing filter with a cutoff frequency of 8 kHz. The same model achieved a delay acuity of 17 ns when tested with a signal-to-noise ratio of 50 dB. Jitter detection thresholds were an order of magnitude worse than the bat for fifth-order smoothing or for lower cutoff frequencies. Most surprising is that effectively coherent reception is possible with filter cutoff frequencies well below any of the ultrasonic frequencies contained in the bat's sonar sounds. The results suggest that only a modest rise in the frequency response of smoothing in the bat's inner ear can confer full phase sensitivity on subsequent processing and account for the bat's fine acuity of delay.
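
    The model class tested, a bank of bandpass filters whose rectified outputs are low-pass smoothed, is easy to set up; the second-order 8 kHz smoothing stage follows the abstract, while the sampling rate, channel centres and bandwidths below are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 500_000  # Hz, assumed; fast enough for big brown bat calls (~25-100 kHz)

def filterbank_envelopes(signal, centers_hz, smooth_hz=8000, smooth_order=2):
    """Bandpass -> half-wave rectify -> low-pass smooth, per channel.
    The smoothing cutoff and order control how much phase coherence
    survives into the envelope representation."""
    lp = butter(smooth_order, smooth_hz, btype='lowpass', fs=fs, output='sos')
    out = []
    for fc in centers_hz:
        band = butter(4, [0.8 * fc, 1.2 * fc], btype='bandpass',
                      fs=fs, output='sos')
        y = sosfilt(band, signal)
        out.append(sosfilt(lp, np.maximum(y, 0.0)))  # rectified, smoothed
    return np.array(out)
```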

  2. Smooth muscle contraction: mechanochemical formulation for homogeneous finite strains.

    PubMed

    Stålhand, J; Klarbring, A; Holzapfel, G A

    2008-01-01

    Chemical kinetics of smooth muscle contraction affect mechanical properties of organs that function under finite strains. In an effort to gain further insight into organ physiology, we formulate a mechanochemical finite strain model by considering the interaction between mechanical and biochemical components of cell function during activation. We propose a new constitutive framework and use a mechanochemical device that consists of two parallel elements: (i) a spring for the cell stiffness; (ii) a contractile element for the sarcomere. We use a multiplicative decomposition of cell elongation into filament contraction and cross-bridge deformation, and suggest that the free energy be a function of stretches, four variables (free unphosphorylated myosin, phosphorylated cross-bridges, phosphorylated and dephosphorylated cross-bridges attached to actin), a chemical state variable driven by Ca2+ concentration, and temperature. The derived constitutive laws are thermodynamically consistent. Assuming isothermal conditions, we specialize the mechanical phase such that we recover the linear model of Yang et al. [2003a. The myogenic response in isolated rat cerebrovascular arteries: smooth muscle cell. Med. Eng. Phys. 25, 691-709]. The chemical phase is also specialized so that the linearized chemical evolution law leads to the four-state model of Hai and Murphy [1988. Cross-bridge phosphorylation and regulation of latch state in smooth muscle. Am. J. Physiol. 254, C99-C106]. One numerical example shows typical mechanochemical effects and the efficiency of the proposed approach. We discuss related parameter identification, and illustrate the dependence of muscle contraction (Ca2+ concentration) on active stress and related stretch. Mechanochemical models of this kind serve as the mathematical basis for analyzing coupled processes such as the dependency of tissue properties on the chemical kinetics of smooth muscle.

  3. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.

    PubMed

    Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu

    2014-10-01

    Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested for 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive rate (TP) of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on Intel Core 2.66 GHz CPU and 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.
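
    Most of this pipeline maps directly onto OpenCV primitives; the sketch below substitutes cv2.grabCut (also a graph-cuts formulation, initialised here from a rectangle rather than the paper's automatically generated seeds) for the original graph-cuts step, and assumes an 8-bit grayscale ROI:

```python
import cv2
import numpy as np

def segment_roi(gray_roi):
    """Shrink -> Gaussian smooth -> histogram equalisation -> mean shift ->
    graph cuts -> expand -> morphological open/close. Sketch only."""
    small = cv2.resize(gray_roi, None, fx=0.5, fy=0.5,
                       interpolation=cv2.INTER_CUBIC)      # bicubic shrink
    small = cv2.GaussianBlur(small, (5, 5), 0)
    small = cv2.equalizeHist(small)
    color = cv2.cvtColor(small, cv2.COLOR_GRAY2BGR)        # mean shift needs 3 channels
    color = cv2.pyrMeanShiftFiltering(color, sp=10, sr=20)

    mask = np.zeros(color.shape[:2], np.uint8)
    rect = (5, 5, color.shape[1] - 10, color.shape[0] - 10)  # stand-in for auto seeds
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(color, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)

    binary = cv2.resize(binary, gray_roi.shape[::-1],
                        interpolation=cv2.INTER_CUBIC)     # bicubic expand
    binary = np.where(binary > 127, 255, 0).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```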

  4. Spatio-Temporal Patterns of Barmah Forest Virus Disease in Queensland, Australia

    PubMed Central

    Naish, Suchithra; Hu, Wenbiao; Mengersen, Kerrie; Tong, Shilu

    2011-01-01

    Background Barmah Forest virus (BFV) disease is a common and wide-spread mosquito-borne disease in Australia. This study investigated the spatio-temporal patterns of BFV disease in Queensland, Australia using geographical information system (GIS) tools and geostatistical analysis. Methods/Principal Findings We calculated the incidence rates and standardised incidence rates of BFV disease. Moran's I statistic was used to assess the spatial autocorrelation of BFV incidences. Spatial dynamics of BFV disease was examined using semi-variogram analysis. Interpolation techniques were applied to visualise and display the spatial distribution of BFV disease in statistical local areas (SLAs) throughout Queensland. Mapping of BFV disease by SLAs reveals the presence of substantial spatio-temporal variation over time. Statistically significant differences in BFV incidence rates were identified among age groups (χ² = 7587, df = 7327, p < 0.01). There was a significant positive spatial autocorrelation of BFV incidence for all four periods, with the Moran's I statistic ranging from 0.1506 to 0.2901 (p < 0.01). Semi-variogram analysis and smoothed maps created from interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. Conclusions/Significance This is the first study to examine spatial and temporal variation in the incidence rates of BFV disease across Queensland using GIS and geostatistics. The BFV transmission varied with age and gender, which may be due to exposure rates or behavioural risk factors. There are differences in the spatio-temporal patterns of BFV disease which may be related to local socio-ecological and environmental factors. These research findings may have implications in the BFV disease control and prevention programs in Queensland. PMID:22022430

  5. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating the human perception on facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements and its simulation represents plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception on facial disfigurement.

  6. Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b

    NASA Astrophysics Data System (ADS)

    Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake

    2017-10-01

    Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm and twice with the 4.5 µm channels of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  7. Merging of rain gauge and radar data for urban hydrological modelling

    NASA Astrophysics Data System (ADS)

    Berndt, Christian; Haberlandt, Uwe

    2015-04-01

    Urban hydrological processes are generally characterised by short response times and therefore rainfall data with a high resolution in space and time are required for their modelling. In many smaller towns, no recordings of rainfall data exist within the urban catchment. Precipitation radar helps to provide extensive rainfall data with a temporal resolution of five minutes, but the rainfall amounts can be highly biased and hence the data should not be used directly as a model input. However, scientists proposed several methods for adjusting radar data to station measurements. This work tries to evaluate rainfall inputs for a hydrological model regarding the following two different applications: Dimensioning of urban drainage systems and analysis of single event flow. The input data used for this analysis can be divided into two groups: Methods, which rely on station data only (Nearest Neighbour Interpolation, Ordinary Kriging), and methods, which incorporate station as well as radar information (Conditional Merging, Bias correction of radar data based on quantile mapping with rain gauge recordings). Additionally, rainfall intensities that were directly obtained from radar reflectivities are used. A model of the urban catchment of the city of Brunswick (Lower Saxony, Germany) is utilised for the evaluation. First results show that radar data cannot help with the dimensioning task of sewer systems since rainfall amounts of convective events are often overestimated. Gauges in catchment proximity can provide more reliable rainfall extremes. Whether radar data can be helpful to simulate single event flow depends strongly on the data quality and thus on the selected event. Ordinary Kriging is often not suitable for the interpolation of rainfall data in urban hydrology. This technique induces a strong smoothing of rainfall fields and therefore a severe underestimation of rainfall intensities for convective events.

  8. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  9. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
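
    SciPy's ndimage module implements the same general machinery of a B-spline transform realised as a recursive prefilter; a sketch of sub-voxel sampling with the prefilter applied once up front (the volume and coordinates are placeholders):

```python
import numpy as np
from scipy.ndimage import spline_filter, map_coordinates

vol = np.random.default_rng(4).random((64, 64, 64))   # placeholder volume

# Recursive prefilter: converts voxel values to cubic B-spline coefficients
coeffs = spline_filter(vol, order=3)

# Sub-voxel sample positions (z, y, x), e.g. points of a deformed subset
coords = np.array([[31.25], [40.75], [12.5]])
val = map_coordinates(coeffs, coords, order=3, prefilter=False)
```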

  10. Fabrication and Characterization of Polyvinylidene Fluoride Microfilms for Microfluidic Applications

    NASA Astrophysics Data System (ADS)

    Rao, Yammani Venkat Subba; Raghavan, Aravinda Narayanan; Viswanathan, Meenakshi

    2016-10-01

    The ability to create patterns of piezo-responsive material on smooth substrates is an important route to efficient microfluidic mixers. This paper reports the fabrication of polyvinylidene fluoride (PVDF) microfilms by spin-coating on smooth glass surfaces. The crystalline phases, surface morphology, and microstructural properties of the PVDF films have been investigated. We found that films with an average thickness of 10 μm had an average roughness of 0.13 μm. These PVDF films are useful in microfluidic mixer applications.

  11. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data are a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours, but higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution for generating wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database, taken at 227 data points over the Aegean Sea between 1979 and 2010 with a spatial resolution of approximately 0.3 degrees, are analyzed. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty introduced by the interpolation technique. The nine statistical interpolation techniques are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are downsampled to 6 hours (i.e., the wind speeds at the 0th, 6th, 12th and 18th hours of each day are retained), the 6-hourly data are then temporally downscaled to hourly data (i.e., the wind speeds at each hour within the intervals are computed) using each of the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and the prediction skill is used to rank the nine interpolation techniques by performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and the relevance of the data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance in temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under a Turkish-Russian joint research grant with RFBR and by the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 programme.
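
    The evaluation loop of the study can be sketched in a few lines of Python; the hourly series here is synthetic (in place of the CFSR or station data) and only three of the nine techniques are shown:

        import numpy as np
        from scipy.interpolate import CubicSpline, PchipInterpolator

        rng = np.random.default_rng(42)
        t = np.arange(24 * 30)                        # 30 days, hourly steps
        wind = 6 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

        t6, w6 = t[::6], wind[::6]                    # keep hours 0, 6, 12, 18

        # Downscale the 6-hourly series back to hourly and score each
        # method against the original series.
        methods = {
            "linear": lambda: np.interp(t, t6, w6),
            "cubic spline": lambda: CubicSpline(t6, w6)(t),
            "pchip": lambda: PchipInterpolator(t6, w6)(t),
        }
        for name, f in methods.items():
            rmse = np.sqrt(np.mean((f() - wind) ** 2))
            print(f"{name:>12s}: RMSE = {rmse:.3f} m/s")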

  12. High friction on ice provided by elastomeric fiber composites with textured surfaces

    NASA Astrophysics Data System (ADS)

    Rizvi, R.; Naguib, H.; Fernie, G.; Dutta, T.

    2015-03-01

    Two main applications requiring high friction on ice are automobile tires and footwear. The main motivation for using soft rubbers in these applications is the relatively high friction force generated between smooth rubber and smooth ice. Unfortunately, the friction force between rubber and ice is very low at temperatures near the melting point of ice, and as a result we still experience automobile accidents and pedestrian slips and falls in the winter. Here, we report on a class of compliant fiber-composite materials with textured surfaces that provide outstanding coefficients of friction on wet ice. The fibrous composites consist of a hard glass-fiber phase reinforcing a compliant thermoplastic polyurethane matrix. The glass-fiber phase is textured such that it is aligned transversally and protrudes from the elastomer surface. Our analysis indicates that the exposed fiber phase exhibits a "micro-cleat" effect, allowing it to fracture the ice and increase the interfacial contact area, thereby requiring a high force to shear the interface.

  13. An investigation of the rapid spectral variability of the Ap stars 53 Cam, 41 Tau, and Beta CrB on the basis of the K Ca II and H-delta lines

    NASA Astrophysics Data System (ADS)

    Kuvshinov, V. M.; Plachinda, S. I.

    The flux is shown to vary with the phase of the period of axial rotation. For 53 Cam and Beta CrB, this variability is smooth and is well correlated with the intensity of the mean surface magnetic field. It is pointed out that in the case of 53 Cam, an analogous dependence between the equivalent width of the K Ca II line and the phase of the period of rotation was obtained by Faraggiana (1973). It is considered significant that the correlations between the flux in the K line and the intensity of the mean surface magnetic field for 53 Cam and Beta CrB have the same sign. With 41 Tau, as with the effective magnetic field, no smooth relationship is found between the fluxes in the K Ca II and H-delta lines and the phase of the period of rotation.

  14. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either a directionally weighted linear or an adaptive cubic-spline interpolation filter is then selectively applied, according to the refined edge orientation, to remove jagged artifacts in slanted edge regions. A novel image restoration algorithm is also presented for removing the blurring artifacts caused by the linear or cubic-spline interpolation, using a directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable-filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with a fast, FIR-filter-based computational structure.
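
    As a toy illustration of the directional idea (not the paper's steerable-filter algorithm), the classic edge-directed choice between the two diagonals when doubling an image can be written as:

        import numpy as np

        def upscale2x(img):
            # Known pixels go to the even coordinates of the enlarged grid.
            img = img.astype(float)
            H, W = img.shape
            out = np.zeros((2 * H - 1, 2 * W - 1))
            out[::2, ::2] = img
            # Diagonal sites: interpolate along, not across, the local edge
            # by picking the diagonal with the smaller intensity difference.
            d45 = np.abs(img[:-1, 1:] - img[1:, :-1])    # '/' diagonal
            d135 = np.abs(img[:-1, :-1] - img[1:, 1:])   # '\' diagonal
            avg45 = 0.5 * (img[:-1, 1:] + img[1:, :-1])
            avg135 = 0.5 * (img[:-1, :-1] + img[1:, 1:])
            out[1::2, 1::2] = np.where(d45 < d135, avg45, avg135)
            # Row/column sites: plain averages of the two known neighbours
            # (a full scheme would steer these directionally as well).
            out[::2, 1::2] = 0.5 * (img[:, :-1] + img[:, 1:])
            out[1::2, ::2] = 0.5 * (img[:-1, :] + img[1:, :])
            return out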

  15. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; a quantum version, however, has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then, concrete quantum circuits of bilinear interpolation for NEQR, for both scaling up and scaling down, are given using the multiply controlled-NOT operation, a special add-one operation, the reverse parallel adder, the parallel subtractor, and multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up with bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbor interpolation.
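
    For reference, the classical operation whose quantum circuits the paper constructs is plain bilinear scaling, sketched here in Python:

        import numpy as np

        def bilinear_resize(img, out_h, out_w):
            # Map each output coordinate back into the source grid and
            # take a weighted sum of the four surrounding pixels.
            h, w = img.shape
            ys = np.linspace(0, h - 1, out_h)
            xs = np.linspace(0, w - 1, out_w)
            y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
            y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
            fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
            return ((1 - fy) * (1 - fx) * img[np.ix_(y0, x0)]
                    + (1 - fy) * fx * img[np.ix_(y0, x1)]
                    + fy * (1 - fx) * img[np.ix_(y1, x0)]
                    + fy * fx * img[np.ix_(y1, x1)])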

  16. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical nearest-neighbor image interpolation for quantum images in FRQI and NEQR is proven. Then, by defining a halving operation and making use of quantum rotation gates, a concrete quantum circuit of nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, a quantum circuit of nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented that use Control-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.
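
    The classical counterpart of these circuits is ordinary nearest-neighbor scaling; a short sketch follows (loosely speaking, the quantum circuits realize the same index map by manipulating the position register):

        import numpy as np

        def nn_resize(img, out_h, out_w):
            # For each output row/column, pick the nearest source index.
            h, w = img.shape
            ys = np.arange(out_h) * h // out_h
            xs = np.arange(out_w) * w // out_w
            return img[np.ix_(ys, xs)]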

  17. Factors affecting basket catheter detection of real and phantom rotors in the atria: A computational study.

    PubMed

    Martinez-Mateu, Laura; Romero, Lucia; Ferrer-Albero, Ana; Sebastian, Rafael; Rodríguez Matas, José F; Jalife, José; Berenfeld, Omer; Saiz, Javier

    2018-03-01

    Anatomically based procedures to ablate atrial fibrillation (AF) are often successful in terminating paroxysmal AF. However, the ability to terminate persistent AF remains disappointing. New mechanistic approaches use multiple-electrode basket catheter mapping to localize and target AF drivers in the form of rotors, but significant concerns remain about their accuracy. We aimed to evaluate how electrode-endocardium distance, far-field sources and inter-electrode distance affect the accuracy of rotor localization. Sustained rotor activation of the atria was simulated numerically and mapped using a virtual basket catheter with varying electrode densities placed at different positions within the atrial cavity. Unipolar electrograms were calculated on the entire endocardial surface and at each of the electrodes. Rotors were tracked on the interpolated basket phase maps and compared with the respective atrial voltage and endocardial phase maps, which served as references. Rotor detection by the basket maps varied between 35% and 94% of the simulation time, depending on the basket's position and the electrode-to-endocardial-wall distance. However, two different types of phantom rotors also appeared on the basket maps: the first type was due to far-field sources and the second to interpolation between the electrodes; increasing electrode density decreased the incidence of the second but not the first type. In this simulation study, basket catheter-based phase mapping detected rotors even when the basket was not in full contact with the endocardial wall, but it always generated a number of phantom rotors in the presence of only a single real rotor, which would be the desired ablation target. Phantom rotors may mislead and contribute to failure in AF ablation procedures.
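
    A common classical recipe for building such phase maps and flagging rotor cores (phase singularities) is the Hilbert-transform phase followed by a winding-number test; the sketch below assumes the electrogram phases have already been interpolated onto a regular 2-D grid:

        import numpy as np
        from scipy.signal import hilbert

        def phase_map(egms):
            # Instantaneous phase of each electrogram (rows = sites,
            # columns = samples) via the analytic signal.
            centred = egms - egms.mean(axis=-1, keepdims=True)
            return np.angle(hilbert(centred, axis=-1))

        def singularity_mask(phase_frame):
            # A rotor core is where the phase winds by +-2*pi around an
            # elementary 2x2 loop of the interpolated phase grid.
            def wrap(d):
                return (d + np.pi) % (2 * np.pi) - np.pi
            p = phase_frame
            winding = (wrap(p[:-1, 1:] - p[:-1, :-1])
                       + wrap(p[1:, 1:] - p[:-1, 1:])
                       + wrap(p[1:, :-1] - p[1:, 1:])
                       + wrap(p[:-1, :-1] - p[1:, :-1]))
            return np.abs(winding) > np.pi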

  18. Modeling the direct sun component in buildings using matrix algebraic approaches: Methods and validation

    DOE PAGES

    Lee, Eleanor S.; Geisler-Moroder, David; Ward, Gregory

    2017-12-23

    Simulation tools that enable annual energy performance analysis of optically-complex fenestration systems have been widely adopted by the building industry for use in building design, code development, and the development of rating and certification programs for commercially-available shading and daylighting products. The tools rely on a three-phase matrix operation to compute solar heat gains, using as input low-resolution bidirectional scattering distribution function (BSDF) data (10–15° angular resolution; BSDF data define the angle-dependent behavior of light-scattering materials and systems). Measurement standards and product libraries for BSDF data are undergoing development to support solar heat gain calculations. Simulation of other metrics such as discomfort glare, annual solar exposure, and potentially thermal discomfort, however, requires algorithms and BSDF input data that more accurately model the spatial distribution of transmitted and reflected irradiance or illuminance from the sun (0.5° resolution). This study describes such algorithms and input data, then validates the tools (i.e., an interpolation tool for measured BSDF data and the five-phase method) through comparisons with ray-tracing simulations and field monitored data from a full-scale testbed. Simulations of daylight-redirecting films, a micro-louvered screen, and venetian blinds using variable resolution, tensor tree BSDF input data derived from interpolated scanning goniophotometer measurements were shown to agree with field monitored data to within 20% for greater than 75% of the measurement period for illuminance-based performance parameters. The three-phase method delivered significantly less accurate results. We discuss the ramifications of these findings on industry and provide recommendations to increase end user awareness of the current limitations of existing software tools and BSDF product libraries.
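
    The three-phase operation the abstract refers to is, at its core, a chain of matrix products. The sketch below uses made-up sizes and random contents purely to show the shape of the computation (145 patches corresponds to the Klems basis commonly used for low-resolution BSDF data); the five-phase method refines this by treating the direct-sun component separately with high-resolution data:

        import numpy as np

        rng = np.random.default_rng(7)
        n_sensors, n_patches, n_sky = 4, 145, 146   # made-up sizes
        V = rng.random((n_sensors, n_patches))      # view matrix: window -> sensors
        T = rng.random((n_patches, n_patches))      # fenestration BSDF matrix
        D = rng.random((n_patches, n_sky))          # daylight matrix: sky -> window
        s = rng.random(n_sky)                       # sky vector for one timestep
        illuminance = V @ T @ D @ s                 # one value per sensor point
        print(illuminance)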
