Sample records for powerful interpolation technique

  1. Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

    DTIC Science & Technology

    2008-02-01

    Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...congruences), and linear Diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates

  2. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging interpolation algorithm produces an unbiased prediction and can also calculate the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially in applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor. A comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
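
    The neighborhood search described above is what regular grid spacing makes cheap. A minimal sketch of the idea (the function, grid layout and values are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def grid_neighborhood(x, y, x0, y0, dx, dy, nx, ny, half_width=2):
        """Return indices and distances of the grid nodes surrounding (x, y).

        On a regular grid, neighbours are found by integer arithmetic instead
        of a spatial search, which is the simplification exploited above.
        """
        i = int((x - x0) // dx)   # column of the cell containing the point
        j = int((y - y0) // dy)   # row of the cell containing the point
        cols = np.clip(np.arange(i - half_width + 1, i + half_width + 1), 0, nx - 1)
        rows = np.clip(np.arange(j - half_width + 1, j + half_width + 1), 0, ny - 1)
        jj, ii = np.meshgrid(rows, cols, indexing="ij")
        gx, gy = x0 + ii * dx, y0 + jj * dy      # node coordinates
        return ii, jj, np.hypot(gx - x, gy - y)  # pairwise distances, no search

    # 4x4 neighbourhood around a query point on a hypothetical 1-degree grid
    ii, jj, dist = grid_neighborhood(10.37, 45.81, 0.0, 30.0, 1.0, 1.0, 360, 60)
    ```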

  3. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
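
    A sketch of the general pattern, assuming stored single-model MCMC samples; the neighbour-scaled jitter below is a deliberate simplification of the paper's kD-tree posterior approximation:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(5000, 3))  # stand-in for single-model MCMC samples
    tree = cKDTree(samples)

    def propose_jump(k=16):
        """Propose a jump into this model's parameter space.

        Pick a stored posterior sample at random, then jitter it on the local
        scale set by its k nearest neighbours, so dense posterior regions are
        proposed more often and more tightly.
        """
        x = samples[rng.integers(len(samples))]
        dist, _ = tree.query(x, k=k)
        scale = dist[-1] / np.sqrt(k)     # crude local bandwidth
        return x + rng.normal(scale=scale, size=x.shape)

    new_point = propose_jump()
    ```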

  4. A data base and analysis program for shuttle main engine dynamic pressure measurements

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base management system is described for measurements obtained from space shuttle main engine (SSME) hot firing tests. The data were provided in terms of engine power level and rms pressure time histories, and power spectra of the dynamic pressure measurements at selected times during each test. Test measurements and engine locations are defined along with a discussion of data acquisition and reduction procedures. A description of the data base management analysis system is provided and subroutines developed for obtaining selected measurement means, variances, ranges and other statistics of interest are discussed. A summary of pressure spectra obtained at SSME rated power level is provided for reference. Application of the singular value decomposition technique to spectrum interpolation is discussed and isoplots of interpolated spectra are presented to indicate measurement trends with engine power level. Program listings of the data base management and spectrum interpolation software are given. Appendices are included to document all data base measurements.
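
    The report does not spell out its SVD interpolation in this snippet, but the generic version of the idea is easy to sketch: decompose the level-by-frequency matrix of spectra and interpolate the level-dependent coefficients at an untested power level (all values below are invented):

    ```python
    import numpy as np

    # Rows: engine power levels with measured spectra; columns: frequency bins.
    levels = np.array([65.0, 80.0, 90.0, 100.0, 104.0])
    spectra = np.random.rand(5, 256) + levels[:, None] / 100.0  # fake PSD levels

    U, s, Vt = np.linalg.svd(spectra, full_matrices=False)

    # Interpolate each left-singular-vector coefficient across power level,
    # then reconstruct the spectrum at an untested level (95% rated power).
    target = 95.0
    coeffs = np.array([np.interp(target, levels, U[:, k]) for k in range(s.size)])
    spectrum_95 = (coeffs * s) @ Vt
    ```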

  5. Visual and Statistical Analysis of Digital Elevation Models Generated Using Idw Interpolator with Varying Powers

    NASA Astrophysics Data System (ADS)

    Asal, F. F.

    2012-07-01

    Digital elevation data obtained from different engineering surveying techniques are utilized in generating a Digital Elevation Model (DEM), which is employed in many engineering and environmental applications. These data are usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential for assessing the effects of the variation of the IDW power on DEM quality. Real elevation data were collected from the field using a total station instrument in corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques. Visual analysis showed that smoothing of the DEM decreases as the power value increases up to a power of four; however, increasing the power beyond four does not produce noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results, as the Standard Deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This reflects the decrease in DEM smoothing as the IDW power increases. The study also showed that visual methods supported by statistical analysis have good potential for DEM quality assessment.
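
    The power parameter is the knob being varied in this study. A minimal IDW sketch on synthetic points (illustrative only, not the study's software):

    ```python
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        """Inverse Distance Weighting: weights fall off as 1/d**power.

        Larger powers weight the nearest samples more heavily, which is why
        the DEMs smooth less as the power increases.
        """
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w @ z_known) / w.sum(axis=1)

    # The same scattered elevations gridded with powers 1..10
    pts = np.random.rand(200, 2) * 100
    elev = 50 + 10 * np.sin(pts[:, 0] / 10) + np.random.rand(200)
    grid = np.mgrid[0:100:50j, 0:100:50j].reshape(2, -1).T
    dems = {p: idw(pts, elev, grid, power=p).reshape(50, 50) for p in range(1, 11)}
    ```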

  6. Calculation and interpolation of the characteristics of the hydrodynamic journal bearings in the domain of possible movements of the rotor journals

    NASA Astrophysics Data System (ADS)

    Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.

    2016-10-01

    To visualize the physical processes that occur in the journal bearings of the shafting of power-generating turbosets, a technique for preliminary calculation of a set of characteristics of the journal bearings in the domain of possible movements (DPM) of the rotor journals is proposed. The technique is based on interpolation of the oil film characteristics and is designed for use in the real-time diagnostic system COMPACS®. According to this technique, for each journal bearing, the domain of possible movement of the shaft journal is computed, then triangulation of the area is performed, and the corresponding mesh is constructed. At each node of the mesh, all characteristics of the journal bearing required by the diagnostic system are calculated. Via shaft-position sensors, the system measures—in the online mode—the instantaneous location of the shaft journal in the bearing and determines the averaged static position of the journals (the pivoting vector). Afterwards, continuous interpolation in the triangulation domain is performed, which allows the real-time calculation of the static and dynamic forces that act on the rotor journal, the flow rate and the temperature of the lubricant, and the friction power losses. Use of the proposed method on a running turboset enables diagnosing the technical condition of the shafting support system and promptly identifying the defects that determine the vibrational state and the overall reliability of the turboset. The authors report a number of examples of constructing the DPM and computing the basic static characteristics for elliptical journal bearings typical of large-scale power turbosets. To illustrate the interpolation method, the traditional approach to the calculation of bearing properties is applied. This approach is based on a two-dimensional isothermal Reynolds equation that accounts for the mobility of the boundary of the oil film continuity.
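
    The precompute-then-interpolate pattern is straightforward to sketch. SciPy's Delaunay-based linear interpolator stands in for the triangulation and in-triangle interpolation here; the node table and the characteristic are invented for illustration:

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    # Hypothetical precomputed table: mesh nodes inside the DPM and one bearing
    # characteristic (say, an oil-film stiffness) evaluated at each node.
    nodes = np.random.rand(500, 2) * 2 - 1
    stiffness = 1e8 * (1 + nodes[:, 0] ** 2 + nodes[:, 1] ** 2)

    # LinearNDInterpolator triangulates the nodes (Delaunay) and interpolates
    # linearly inside each triangle -- continuous over the triangulation.
    interp = LinearNDInterpolator(nodes, stiffness)

    journal_xy = np.array([[0.12, -0.35]])   # measured journal position
    print(interp(journal_xy))                # characteristic looked up online
    ```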

  7. Definition and verification of a complex aircraft for aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1986-01-01

    Techniques are reviewed which are of value in CAD/CAM CFD studies of the geometries of new fighter aircraft. In order to refine the computations of the flows to take advantage of the computing power available from supercomputers, it is often necessary to interpolate the geometry of the mesh selected for the numerical analysis of the aircraft shape. Interpolating the geometry permits a higher level of detail in calculations of the flow past specific regions of a design. A microprocessor-based mathematics engine is described for fast image manipulation and rotation to verify that the interpolated geometry will correspond to the design geometry in order to ensure that the flow calculations will remain valid through the interpolation. Applications of the image manipulation system to verify geometrical representations with wire-frame and shaded-surface images are described.

  8. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).

  9. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN.

    PubMed

    Sen, Alper; Gümüsay, M Umit; Kavas, Aktül; Bulucu, Umut

    2008-09-25

    Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracies of the Artificial Neural Network (ANN) and geostatistical Kriging were compared. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than those of the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage, the number, position and transmitter power of the access points, and the electromagnetic radiation level. A pollution analysis in a given propagation environment was performed, and it was demonstrated that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution due to the low power levels used. Example interpolated electromagnetic field values for a WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.

  10. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN

    PubMed Central

    Şen, Alper; Gümüşay, M. Ümit; Kavas, Aktül; Bulucu, Umut

    2008-01-01

    Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracies of the Artificial Neural Network (ANN) and geostatistical Kriging were compared. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than those of the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage, the number, position and transmitter power of the access points, and the electromagnetic radiation level. A pollution analysis in a given propagation environment was performed, and it was demonstrated that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution due to the low power levels used. Example interpolated electromagnetic field values for a WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN. PMID:27873854

  11. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the result comparison, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
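
    The kriging variants above differ in the semivariogram model fitted to the drive-test data. Three of the four named model shapes in their standard closed form (parameter values are illustrative):

    ```python
    import numpy as np

    def spherical(h, nugget, sill, rng_):
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
        return np.where(h < rng_, g, sill)   # flat at the sill beyond the range

    def exponential(h, nugget, sill, rng_):
        return nugget + (sill - nugget) * (1 - np.exp(-3 * np.asarray(h) / rng_))

    def gaussian(h, nugget, sill, rng_):
        return nugget + (sill - nugget) * (1 - np.exp(-3 * (np.asarray(h) / rng_) ** 2))

    # Candidate models evaluated at the lags of an empirical variogram
    lags = np.linspace(0, 2000, 50)
    models = {f.__name__: f(lags, 0.1, 1.0, 800.0)
              for f in (spherical, exponential, gaussian)}
    ```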

  12. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data.
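
    Of the five schemes, the Gaussian-fitting interpolation is the easiest to sketch: fit a Gaussian to the few first-dimension samples of a peak and evaluate it densely (the sample points below are invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(t, a, t0, sigma):
        return a * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

    # Coarsely sampled first-dimension peak (hypothetical retention data)
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.02, 0.35, 1.00, 0.40, 0.03])

    popt, _ = curve_fit(gauss, t, y, p0=(1.0, 2.0, 1.0))
    t_fine = np.linspace(t[0], t[-1], 101)   # densified first dimension
    y_fine = gauss(t_fine, *popt)            # points available for alignment
    ```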

  13. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
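
    SciPy ships no Steffen spline, but PCHIP is another local, monotonicity-preserving interpolant, so the log-log recipe can be sketched as follows (the ELF sample is invented):

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def loglog_monotone_interp(e, elf):
        """Shape-preserving interpolation of an ELF sample on log-log axes.

        PCHIP stands in for the Steffen spline: both are local and preserve
        monotonicity between data points, so no spurious oscillations appear.
        """
        f = PchipInterpolator(np.log(e), np.log(elf))
        return lambda e_new: np.exp(f(np.log(e_new)))

    e = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # eV, hypothetical
    elf = np.array([0.01, 0.08, 0.9, 0.3, 0.05, 0.004])
    interp = loglog_monotone_interp(e, elf)
    print(interp(np.array([5.0, 50.0])))
    ```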

  14. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

    Wind speed data are a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques are: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data. A penalty point system based on the coefficient of variation of the root mean square error, the normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that overall the piecewise cubic Hermite interpolating polynomials have the highest performance in temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 programme.
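
    The downsample-then-reconstruct experiment is simple to reproduce in outline; the series and the score below are synthetic stand-ins for the study's datasets and penalty system:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline, PchipInterpolator

    hours = np.arange(24 * 30)                    # one month of hourly steps
    wind = 8 + 3 * np.sin(hours / 11.0) + np.random.rand(hours.size)  # stand-in

    coarse = hours[::6]                           # keep hours 0, 6, 12, 18
    candidates = [
        ("pchip", PchipInterpolator(coarse, wind[coarse])),
        ("spline", CubicSpline(coarse, wind[coarse])),
        ("linear", lambda h: np.interp(h, coarse, wind[coarse])),
    ]
    for name, f in candidates:
        rec = f(hours)                            # temporally downscaled series
        rmse = np.sqrt(np.mean((rec - wind) ** 2))
        print(name, rmse / wind.mean())           # CV(RMSE)-style score
    ```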

  15. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results for the LOOCV pricing error show that interpolation using the fourth-order polynomial provides the best fit to option prices, with the lowest pricing error.
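
    The LOOCV pricing error is a generic selection criterion and can be sketched independently of the option data; the quotes below are invented, with a fourth-order polynomial shown as one candidate scheme:

    ```python
    import numpy as np

    def loocv_error(strikes, prices, fit):
        """Leave-one-out cross-validation error of an interpolation scheme.

        `fit(x, y)` must return a callable interpolant; the scheme with the
        smallest LOOCV error is preferred.
        """
        errs = []
        for i in range(len(strikes)):
            mask = np.arange(len(strikes)) != i
            f = fit(strikes[mask], prices[mask])
            errs.append((f(strikes[i]) - prices[i]) ** 2)
        return float(np.sqrt(np.mean(errs)))

    k = np.linspace(80, 120, 15)                         # hypothetical strikes
    c = np.maximum(100 - k, 0) + 3 * np.exp(-((k - 100) / 15) ** 2)
    poly4 = lambda x, y: np.poly1d(np.polyfit(x, y, 4))  # candidate interpolant
    print(loocv_error(k, c, poly4))
    ```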

  16. Noise reduction techniques for Bayer-matrix images

    NASA Astrophysics Data System (ADS)

    Kalevo, Ossi; Rantanen, Henry

    2002-04-01

    In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.

  17. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...

  18. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.

  19. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
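
    The hybrid idea (fill each pixel's time series first, fall back to a spatial estimate where the series is empty) can be sketched on a toy LAI cube; the fallback rule here is a crude stand-in for the paper's spatial interpolation:

    ```python
    import numpy as np

    def hybrid_fill(cube):
        """Toy hybrid gap-filler for a (time, y, x) LAI cube with NaN gaps.

        Gaps are first filled by temporal linear interpolation per pixel;
        pixels with no valid observations at all fall back to the per-date
        scene mean, a crude stand-in for true spatial interpolation.
        """
        t = np.arange(cube.shape[0])
        out = cube.copy()
        for j in range(cube.shape[1]):
            for i in range(cube.shape[2]):
                z = out[:, j, i]
                ok = ~np.isnan(z)
                if 0 < ok.sum() < z.size:
                    out[:, j, i] = np.interp(t, t[ok], z[ok])
        scene_mean = np.nanmean(out, axis=(1, 2), keepdims=True)
        return np.where(np.isnan(out), scene_mean, out)

    cube = np.random.rand(46, 8, 8)                    # 46 eight-day composites
    cube[np.random.rand(*cube.shape) < 0.2] = np.nan   # simulated cloud gaps
    filled = hybrid_fill(cube)
    ```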

  20. Nearest neighbor, bilinear interpolation and bicubic interpolation geographic correction effects on LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
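
    Of the three resampling kernels, bilinear interpolation is the middle ground between speed and smoothness; a minimal sketch (the array and coordinates are illustrative):

    ```python
    import numpy as np

    def bilinear(img, x, y):
        """Bilinear resampling of image `img` at fractional pixel (x, y).

        The output blends the four surrounding pixels by distance; nearest
        neighbour would instead copy the single closest pixel.
        """
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bot

    band = np.arange(25.0).reshape(5, 5)
    print(bilinear(band, 1.25, 2.75))   # resampled value after geo-correction
    ```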

  21. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure, due in part to the low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated the greater robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569

  22. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: Inverse Distance Weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and Empirical Bayesian Kriging (EBK). Error metrics were used to validate the interpolations against independent data. The results indicated that ANUSPLIN performed better than the other four interpolation methods.

  23. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  24. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    PubMed

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and a better sorting performance usually comes with a higher sampling rate (SR). However, for long-duration experiments on free-moving subjects, miniaturized and wireless neural recording ICs are the current trend, and sorting accuracy is usually compromised by a lower SR in exchange for lower power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if the interpolation is appropriately performed during spike sorting, a system operated at an SR of 12.5 k samples per second (sps) can outperform one without interpolation at 25 ksps in both accuracy and power.

  25. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
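
    The inversion recipe above translates directly into code. A sketch in Python rather than the report's FORTRAN, with an invented third-order calibration (coefficients, count range and tolerance are illustrative):

    ```python
    import numpy as np

    def eu_to_counts(coeffs, eu, count_lo, count_hi, tol=0.5, max_iter=20):
        """Invert a counts-to-EU calibration polynomial with Newton-Raphson.

        `coeffs` are highest-order-first coefficients. The initial guess is a
        reverse linear interpolation between the EU values at the count
        limits; iteration stops within half a count, matching the paper's
        less-than-one-count error goal.
        """
        p = np.poly1d(coeffs)
        dp = p.deriv()
        eu_lo, eu_hi = p(count_lo), p(count_hi)
        c = count_lo + (eu - eu_lo) * (count_hi - count_lo) / (eu_hi - eu_lo)
        for _ in range(max_iter):
            step = (p(c) - eu) / dp(c)
            c -= step
            if abs(step) < tol:
                break
        return int(round(np.clip(c, count_lo, count_hi)))

    cal = [1e-7, -2e-5, 0.1, -3.0]      # hypothetical 10-bit PCM calibration
    print(eu_to_counts(cal, eu=40.0, count_lo=0, count_hi=1023))
    ```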

  26. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes carry structural information about subsurface features like the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals significant advantages for steeply dipping events using the 5-D WABI method when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails benefit substantially from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.

  27. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.

  28. A Comparison of Simple Methods to Incorporate Material Temperature Dependency in the Green's Function Method for Estimating Transient Thermal Stresses in Thick-Walled Power Plant Components.

    PubMed

    Rouse, James; Hyde, Christopher

    2016-01-06

    The threat of thermal fatigue is an increasing concern for thermal power plant operators due to the increasing tendency to adopt "two-shifting" operating procedures. Thermal plants are likely to remain part of the energy portfolio for the foreseeable future and are under societal pressure to generate in a highly flexible and efficient manner. The Green's function method offers a flexible approach to determine reference elastic solutions for transient thermal stress problems. In order to simplify integration, it is often assumed that Green's functions (derived from finite element unit temperature step solutions) are temperature independent; this is not the case, due to the temperature dependency of material parameters. The present work offers a simple method to approximate a material's temperature dependency using multiple reference unit solutions and an interpolation procedure. Thermal stress histories are predicted and compared for realistic temperature cycles using distinct techniques. The proposed interpolation method generally performs as well as (if not better than) the optimum single Green's function or the previously suggested weighting function technique, particularly for large temperature increments. Coefficients of determination are typically above 0.96, and peak stress differences between true and predicted datasets are always less than 10 MPa.
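
    A minimal sketch of the idea, assuming two precomputed unit-step responses and a linear blend between them; the kernels, reference temperatures and temperature history are all invented for illustration:

    ```python
    import numpy as np

    # Hypothetical unit-temperature-step stress responses (Green's functions)
    # precomputed by FE at two reference temperatures, sampled once per minute.
    t = np.arange(180)
    G_300 = -80 * np.exp(-t / 25.0)  # MPa per K, reference solution near 300 C
    G_500 = -60 * np.exp(-t / 40.0)  # MPa per K, reference solution near 500 C

    def G_interp(T):
        """Linear interpolation between the reference unit solutions."""
        w = np.clip((T - 300.0) / 200.0, 0.0, 1.0)
        return (1 - w) * G_300 + w * G_500

    # Duhamel superposition: each temperature increment contributes the kernel
    # interpolated at the temperature at which the increment occurs.
    T_hist = 300 + 200 * np.clip(t / 60.0, 0, 1)   # a 1-hour ramp, then hold
    dT = np.diff(T_hist, prepend=T_hist[0])
    sigma = np.zeros(t.size)
    for k in range(t.size):
        sigma[k:] += G_interp(T_hist[k])[: t.size - k] * dT[k]
    ```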

  29. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  30. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, or reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.

  31. Spectral interpolation - Zero fill or convolution. [image processing]

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
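
    For reference, the zero-fill baseline that the paper benchmarks against fits in a few lines (a minimal sketch; the signal is invented):

    ```python
    import numpy as np

    def zero_fill_interp(x, factor):
        """Spectral interpolation by zero fill: pad the FFT, transform back.

        Padding the spectrum with zeros and inverse-transforming evaluates
        the band-limited signal on a grid `factor` times finer.
        """
        n = x.size
        X = np.fft.rfft(x)
        X_pad = np.concatenate([X, np.zeros(factor * n // 2 + 1 - X.size)])
        return np.fft.irfft(X_pad, n=factor * n) * factor  # restore amplitude

    x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
    fine = zero_fill_interp(x, 4)      # 64 samples through the same cosine
    ```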

  32. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

  33. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). The DTM has numerous applications in the fields of science, engineering, design and various project administrations. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods for interpolation, which yield different results depending on the environmental conditions and the input data. The usual interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of the Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results show that AI methods have high potential for the interpolation of elevations. Using neural network algorithms for the interpolation, and optimising the IDW method with GA, elevations could be estimated with high precision.

  34. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and the retrieved surface temperature. The disadvantages of these calibrations are that (1) the user must manually identify extremely dry and wet pixels in the image, and (2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using (1) only dry pixels and (2) including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: (1) Big Cypress, (2) Disney Wilderness, (3) the Everglades, (4) near Gainesville, FL, and (5) Kennedy Space Center. The sensitivity of the evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all study areas but Big Cypress and Gainesville. The evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. The eddy covariance comparison and temporal interpolation produced acceptable bias error in most cases, suggesting that automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.

  35. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also propose a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania's South Esk hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497

  36. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest-neighbor (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, an even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of two stages: kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
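
    A CPU sketch of the AIDW pattern, with a kD-tree for the kNN stage; the density-to-power rule below is a simplified assumption, and SciPy's cKDTree replaces the even-grid GPU structure:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def aidw(xy_known, z, xy_query, k=15, p_min=1.0, p_max=4.0):
        """Adaptive IDW: the power parameter follows local sampling density.

        Sparser neighbourhoods get a larger power (a simple stand-in for the
        published adaptation rule); the kNN search is the stage the GPU
        even-grid structure accelerates.
        """
        tree = cKDTree(xy_known)
        dist, idx = tree.query(xy_query, k=k)
        dens = dist.mean(axis=1)                 # local density proxy
        alpha = (dens - dens.min()) / (dens.max() - dens.min() + 1e-12)
        power = p_min + alpha * (p_max - p_min)  # one power per query point
        w = 1.0 / (dist + 1e-12) ** power[:, None]
        return (w * z[idx]).sum(axis=1) / w.sum(axis=1)

    pts = np.random.rand(2000, 2)
    vals = np.sin(4 * pts[:, 0]) + np.cos(4 * pts[:, 1])
    est = aidw(pts, vals, np.random.rand(100, 2))
    ```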

  37. Quantum interpolation for high-resolution sensing

    PubMed Central

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-01-01

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy. PMID:28196889

  38. Quantum interpolation for high-resolution sensing.

    PubMed

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-02-28

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy.

  19. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    PubMed

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely, Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentages, of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
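
    The two interpolators named above are available directly in SciPy; the sketch below upsamples a low-frequency ECG segment with either one. Function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

def upsample_ecg(ecg, fs_in, fs_out, method="pchip"):
    """Resample an ECG segment from fs_in to fs_out Hz by piecewise
    cubic interpolation (PCHIP or cubic spline)."""
    t_in = np.arange(len(ecg)) / fs_in
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    interp = (PchipInterpolator(t_in, ecg) if method == "pchip"
              else CubicSpline(t_in, ecg))
    return t_out, interp(t_out)

# e.g. raise a 128 Hz recording to 500 Hz before template matching:
# t, ecg_hi = upsample_ecg(ecg_128, fs_in=128, fs_out=500, method="spline")
```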

  20. Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Campbell, L.; Purviance, J.

    1992-01-01

    A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.

  1. Turbine adapted maps for turbocharger engine matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tancrez, M.; Galindo, J.; Guardiola, C.

    2011-01-15

    This paper presents a new representation of turbine performance maps oriented toward turbocharger characterization. The aim of this plot is to provide a more compact form, better suited for implementation in engine simulation models and for interpolating data from a turbocharger test bench. The new map is based on the use of conservative parameters, such as turbocharger power and turbine mass flow, to describe the turbine performance at all VGT positions. The curves obtained are accurately fitted with quadratic polynomials, and simple interpolation techniques give reliable results. Two turbochargers characterized in a steady flow rig were used to illustrate the representation. After being implemented in a turbocharger submodel, the results obtained with the model were compared successfully against turbine performance evaluated in engine test cells. A practical application in turbocharger matching is also provided to show how this new map can be directly employed in engine design. (author)

  2. Narrowband signal detection in the SETI field test

    NASA Technical Reports Server (NTRS)

    Cullers, D. Kent; Deans, Stanley R.

    1986-01-01

    Various methods for detecting narrow-band signals are evaluated. The characteristics of synchronized and unsynchronized pulses are examined. Synchronous, square law, regular pulse, and the general form detections are discussed. The CW, single pulse, synchronous, and four pulse detections are analyzed in terms of false alarm rate and threshold relative to average noise power. Techniques for saving memory and retaining sensitivity are described. Consideration is given to nondrifting CW detection, asynchronous pulse detection, interpolative and extrapolative pulse detectors, and finite and infinite pulses.

  3. Synergistic Effects of Minefields and Covering Fire (SEMAC) Computer Model. Volume 2. Analyst’s Manual

    DTIC Science & Technology

    1976-08-01

    target type. A variable used to store the value of IT. UNITS none none none PK PKK POWER PPEAF(7) The square of the range at which the ... targets in the column to provide for removal of the three damaged targets. A standard linear interpolation technique is used to provide for the ... considered to be shielded by terrain features. ... priority ... intruder/mine encounters are ... position (or distance ...

  4. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results when compared with two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
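
    A rough sketch of this family of SNR estimators: the autocorrelation of a scan line is smoothed with a Savitzky-Golay filter, and the noise-free zero-lag value is recovered by interpolating across the noise spike at lag 0. The window length and lag range below are assumed values, not those of the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

def estimate_snr(scan_line):
    x = scan_line - scan_line.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / n   # one-sided autocorrelation
    acf_s = savgol_filter(acf, window_length=11, polyorder=3)

    # Fit a cubic spline on lags 1..11 and extrapolate back to lag 0,
    # skipping the noise-inflated measured value acf[0].
    lags = np.arange(1, 12)
    signal_var = float(CubicSpline(lags, acf_s[1:12])(0.0))
    noise_var = float(acf[0]) - signal_var
    return signal_var / noise_var
```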

  5. Nonanalytic function generation routines for 16-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.; Shaufl, M.

    1980-01-01

    Interpolation techniques for three types (univariate, bivariate, and map) of nonanalytic functions are described. These interpolation techniques are then implemented in scaled-fraction arithmetic on a representative 16-bit microprocessor. A FORTRAN program is described that facilitates the scaling, documentation, and organization of data for use by these routines. Listings of all these programs are included in an appendix.

  6. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and the constant voltage tracking method is put forward in this paper. First, it searches for the maximum power point with the P&O algorithm and a quadratic interpolation method; then, it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented in the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and only the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm or the quadratic interpolation method alone. The experimental results demonstrate that the tracked maximum power is approximately equal to the real value using the proposed hybrid method; it copes with the voltage fluctuation seen with the P&O algorithm alone, and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
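
    The quadratic-interpolation step of such a hybrid tracker reduces to fitting a parabola through three (voltage, power) samples that bracket the maximum and jumping to its vertex. A minimal sketch (the sample values in the comment are made up):

```python
def quadratic_peak(v, p):
    """Return the voltage at the vertex of the parabola through three
    (voltage, power) samples bracketing the maximum power point."""
    (v1, v2, v3), (p1, p2, p3) = v, p
    num = (v2 - v1) ** 2 * (p2 - p3) - (v2 - v3) ** 2 * (p2 - p1)
    den = (v2 - v1) * (p2 - p3) - (v2 - v3) * (p2 - p1)
    return v2 - 0.5 * num / den

# e.g. quadratic_peak((11.0, 12.0, 13.0), (95.2, 98.9, 97.1)) -> ~12.17 V
```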

  7. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their use in monitoring weather hazards such as flash floods is still not optimal. To increase the benefit for meteorological applications, GPS receivers should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate, for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E; latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results at the different phases evaluated (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash-flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.
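
    Thin plate spline interpolation of scattered station data is available directly in SciPy; the sketch below grids hypothetical PWV observations over the study window (station coordinates and values are invented for illustration).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stations: (lon, lat) and PWV in mm
stations = np.array([[100.2, 3.1], [101.7, 2.9], [101.4, 4.2], [102.1, 3.6]])
pwv = np.array([54.3, 57.1, 49.8, 52.6])

tps = RBFInterpolator(stations, pwv, kernel="thin_plate_spline")

# Evaluate on a regular grid covering the study window
lon, lat = np.meshgrid(np.linspace(99.5, 102.5, 60), np.linspace(2.0, 6.5, 90))
pwv_grid = tps(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
```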

  8. A statistically defined anthropomorphic software breast phantom.

    PubMed

    Lau, Beverly A; Reiser, Ingrid; Nishikawa, Robert M; Bakic, Predrag R

    2012-06-01

    Digital anthropomorphic breast phantoms have emerged in the past decade because of recent advances in 3D breast x-ray imaging techniques. Computer phantoms in the literature have incorporated power-law noise to represent glandular tissue and branching structures to represent linear components such as ducts. When power-law noise is added to those phantoms in one piece, the simulated fibroglandular tissue is distributed randomly throughout the breast, resulting in dense tissue placement that may not be observed in a real breast. The authors describe a method for enhancing an existing digital anthropomorphic breast phantom by adding binarized power-law noise to a limited area of the breast. Phantoms with (0.5 mm)³ voxel size were generated using software developed by Bakic et al. Between 0% and 40% of adipose compartments in each phantom were replaced with binarized power-law noise (β = 3.0) ranging from 0.1 to 0.6 volumetric glandular fraction. The phantoms were compressed to 7.5 cm thickness, then blurred using a 3 × 3 boxcar kernel and up-sampled to (0.1 mm)³ voxel size using trilinear interpolation. Following interpolation, the phantoms were adjusted for volumetric glandular fraction using global thresholding. Monoenergetic phantom projections were created, including quantum noise and simulated detector blur. Texture was quantified in the simulated projections using power-spectrum analysis to estimate the power-law exponent β from 25.6 × 25.6 mm² regions of interest. Phantoms were generated with total volumetric glandular fraction ranging from 3% to 24%. Values for β (averaged per projection view) were found to be between 2.67 and 3.73. Thus, the range of textures of the simulated breasts covers the textures observed in clinical images. Using these new techniques, digital anthropomorphic breast phantoms can be generated with a variety of glandular fractions and patterns. β values for this new phantom are comparable with published values for breast tissue in x-ray projection modalities. The combination of conspicuous linear structures and binarized power-law noise added to a limited area of the phantom qualitatively improves its realism. © 2012 American Association of Physicists in Medicine.
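
    The power-law texture component described above can be sketched as follows: generate 3-D noise with a 1/f^β power spectrum by shaping white noise in the Fourier domain, then binarize at the quantile matching a target glandular fraction. The spectrum-shaping details are an assumed implementation, not the authors' code.

```python
import numpy as np

def binarized_power_law_noise(shape, beta=3.0, glandular_fraction=0.3, seed=0):
    rng = np.random.default_rng(seed)
    fx, fy, fz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    f = np.sqrt(fx**2 + fy**2 + fz**2)
    f[0, 0, 0] = 1.0                       # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)         # power spectrum ~ 1/f^beta
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)
    field = np.fft.ifftn(amplitude * np.exp(1j * phase)).real
    # Threshold at the (1 - fraction) quantile -> binary glandular/adipose mask
    return field > np.quantile(field, 1.0 - glandular_fraction)

# mask = binarized_power_law_noise((64, 64, 64), beta=3.0, glandular_fraction=0.3)
```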

  9. View-interpolation of sparsely sampled sinogram using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Lee, Jongha; Cho, Suengryong

    2017-02-01

    Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstructions that produce varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. An alternative approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to find missing projection data, and compared its performance with that of other interpolation techniques.

  10. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    NASA Astrophysics Data System (ADS)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    For thin slab-object inspection, we propose a digital tomosynthesis reconstruction that uses fewer measured projections in combination with additional virtual projections produced by interpolating the measured ones. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path lengths through the object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain: pixel values in an interpolated projection are weighted sums of pixel values in the measured projections, with weights determined by their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance of reconstructed images at the cost of some spatial resolving power.
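
    Under those assumptions, a virtual projection is just an angle-weighted blend of its two nearest measured neighbors. A minimal sketch (array shapes and the example angles are hypothetical, and theta is assumed to lie strictly inside the measured range):

```python
import numpy as np

def interpolate_projection(theta, thetas, projections):
    """Synthesize a virtual projection at angle `theta` from the two
    nearest measured projections; `thetas` must be sorted ascending."""
    i = np.searchsorted(thetas, theta)
    t0, t1 = thetas[i - 1], thetas[i]
    w = (theta - t0) / (t1 - t0)          # linear weight from angular distance
    return (1.0 - w) * projections[i - 1] + w * projections[i]

# thetas = np.linspace(-20.0, 20.0, 11)           # 11 measured views (degrees)
# virtual = interpolate_projection(-18.0, thetas, projs)   # projs: (11, H, W)
```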

  11. Acousto-optic time- and space-integrating spotlight-mode SAR processor

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.

    1993-09-01

    The technical approach and recent experimental results for the acousto-optic time- and space- integrating real-time SAR image formation processor program are reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results include a demonstration of the processor's ability to perform high-resolution spotlight-mode SAR imaging by simultaneously compensating for range migration and range/azimuth coupling in the analog optical domain, thereby avoiding a highly power-consuming digital interpolation or reformatting operation usually required in all-electronic approaches.

  12. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the electrical conductivity structure of the Earth's interior. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, the conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, seafloor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions of the interpolants to better enforce current continuity across grid boundaries. Electric fields are interpolated using a trilinear spline technique, in which the eight nearest electric-field estimates are combined in a weighted average; we are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which is used to generate a new forward model during each iteration of the inversion.
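
    For reference, the standard trilinear weighting being modified here looks as follows on a unit-spaced grid; the conductivity-ratio correction described above would rescale these eight weights. This is a generic sketch, not MOD3DEM code, and the query point is assumed to lie strictly inside the grid.

```python
import numpy as np

def trilinear(field, x, y, z):
    """Interpolate a 3-D gridded field at fractional index (x, y, z)
    from its eight surrounding nodes."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1.0 - fx) *
                     (fy if dj else 1.0 - fy) *
                     (fz if dk else 1.0 - fz))
                value += w * field[i + di, j + dj, k + dk]
    return value
```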

  13. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using piecewise cubic Hermite interpolation (PCHIP) is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods (nearest neighborhood, first-order interpolation, and the original PCHIP) are compared with the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.

  14. Suitability of Spatial Interpolation Techniques in Varying Aquifer Systems of a Basaltic Terrain for Monitoring Groundwater Availability

    NASA Astrophysics Data System (ADS)

    Katpatal, Y. B.; Paranjpe, S. V.; Kadu, M. S.

    2017-12-01

    Geological formations act as aquifer systems, and variability in the hydrological properties of aquifers controls groundwater occurrence and dynamics. To understand groundwater availability in any terrain, spatial interpolation techniques are widely used. It has been observed that, with varying hydrogeological conditions, even in a geologically homogeneous setting there are large variations in observed groundwater levels. Hence, the accuracy of groundwater estimation depends on the use of appropriate interpolation techniques. The study area is the Venna Basin of Maharashtra State, India, a basaltic terrain with four different types of basaltic layers laid down horizontally: weathered vesicular basalt, weathered and fractured basalt, highly weathered unclassified basalt, and hard massive basalt. The groundwater levels vary with topography, as the different types of basalt are present at varying depths. Local stratigraphic profiles were generated for the different basaltic terrains. The present study aims to interpolate the groundwater levels within the basin and to check the correlation between the estimated and the observed values. Groundwater levels for 125 observation wells situated in these different basaltic terrains over 20 years (1995-2015) were used in the study. The interpolation was carried out in a Geographical Information System (GIS) using ordinary kriging and the Inverse Distance Weighting (IDW) method. A comparative analysis of the interpolated groundwater levels was carried out to validate the recorded groundwater level dataset, and the results were related to the types of basaltic terrain forming the aquifer systems in the basin. Mean Error (ME) and Mean Square Error (MSE) were computed and compared. It was observed that a good correlation does not exist between the values interpolated by the two methods. The study concludes that in crystalline basaltic terrain, interpolation methods must be verified against changes in the geological profiles.

  15. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and the use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform the interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of the boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
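
    The core of the two-boundary technique is Hermite cubic blending between the bottom and top curves, with slope vectors controlling how grid lines leave each boundary. A minimal sketch under those definitions (array shapes and the choice of slope inputs are illustrative):

```python
import numpy as np

def two_boundary_grid(bottom, top, m_bot, m_top, n_eta):
    """Hermite cubic interpolation between two boundary curves.
    `bottom`, `top`, `m_bot`, `m_top` are (n_xi, 2) arrays of points
    and slope vectors; returns an (n_eta, n_xi, 2) grid."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2 * eta**3 - 3 * eta**2 + 1      # position weight, bottom
    h01 = -2 * eta**3 + 3 * eta**2         # position weight, top
    h10 = eta**3 - 2 * eta**2 + eta        # slope weight, bottom
    h11 = eta**3 - eta**2                  # slope weight, top
    return h00 * bottom + h01 * top + h10 * m_bot + h11 * m_top
```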

  16. Traffic volume estimation using network interpolation techniques.

    DOT National Transportation Integrated Search

    2013-12-01

    The kriging method is a frequently used interpolation methodology in geography, which enables estimation of unknown values at certain places with consideration of the distances among locations. When it is used in the transportation field, network distanc...

  17. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, which share some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as an interpolation basis.

  18. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...

  19. Novel Digital Signal Processing and Detection Techniques.

    DTIC Science & Technology

    1980-09-01

    decimation and interpolation [11, 12]. Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University. ... on the use of recursive filters for decimation and interpolation. ... filter structure for realizing low-pass filters is developed [6,7]. By employing decimation and interpolation, the filter uses only coefficients 0, +1, and

  20. An objective isobaric/isentropic technique for upper air analysis

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.

    1981-01-01

    An objective meteorological analysis technique is presented whereby both horizontal and vertical upper air analyses are performed. The process used to interpolate grid-point values from the upper-air station data is the same for grid points on an isobaric surface as on a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation for a grid-point potential temperature is performed isobarically, whereas wind, mixing-ratio, and pressure-height values are interpolated from data that lie on the isentropic surface passing through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective hand-drawn analyses. The objective products of version A generally correspond fairly well with the subjective analyses and the station data, and depict the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.

  1. Application of Gaussian Elimination to Determine Field Components within Unmeasured Regions in the UCN τ Trap

    NASA Astrophysics Data System (ADS)

    Felkins, Joseph; Holley, Adam

    2017-09-01

    Determining the average lifetime of the neutron gives information about the fundamental parameters of interactions resulting from the charged weak current. It is also an input for calculations of the abundance of light elements in the early cosmos, which are also directly measured. Experimentalists have devised two major approaches to measuring the lifetime of the neutron: the beam experiment and the bottle experiment. For the bottle experiment, I have designed a computational algorithm based on a numerical technique that interpolates magnetic field values between measured points. This algorithm produces interpolated fields that satisfy the Maxwell-Heaviside equations, for use in a simulation that will investigate the rate of depolarization in the magnetic traps used in bottle experiments, such as the UCN τ experiment at Los Alamos National Lab. I will present how UCN depolarization can cause a systematic error in experiments like UCN τ. I will then describe the technique that I use for the interpolation, and will discuss how the accuracy of the interpolation changes with the number of measured points and the volume of the interpolated region. Supported by NSF Grant 1553861.

  2. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for the reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces of this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of the recovery are evaluated by the mean and maximum color difference values under other sets of standard light sources. The mean and maximum root mean square (RMS) errors between the reconstructed and actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point of the LUT algorithm, the processing time spent on interpolation of the spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space is preferable to the CIELAB color space. Moreover, the colorimetric position of a desired sample is a key factor in the success of the approach: because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and reconstructed reflectance spectra, as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.
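
    In SciPy, this kind of LUT-with-interpolation recovery can be sketched with a scattered linear interpolator from tristimulus values to full spectra; the training arrays below are random stand-ins for measured chips, and queries outside the convex hull of the training colors return NaN, mirroring the gamut restriction noted above.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Stand-ins for measured data: XYZ of 200 chips and their reflectance
# spectra sampled at 31 wavelengths (400-700 nm, 10 nm steps).
xyz_train = np.random.rand(200, 3)
spectra_train = np.random.rand(200, 31)

recover = LinearNDInterpolator(xyz_train, spectra_train)
spectrum = recover(np.array([[0.4, 0.5, 0.3]]))   # shape (1, 31); NaN outside hull
```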

  3. Soft bilateral filtering volumetric shadows using cube shadow maps

    PubMed Central

    Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang

    2017-01-01

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while conserving the crispness of boundaries. This research presents a new technique for generating high quality volumetric shadows by sampling and interpolation. In contrast to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling when calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is to reduce the number of sample points used in evaluating light scattering and then introduce bilateral interpolation to improve the volumetric shadows, removing significant inherent deficiencies of shadow maps. The technique yields soft volumetric shadows with good performance and high quality, showing its potential for interactive applications. PMID:28632740

  4. Color characterization of cine film

    NASA Astrophysics Data System (ADS)

    Noriega, Leonardo; Morovic, Jan; MacDonald, Lindsay W.; Lempp, Wolfgang

    2002-06-01

    This paper describes the characterization of cine film by identifying the relationship between the Status A density values of positive print film and the XYZ values of conventional colorimetry. Several approaches are tried, including least-squares modeling, tetrahedral interpolation, and distance-weighted interpolation. The distance-weighted technique has been improved by the use of the Mahalanobis distance metric in performing the interpolation, and this is presented as an innovation.

  5. Power transformations improve interpolation of grids for molecular mechanics interaction energies.

    PubMed

    Minh, David D L

    2018-02-18

    A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and an inverse transformation of the result of trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å, and it retains the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
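
    The strategy reduces to transform, interpolate, invert. The sketch below wraps SciPy's trilinear grid interpolator that way; the sign-preserving |E|^(1/p) form is an assumed choice (grid energies can be negative), not necessarily the paper's transformation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def smoothed_grid_interpolator(axes, energies, p=4.0):
    """Power-transform grid energies, interpolate trilinearly,
    then apply the inverse transform to the interpolated value."""
    fwd = lambda e: np.sign(e) * np.abs(e) ** (1.0 / p)
    inv = lambda y: np.sign(y) * np.abs(y) ** p
    interp = RegularGridInterpolator(axes, fwd(energies), method="linear")
    return lambda pts: inv(interp(pts))

# axes = (np.arange(32) * 0.4,) * 3        # 0.4 Angstrom spacing per axis
# energy_at = smoothed_grid_interpolator(axes, grid_energies)
# e = energy_at(np.array([[3.1, 5.7, 8.2]]))
```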

  6. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.

  7. Interpolated twitches in fatiguing single mouse muscle fibres: implications for the assessment of central fatigue

    PubMed Central

    Place, Nicolas; Yamada, Takashi; Bruton, Joseph D; Westerblad, Håkan

    2008-01-01

    An electrically evoked twitch during a maximal voluntary contraction (twitch interpolation) is frequently used to assess central fatigue. In this study we used intact single muscle fibres to determine if intramuscular mechanisms could affect the force increase with the twitch interpolation technique. Intact single fibres from flexor digitorum brevis of NMRI mice were dissected and mounted in a chamber equipped with a force transducer. Free myoplasmic [Ca2+] ([Ca2+]i) was measured with the fluorescent Ca2+ indicator indo-1. Seven fibres were fatigued with repeated 70 Hz tetani until 40% initial force with an interpolated pulse evoked every fifth tetanus. Results showed that the force generated by the interpolated twitch increased throughout fatigue, being 9 ± 1% of tetanic force at the start and 19 ± 1% at the end (P < 0.001). This was not due to a larger increase in [Ca2+]i induced by the interpolated twitch during fatigue but rather to the fact that the force–[Ca2+]i relationship is sigmoidal and fibres entered a steeper part of the relationship during fatigue. In another set of experiments, we observed that repeated tetani evoked at 150 Hz resulted in more rapid fatigue development than at 70 Hz and there was a decrease in force (‘sag’) during contractions, which was not observed at 70 Hz. In conclusion, the extent of central fatigue is difficult to assess and it may be overestimated when using the twitch interpolation technique. PMID:18403421
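
    The mechanism described above (the same Ca2+ increment yields a larger force increment on the steep part of a sigmoidal curve) is easy to reproduce numerically. The Hill-curve parameters below are arbitrary illustrative values, not the measured fibre data.

```python
import numpy as np

def hill_force(ca, ca50=1.0, n=4.0):
    """Sigmoidal force-[Ca2+] relationship, normalized to 0..1."""
    return ca**n / (ca50**n + ca**n)

d_ca = 0.1  # same twitch-evoked [Ca2+]i increment in both states
for label, ca in [("fresh (near saturation)", 2.5), ("fatigued (steep region)", 1.1)]:
    gain = hill_force(ca + d_ca) - hill_force(ca)
    print(f"{label}: extra force from the interpolated twitch = {gain:.3f}")
```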

  8. The analysis of composite laminated beams using a 2D interpolating meshless technique

    NASA Astrophysics Data System (ADS)

    Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.

    2018-02-01

    Laminated composite materials are widely used in engineering construction. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.

  9. False colors removal on the YCr-Cb color space

    NASA Astrophysics Data System (ADS)

    Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe

    2009-01-01

    Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and other impairments caused by a non ideal sampling, post-processing techniques are usually more powerful in achieving this purpose. This is mainly because the input of post-processing algorithms is a fully restored RGB color image. Moreover, post-processing can be applied more than once, in order to meet some quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms, in YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.

  10. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
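
    A naive, O(N³) version of GP interpolation with an explicit noise term can be sketched with scikit-learn; the image, sampling mask, and kernel hyperparameters below are toy stand-ins, and the paper's fast grid-structured solver is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rows, cols = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
coords = np.column_stack([rows.ravel(), cols.ravel()])
intensity = np.sin(rows / 3.0) + 0.05 * np.random.randn(16, 16)  # toy image
measured = (rows + cols) % 2 == 0       # toy mosaic: half the pixels observed

# WhiteKernel models the sensor noise; RBF models spatial correlation.
kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(coords[measured.ravel()], intensity[measured])
interpolated = gp.predict(coords).reshape(16, 16)   # denoised, full resolution
```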

  11. Estimation of lunar surface maturity and ferrous oxide from Moon Mineralogy Mapper (M3) data through data interpolation techniques

    NASA Astrophysics Data System (ADS)

    Ajith Kumar, P.; Kumar, Shashi

    2016-04-01

    Surface maturity estimation of the lunar regolith reveals the selenological processes behind the formation of the lunar surface, which may provide vital information regarding the geological evolution of the Earth, since the lunar surface is considered to be 8-9 times older than that of the Earth. Spectral reflectance data from the Moon Mineralogy Mapper (M3), the hyperspectral sensor of Chandrayaan-1, were coupled with the standard weight percentages of FeO from lunar returned samples of the Apollo and Luna landing sites through data interpolation techniques to generate weight-percentage FeO maps of the target lunar locations. Mineral maps were prepared from the interpolated data and the results were analyzed.

  12. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those between interpolation techniques for a specific survey strategy. Strong relationships between local surface topographic variation (defined as the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors were also found, with much greater errors at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.

  13. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.

  14. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.

  15. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at unsampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedence of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper examines these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case-study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and to annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area. In view of the uncertainties with classical techniques, research is ongoing to develop alternative methods which should, in time, help improve the suite of tools available to air quality managers.

  16. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found whereby the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms with a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, a novel software package was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  17. Ab initio potential-energy surfaces for complex, multichannel systems using modified novelty sampling and feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.

    2005-02-01

    A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.

  19. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, namely the Genetic Algorithm (GA). The GA is employed to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation.

  20. Generalized Reduced Order Modeling of Aeroservoelastic Systems

    NASA Astrophysics Data System (ADS)

    Gariffo, James Michael

    Transonic aeroelastic and aeroservoelastic (ASE) modeling presents a significant technical and computational challenge. Flow fields with a mixture of subsonic and supersonic flow, as well as moving shock waves, can only be captured through high-fidelity CFD analysis. With modern computing power, it is relatively straightforward to determine the flutter boundary for a single structural configuration at a single flight condition, but problems of larger scope remain quite costly. Some such problems include characterizing a vehicle's flutter boundary over its full flight envelope, optimizing its structural weight subject to aeroelastic constraints, and designing control laws for flutter suppression. For all of these applications, reduced-order models (ROMs) offer substantial computational savings. ROM techniques in general have existed for decades, and the methodology presented in this dissertation builds on successful previous techniques to create a powerful new scheme for modeling aeroelastic systems, and predicting and interpolating their transonic flutter boundaries. In this method, linear ASE state-space models are constructed from modal structural and actuator models coupled to state-space models of the linearized aerodynamic forces through feedback loops. Flutter predictions can be made from these models through simple eigenvalue analysis of their state-transition matrices for an appropriate set of dynamic pressures. Moreover, this analysis returns the frequency and damping trend of every aeroelastic branch. In contrast, determining the critical dynamic pressure by direct time-marching CFD requires a separate run for every dynamic pressure being analyzed simply to obtain the trend for the critical branch. The present ROM methodology also includes a new model interpolation technique that greatly enhances the benefits of these ROMs. This enables predictions of the dynamic behavior of the system for flight conditions where CFD analysis has not been explicitly performed, thus making it possible to characterize the overall flutter boundary with far fewer CFD runs. A major challenge of this research is that transonic flutter boundaries can involve multiple unstable modes of different types. Multiple ROM-based studies on the ONERA M6 wing are shown indicating that in addition to classic bending-torsion (BT) flutter modes, which become unstable above a threshold dynamic pressure after two natural modes become aerodynamically coupled, some natural modes are able to extract energy from the air and become unstable by themselves. These single-mode instabilities tend to be weaker than the BT instabilities, but have near-zero flutter boundaries (exactly zero in the absence of structural damping). Examples of hump modes, which behave like natural mode instabilities before stabilizing, are also shown, as are cases where multiple instabilities coexist at a single flight condition. The result of all these instabilities is a highly sensitive flutter boundary, where small changes in Mach number, structural stiffness, and structural damping can substantially alter not only the stability of individual aeroelastic branches, but also which branch is critical. Several studies are shown presenting how the flutter boundary varies with respect to all three of these parameters, as well as the number of structural modes used to construct the ROMs. Finally, an investigation of the effectiveness and limitations of the interpolation scheme is presented.
It is found that in regions where the flutter boundary is relatively smooth, the interpolation method produces ROMs that predict the flutter characteristics of the corresponding directly computed models to a high degree of accuracy, even for relatively coarsely spaced data. On the other hand, in the transonic dip region, the interpolated ROMs show significant errors at points where the boundary changes rapidly; however, they still give a good qualitative estimate of where the largest jumps occur.
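
    The eigenvalue-based flutter scan described above can be illustrated with a toy model. The sketch below is not the dissertation's ROM; it assumes only that the ASE state matrix depends affinely on dynamic pressure q, and scans q for the first eigenvalue crossing into the right half-plane. All matrices are hypothetical two-degree-of-freedom stand-ins.

```python
# Minimal sketch (not the dissertation's ROM): flutter detection by eigenvalue
# analysis of a linear aeroelastic state matrix assumed affine in dynamic
# pressure q. The 2-DOF matrices below are illustrative stand-ins.
import numpy as np

def flutter_scan(A_struct, A_aero, q_values):
    """Max real part of the eigenvalues of A(q) = A_struct + q*A_aero."""
    return np.array([np.linalg.eigvals(A_struct + q * A_aero).real.max()
                     for q in q_values])

# Toy pitch-plunge section, states [h, theta, h_dot, theta_dot]
K = np.diag([100.0, 225.0])              # uncoupled stiffness (10 and 15 rad/s)
C = np.diag([0.2, 0.1])                  # light structural damping
Ka = np.array([[0.0, -1.5],              # illustrative aerodynamic coupling
               [0.5,  0.0]])
Z, I = np.zeros((2, 2)), np.eye(2)
A_struct = np.block([[Z, I], [-K, -C]])
A_aero = np.block([[Z, Z], [-Ka, Z]])

q_grid = np.linspace(0.0, 150.0, 600)
g = flutter_scan(A_struct, A_aero, q_grid)
unstable = g > 0.0
print("flutter onset near q =", q_grid[unstable][0] if unstable.any() else None)
# Each eigenvalue's real part gives the damping trend of one aeroelastic branch.
```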

  1. Interpolation/extrapolation technique with application to hypervelocity impact of space debris

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1992-01-01

    A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the database are automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.

  2. [Monitoring the thermal plume from a coastal nuclear power plant using satellite remote sensing data: modeling and validation].

    PubMed

    Zhu, Li; Zhao, Li-Min; Wang, Qiao; Zhang, Ai-Ling; Wu, Chuan-Qing; Li, Jia-Guo; Shi, Ji-Xiang

    2014-11-01

    The thermal plume from a coastal nuclear power plant is a small-scale phenomenon, and monitoring it requires remote sensing data of high frequency and high spatial resolution. The infrared scanner (IRS) on board HJ-1B has an infrared channel, IRS4, with a spatial resolution of 300 m and a revisit time of 4 days. Remote sensing data acquired by IRS4 are therefore an available source for monitoring thermal plumes. A retrieval scheme for coastal sea surface temperature (SST) was built to monitor the thermal plume from the nuclear power plant. The study area is located near the Guangdong Daya Bay Nuclear Power Station (GNPS), where synchronized validations were also carried out. National Centers for Environmental Prediction (NCEP) data were interpolated spatially and temporally. The interpolated data, together with surface weather conditions, were then fed into a radiative transfer model for the atmospheric correction of the IRS4 thermal image. A look-up table (LUT) was built for the inversion between IRS4 channel radiance and radiometric temperature, and a fitted function was also derived from the LUT data for the same purpose. The SST was finally retrieved based on the preprocessing procedures mentioned above. The bulk temperature (BT) of 84 samples distributed near GNPS was collected synchronously by ship using conductivity-temperature-depth (CTD) instruments. The discrete sample data were surface-interpolated and compared with the satellite-retrieved SST. Results show that the average BT over the study area is 0.47 degrees C higher than the retrieved skin temperature (ST). For areas far away from the outfall, the ST is higher than the BT, with differences of less than 1.0 degrees C; the main driving force for temperature variations in these regions is solar radiation. For areas near the outfall, on the contrary, the retrieved ST is lower than the BT, and differences greater than 1.0 degrees C occur closer to the outfall. In this case, the convective heat transfer resulting from the thermal plume is the primary cause of the temperature variations. Temperature rise (TR) distributions obtained from remote sensing data and in-situ measurements are consistent, except that the interpolated BT shows more TR levels (> 5) than the ST (up to 4 levels). The areas with higher TR levels (> 2) are larger on the BT maps, while for lower TR levels (≤ 2) the two methods show no obvious differences. Minimal errors for satellite-derived SST occur regularly around 10 a.m. local time, making the remote sensing results suitable substitutes for in-situ measurements at that time. Therefore, for operational applications of HJ-1B IRS4, remote sensing can be a practical approach to monitoring nuclear plant thermal pollution around this time period.

  3. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and image interpolation techniques are therefore needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method yields a statistically significant improvement over linear and shape-based interpolation. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
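
    A minimal sketch of the intensity-averaging step only, assuming the optical-flow registration has already produced a velocity field (vy, vx) mapping the first slice onto the second; both slices are warped toward the intermediate position and their intensities averaged. Function and variable names are illustrative.

```python
# Sketch of flow-based slice interpolation (registration step assumed done):
# the intermediate slice at fraction t is the weighted average of both
# neighbors sampled along the known correspondences.
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_slice(img0, img1, vy, vx, t=0.5):
    """img0, img1: adjacent slices; (vy, vx): flow from img0 to img1 (pixels)."""
    yy, xx = np.mgrid[0:img0.shape[0], 0:img0.shape[1]].astype(float)
    # Sample img0 backward along a fraction t of the flow, img1 along (1 - t).
    a = map_coordinates(img0, [yy - t * vy, xx - t * vx], order=1, mode='nearest')
    b = map_coordinates(img1, [yy + (1 - t) * vy, xx + (1 - t) * vx],
                        order=1, mode='nearest')
    return (1 - t) * a + t * b

# Example: two slices related by a known 3-pixel horizontal shift
img0 = np.zeros((64, 64)); img0[24:40, 20:36] = 1.0
img1 = np.roll(img0, 3, axis=1)
vy = np.zeros_like(img0)
vx = np.full_like(img0, 3.0)          # flow assumed known, for the demo only
mid = interpolate_slice(img0, img1, vy, vx, t=0.5)
```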

  4. Precipitation interpolation in mountainous areas

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumed 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by (a) fitting a precipitation lapse rate as an external drift, and (b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Although cross-validation reduces the effective gauge density, the results suggest that we are far from able to provide hydrological models with adequate data for their main driving force.

  5. A novel power harmonic analysis method based on Nuttall-Kaiser combination window double spectrum interpolated FFT algorithm

    NASA Astrophysics Data System (ADS)

    Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.

    2017-11-01

    Harmonics pose a great threat to the safe and economical operation of power grids. It is therefore critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for power harmonic analysis. However, the barrier effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often degrade the accuracy of harmonic analysis. This paper presents a new approach to harmonic analysis based on derived correction formulas for frequency, phase angle, and amplitude, using the Nuttall-Kaiser combination window double-spectrum-line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
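
    The double-spectrum-line idea can be sketched as follows. A Hann window is used here as a stand-in for the paper's Nuttall-Kaiser combination window: for Hann, the two-line frequency correction has the simple closed form used below, while the paper derives analogous formulas for its own window.

```python
# Sketch of two-line interpolated FFT under asynchronous sampling, with a Hann
# window (illustrative stand-in for the Nuttall-Kaiser combination window).
import numpy as np

fs, N = 5000.0, 2048                 # f0*N/fs is non-integer: asynchronous sampling
t = np.arange(N) / fs
x = 2.0 * np.cos(2 * np.pi * 50.3 * t + 0.7) + 0.4 * np.cos(2 * np.pi * 150.9 * t)

win = np.hanning(N)
mag = np.abs(np.fft.rfft(x * win))

k = np.argmax(mag[1:]) + 1           # coarse peak bin of the fundamental
# Two-line correction: use the larger neighbor; for Hann, the offset delta in
# [-0.5, 0.5] satisfies delta = (2*beta - 1) / (beta + 1), beta = |X[k+1]|/|X[k]|.
if mag[k + 1] >= mag[k - 1]:
    beta = mag[k + 1] / mag[k]
    delta = (2 * beta - 1) / (beta + 1)
else:
    beta = mag[k - 1] / mag[k]
    delta = -(2 * beta - 1) / (beta + 1)

f_est = (k + delta) * fs / N
# Amplitude: divide by the window's spectral magnitude at offset delta.
n = np.arange(N)
Wd = abs(np.sum(win * np.exp(2j * np.pi * delta * n / N)))
a_est = 2 * mag[k] / Wd
print(f"f = {f_est:.4f} Hz (true 50.3), amplitude = {a_est:.4f} (true 2.0)")
```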

  6. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490

  7. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities are presented, acquired using four different high-precision angle encoder/interpolator/rotary table combinations. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty, by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  8. Development and validation of segmentation and interpolation techniques in sinograms for metal artifact suppression in CT.

    PubMed

    Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob

    2010-02-01

    Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
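
    A minimal sketch of the sinogram-domain interpolation step. The paper's MRF segmentation is replaced here by a plain threshold purely for illustration, and the final step of adding back a scaled metal signal is omitted.

```python
# Sketch: for every projection angle, detector bins flagged as metal are
# replaced by linear interpolation from the surrounding non-metal bins.
import numpy as np

def suppress_metal(sinogram, metal_threshold):
    """sinogram: (n_angles, n_bins) raw projections."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i, row in enumerate(sinogram):
        metal = row > metal_threshold      # stand-in for the MRF segmentation
        if metal.any() and not metal.all():
            out[i, metal] = np.interp(bins[metal], bins[~metal], row[~metal])
    return out

# Tiny demo with a synthetic bright metal trace
sino = np.random.default_rng(8).normal(1.0, 0.05, size=(180, 256))
sino[:, 120:128] += 8.0
clean = suppress_metal(sino, metal_threshold=4.0)
```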

  9. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
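
    For reference, the core of unweighted transfinite interpolation is the classical Coons-patch formula, sketched below on a quarter annulus; GENIE's weighted blending and Bézier/spline boundary handling are not shown.

```python
# Sketch of 2D transfinite interpolation (Coons patch): a grid is built from
# four boundary curves by blending opposite boundaries and subtracting the
# bilinear corner term.
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Boundary points as (n,2) / (m,2) arrays; shared corners must match."""
    m, n = left.shape[0], bottom.shape[0]
    s = np.linspace(0.0, 1.0, n)[None, :, None]    # xi, along bottom/top
    t = np.linspace(0.0, 1.0, m)[:, None, None]    # eta, along left/right
    B, T = bottom[None, :, :], top[None, :, :]
    L, R = left[:, None, :], right[:, None, :]
    return ((1 - t) * B + t * T + (1 - s) * L + s * R
            - ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
               + (1 - s) * t * top[0] + s * t * top[-1]))   # shape (m, n, 2)

# Example: a quarter annulus between radii 1 and 2
th = np.linspace(0, np.pi / 2, 33)
bottom = np.c_[np.cos(th), np.sin(th)]             # inner arc
top = 2 * np.c_[np.cos(th), np.sin(th)]            # outer arc
left = np.c_[np.linspace(1, 2, 17), np.zeros(17)]  # radial edge, theta = 0
right = np.c_[np.zeros(17), np.linspace(1, 2, 17)] # radial edge, theta = pi/2
grid = tfi_grid(bottom, top, left, right)
```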

  10. Geometric and shading correction for images of printed materials using boundary.

    PubMed

    Brown, Michael S; Tsoi, Yau-Chat

    2006-06-01

    A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.

  11. The analysis of decimation and interpolation in the linear canonical transform domain.

    PubMed

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect-reconstruction expressions for differential filters in the LCT domain are presented as an application. The results in this study form a basis for generalizing multirate signal processing to the LCT domain and can advance filter bank theory in the LCT domain.

  12. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy for 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and remained accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error of manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.

  13. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins, and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation: precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
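
    A minimal sketch of the two-step idea, with synthetic data and sklearn models standing in for the authors' exact formulation: a logistic regression predicts wet/dry occurrence, and a separate regression fitted on wet days predicts the (log) amount.

```python
# Sketch of two-step precipitation estimation: occurrence, then wet-day amount.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                 # e.g. lon, lat, elevation (toy)
wet = (X[:, 2] + 0.3 * rng.standard_normal(500)) > 0.5
amount = np.where(wet,
                  np.exp(1.0 + 2.0 * X[:, 2] + 0.2 * rng.standard_normal(500)),
                  0.0)

occ = LogisticRegression().fit(X, wet)                       # step 1: occurrence
amt = LinearRegression().fit(X[wet], np.log(amount[wet]))    # step 2: amount

X_new = rng.uniform(size=(5, 3))
p_wet = occ.predict_proba(X_new)[:, 1]
est = np.where(p_wet > 0.5, np.exp(amt.predict(X_new)), 0.0) # hard occurrence cut
print(est)
```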

  14. A novel interpolation approach for the generation of 3D-geometric digital bone models from image stacks

    PubMed Central

    Mittag, U.; Kriechbaumer, A.; Rittweger, J.

    2017-01-01

    The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415

  15. Spacecraft Orbit Anomaly Representation Using Thrust-Fourier-Coefficients with Orbit Determination Toolbox

    NASA Astrophysics Data System (ADS)

    Ko, H.; Scheeres, D.

    2014-09-01

    Representing spacecraft orbit anomalies between two separate states is a challenging but important problem for achieving space situational awareness for an active spacecraft. Such a capability could play an essential role in analyzing satellite behavior as well as in estimating the trajectory of the space object. A general way to deal with the anomaly problem is to add an estimated perturbing acceleration, such as dynamic model compensation (DMC), to an orbit determination process based on pre- and post-anomaly tracking data. Finding valid coefficients to compensate for the unknown dynamics of the anomaly is a time-consuming numerical process. Even if the orbit determination filter with DMC can crudely estimate an unknown acceleration, this approach does not capture any fundamental element of the unknown dynamics for a given anomaly. In this paper, a new way of representing a spacecraft anomaly using an interpolation technique with Thrust-Fourier-Coefficients (TFCs) is introduced, and several anomaly cases are studied using this interpolation method. It provides a very efficient way of reconstructing the fundamental elements of the dynamics for a given spacecraft anomaly. Any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver using an interpolation technique with the TFCs. Given unconnected orbit states between two epochs due to a spacecraft anomaly, it is possible to obtain a unique control law using the TFCs that generates the desired secular behavior for the given orbital changes. This interpolation technique can capture the fundamental elements of combined unmodeled anomaly events. The interpolated orbit trajectory, using the TFCs to compensate for a given anomaly, can be used to improve the quality of orbit fits through the anomaly period and therefore help to obtain a good orbit determination solution after the anomaly. The Orbit Determination Toolbox (ODTBX) is modified to incorporate this technique in order to verify the performance of the interpolation approach. Spacecraft anomaly cases are based on single or multiple low- or high-thrust maneuvers, and the unknown thrust accelerations are recovered and compared with the true thrust accelerations. The advantage of this approach is that the TFCs and their dynamics can easily be appended to the pre-built ODTBX, which enables us to blend post-anomaly tracking data to improve the performance of the interpolation representation in the absence of detailed information about a maneuver. It allows us to improve space situational awareness in the areas of uncertainty propagation, anomaly characterization, and track correlation.

  16. Topological and canonical kriging for design flood prediction in ungauged catchments: an improvement over a traditional regional regression approach?

    USGS Publications Warehouse

    Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.

    2013-01-01

    In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.

  17. Polynomial Interpolation and Sums of Powers of Integers

    ERIC Educational Resources Information Center

    Cereceda, José Luis

    2017-01-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2,…, k, where f_k(1), f_k(2),…, f_k(k) are k arbitrarily chosen…

  18. Application of Design Methodologies for Feedback Compensation Associated with Linear Systems

    NASA Technical Reports Server (NTRS)

    Smith, Monty J.

    1996-01-01

    The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well-behaved closed-loop system in terms of stability and robustness (internal signals remain bounded under a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed-loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology are included to show the effectiveness of these techniques. In particular, the mixed H2-H∞ performance criterion and its algorithm have been used on several examples, including an F-18 HARV (High Angle of Attack Research Vehicle) for sensitivity performance.

  19. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.

  20. LPV Controller Interpolation for Improved Gain-Scheduling Control Performance

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Kim, SungWan

    2002-01-01

    In this paper, a new gain-scheduling control design approach is proposed by combining LPV (linear parameter-varying) control theory with interpolation techniques. The improvement of gain-scheduled controllers is achieved through local synthesis of Lyapunov functions and continuous construction of a global Lyapunov function by interpolation. It is shown that this combined LPV control design scheme is capable of translating local performance improvements into improved closed-loop performance. The gain of the LPV controller also changes continuously across the parameter space. The advantages of the newly proposed LPV control are demonstrated through a detailed AMB controller design example.

  1. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  2. An artificial intelligence tool for complex age-depth models

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.

    2017-12-01

    CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.

  3. An approach to unbiased subsample interpolation for motion tracking.

    PubMed

    McCormick, Matthew M; Varghese, Tomy

    2013-04-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that of parabolic and cosine interpolation methods; the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and from 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement is most significant for small strains and for displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
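
    A one-dimensional sketch of the approach, on synthetic pre/post signals: the integer-lag cross-correlation peak is refined by maximizing a Lanczos-windowed sinc interpolant of the correlation with Nelder-Mead.

```python
# Sketch of reduced-bias subsample displacement estimation (1D, for brevity).
import numpy as np
from scipy.optimize import minimize
from scipy.signal import correlate

def lanczos_sinc_interp(c, x, a=4):
    """Evaluate a Lanczos-windowed sinc interpolant of samples c at real x."""
    k0 = int(np.floor(x))
    k = np.arange(k0 - a + 1, k0 + a + 1)
    k = k[(k >= 0) & (k < len(c))]
    u = x - k
    w = np.sinc(u) * np.sinc(u / a)        # sinc kernel with Lanczos window
    return np.sum(c[k] * w)

rng = np.random.default_rng(1)
pre = rng.standard_normal(256)
true_shift = 5.37                           # subsample displacement to recover
n = np.arange(256)
post = np.interp(n - true_shift, n, pre)    # shifted copy of the signal

c = correlate(post, pre, mode='full')
k_peak = np.argmax(c)                       # coarse integer-lag peak
res = minimize(lambda s: -lanczos_sinc_interp(c, s[0]),
               x0=[float(k_peak)], method='Nelder-Mead')
shift_est = res.x[0] - (len(pre) - 1)       # convert index to lag
print(f"estimated shift = {shift_est:.3f} (true {true_shift})")
```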

  4. The effect of administrative boundaries and geocoding error on cancer rates in California.

    PubMed

    Goldberg, Daniel W; Cockburn, Myles G

    2012-04-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

    The growing string method is a powerful tool for the systematic study of chemical reactions with theoretical methods, allowing rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).
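
    A minimal sketch of an LST-style node generation step in the spirit of Halgren and Lipscomb: the intermediate geometry minimizes the deviation of its interatomic distances from linearly interpolated reactant/product distances (with the characteristic 1/r^4 weighting), plus a weak Cartesian tether. The coordinates and weights are illustrative toys, not the growing-string code.

```python
# Sketch of linear-synchronous-transit-style interpolation between geometries.
import numpy as np
from scipy.optimize import minimize

def lst_node(R, P, f):
    """R, P: (n_atoms, 3) reactant/product coordinates; f in (0, 1)."""
    n = len(R)
    iu = np.triu_indices(n, k=1)
    dR = np.linalg.norm(R[iu[0]] - R[iu[1]], axis=1)
    dP = np.linalg.norm(P[iu[0]] - P[iu[1]], axis=1)
    d_target = (1 - f) * dR + f * dP          # interpolated pair distances
    x_lin = (1 - f) * R + f * P               # Cartesian guess / tether anchor

    def objective(x):
        X = x.reshape(n, 3)
        d = np.linalg.norm(X[iu[0]] - X[iu[1]], axis=1)
        # 1/r^4 weighting: close pairs matter most
        dist_term = np.sum((d - d_target) ** 2 / d_target ** 4)
        tether = 1e-3 * np.sum((X - x_lin) ** 2)
        return dist_term + tether

    res = minimize(objective, x_lin.ravel(), method='L-BFGS-B')
    return res.x.reshape(n, 3)

# Toy example: three atoms opening an angle
R = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
P = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [-0.5, 1.1, 0.0]])
mid = lst_node(R, P, 0.5)
```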

  6. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    PubMed

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted public concern about the health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
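
    A minimal sketch of the extension approach with a k-d tree: time becomes a third coordinate scaled by a factor c, scipy's cKDTree finds the k nearest space-time neighbors, and IDW weights combine their values. The scale factor, k, and power are illustrative, not the paper's calibrated settings.

```python
# Sketch of k-d tree accelerated spatiotemporal IDW interpolation.
import numpy as np
from scipy.spatial import cKDTree

def idw_spacetime(obs_xy, obs_t, obs_val, query_xy, query_t,
                  c=1.0, k=8, power=2.0):
    pts = np.column_stack([obs_xy, c * obs_t])   # extend space with scaled time
    tree = cKDTree(pts)
    q = np.column_stack([query_xy, c * query_t])
    dist, idx = tree.query(q, k=k)
    dist = np.maximum(dist, 1e-12)               # avoid division by zero
    w = 1.0 / dist ** power
    return np.sum(w * obs_val[idx], axis=1) / np.sum(w, axis=1)

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(1000, 2))          # station coordinates (toy)
t = rng.uniform(0, 365, size=1000)                # day of year
pm25 = 10 + 0.05 * xy[:, 0] + 2 * np.sin(t / 58.0) + rng.normal(0, 0.5, 1000)
est = idw_spacetime(xy, t, pm25, np.array([[50.0, 50.0]]), np.array([180.0]))
print(est)
```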

  7. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    PubMed Central

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted public concern about the health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results. PMID:25192146

  8. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both in theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
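
    A minimal dense-matrix sketch of ordinary kriging with covariance tapering: an exponential covariance is multiplied by a compactly supported Wendland taper, which zeroes most entries; a production implementation would assemble sparse matrices and use iterative solvers, as the paper does. All parameters are illustrative.

```python
# Sketch of tapered ordinary kriging on a small synthetic data set.
import numpy as np
from scipy.spatial.distance import cdist

def wendland_taper(h, theta):
    """Compactly supported taper: zero beyond range theta."""
    r = np.clip(h / theta, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def ordinary_kriging(pts, vals, query, sill=1.0, rng_=10.0, taper=5.0):
    H = cdist(pts, pts)
    C = sill * np.exp(-H / rng_) * wendland_taper(H, taper)
    h0 = cdist(pts, query)
    c0 = sill * np.exp(-h0 / rng_) * wendland_taper(h0, taper)
    n = len(pts)
    # Ordinary kriging saddle system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    b = np.vstack([c0, np.ones((1, query.shape[0]))])
    w = np.linalg.solve(A, b)[:n]
    return vals @ w

rng = np.random.default_rng(3)
pts = rng.uniform(0, 50, size=(200, 2))
vals = np.sin(pts[:, 0] / 8.0) + 0.1 * rng.standard_normal(200)
print(ordinary_kriging(pts, vals, np.array([[25.0, 25.0]])))
```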

  9. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered on how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general differ from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
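
    A minimal sketch of the comparison: parallel rays are cast through a synthetic volume at an oblique angle and sampled with nearest, linear, or cubic-spline interpolation (scipy's spline of order 3 stands in for the cubic-convolution kernel discussed above), and the maximum along each ray is kept.

```python
# Sketch of MIP ray tracing with different interpolation kernels.
import numpy as np
from scipy.ndimage import map_coordinates

def mip(volume, angle_deg, order, n_steps=200):
    """MIP onto the (y, x) plane for rays tilted by angle in the z-x plane."""
    nz, ny, nx = volume.shape
    a = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    proj = np.full((ny, nx), -np.inf)
    for si in np.linspace(0, nz - 1, n_steps):   # march along the rays
        coords = [np.full_like(yy, si),          # z
                  yy,                            # y
                  xx + np.tan(a) * si]           # x drifts with depth
        proj = np.maximum(proj, map_coordinates(volume, coords, order=order))
    return proj

vol = np.zeros((64, 64, 64))
vol[20:44, 30:34, 30:34] = 1.0                   # synthetic "vessel"
for order, name in [(0, 'nearest'), (1, 'linear'), (3, 'cubic spline')]:
    p = mip(vol, 15.0, order)
    width = (p.max(axis=0) > 0.5).sum()          # apparent width in pixels
    print(name, width)
```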

  11. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station spacing leads to low-resolution images and sometimes prevents imaging of regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at unsampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion struggles when arrival times differ strongly between adjacent sites, which is our main concern, and we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. We then use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at unsampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into unsampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.
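
    A minimal sketch of the sparsity-promotion core only (the filtering and time-shift stages of FSIS are omitted): iterative soft thresholding of wavelet coefficients, re-inserting the observed samples at each iteration, using PyWavelets on a 1D synthetic signal.

```python
# Sketch of wavelet-domain sparsity-promotion interpolation of missing samples.
import numpy as np
import pywt

def sparse_interp(data, observed_mask, n_iter=100, lam=0.1, wavelet='db4'):
    x = np.where(observed_mask, data, 0.0)
    for _ in range(n_iter):
        coeffs = pywt.wavedec(x, wavelet, mode='periodization')
        coeffs = [pywt.threshold(c, lam, mode='soft') for c in coeffs]
        x = pywt.waverec(coeffs, wavelet, mode='periodization')
        x[observed_mask] = data[observed_mask]   # enforce observed data
    return x

rng = np.random.default_rng(4)
n = 512
true = np.sin(2 * np.pi * np.arange(n) / 64) * np.exp(-np.arange(n) / 300)
mask = rng.uniform(size=n) > 0.5                 # half the sites unobserved
rec = sparse_interp(true, mask)
print("max error at unobserved sites:", np.abs(rec - true)[~mask].max())
```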

  12. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
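
    For reference, the two classical regularizers can be sketched on a generic ill-conditioned least-squares problem (this is not an NUFFT implementation): TSVD truncates the singular spectrum, while Tikhonov damps it.

```python
# Sketch of TSVD and Tikhonov regularization of a pseudoinverse.
import numpy as np

def pinv_tsvd(A, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

def pinv_tikhonov(A, lam):
    """Tikhonov-regularized pseudoinverse: damp all singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s ** 2 + lam)) @ U.T

A = np.vander(np.linspace(0, 1, 12), 8)      # deliberately ill-conditioned
print("cond(A) =", np.linalg.cond(A))
rng = np.random.default_rng(5)
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-6 * rng.standard_normal(12)
for name, Ai in [('TSVD', pinv_tsvd(A, 6)), ('Tikhonov', pinv_tikhonov(A, 1e-8))]:
    print(name, 'error:', np.linalg.norm(Ai @ b - x_true))
```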

  13. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
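
    A minimal sketch of the fitting idea on a synthetic thermodynamic-integration curve sampled at non-equidistant lambda values: a least-squares polynomial and a cubic spline are each fitted to dF/dlambda and integrated over [0, 1] to estimate the free energy difference.

```python
# Sketch: fit TI data with a polynomial and a spline, then integrate.
import numpy as np
from scipy.interpolate import CubicSpline

lam = np.array([0.0, 0.05, 0.15, 0.3, 0.5, 0.7, 0.85, 0.95, 1.0])
noise = 0.02 * np.random.default_rng(6).standard_normal(lam.size)
dFdl = 5.0 * lam ** 3 - 3.0 * lam + 1.0 + noise   # synthetic <dU/dlambda>

# Polynomial regression (least squares), then analytic integration
p = np.polynomial.Polynomial.fit(lam, dFdl, deg=4)
dF_poly = p.integ()(1.0) - p.integ()(0.0)

# Cubic spline interpolation, then exact piecewise integration
cs = CubicSpline(lam, dFdl)
dF_spline = cs.integrate(0.0, 1.0)

print(f"poly: {dF_poly:.4f}, spline: {dF_spline:.4f}, exact: {5/4 - 3/2 + 1:.4f}")
```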

  14. A panoramic imaging system based on fish-eye lens

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Hao, Chenyang

    2017-10-01

    Panoramic imaging has attracted close attention as one of the major technologies for AR and VR. Mainstream panoramic imaging techniques include fish-eye lenses, image stitching, and catadioptric imaging systems, and fish-eye lenses are also widely used in wide-area video surveillance. Fish-eye lenses are easy to operate and cost less, but correcting their image distortion has always been a very important topic. In this paper, the image calibration algorithm for fish-eye lenses is studied by comparing nearest-neighbor, bilinear, and bicubic interpolation, which are used to correct the images.

  15. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid)

  16. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the value of temperature at unallocated meteorological stations in Peninsular Malaysia using data of year 2010 collected from the Malaysian Meteorological Department. Two point-based interpolation methods, Inverse Distance Weighting (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable for use as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
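
    A minimal IDW sketch in Python (the station coordinates, temperatures and power parameter below are placeholders for illustration, not the 2010 data):

      import numpy as np

      def idw(stations, values, targets, power=2.0):
          # Weighted average of station values with weights 1 / distance**power.
          d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
          d = np.maximum(d, 1e-12)        # guard against zero distance at a station
          w = d ** -power
          return (w @ values) / w.sum(axis=1)

      stations = np.array([[101.7, 3.1], [100.3, 5.4], [103.3, 1.5]])  # lon, lat
      temps = np.array([27.4, 26.8, 27.9])                             # deg C
      estimate = idw(stations, temps, np.array([[102.2, 2.2]]))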

  17. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    NASA Astrophysics Data System (ADS)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, the approach still lacks facial expressions driven by emotional state. Facial skin colour is needed to enhance the effect of facial expressions, since it is closely related to human emotion. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are almost identical to genuine human expressions and also enhance the facial expression of the virtual human.
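
    The two blending operations are simple; a minimal sketch (the RGB tones and blend factors are hypothetical, and the paper's actual mapping from emotion to colour is more elaborate):

      def lerp(c0, c1, t):
          # Linear interpolation between two RGB colours, t in [0, 1].
          return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))

      def bilerp(c00, c10, c01, c11, s, t):
          # Bilinear blend of four corner colours over two parameters (s, t).
          return lerp(lerp(c00, c10, s), lerp(c01, c11, s), t)

      neutral, flushed = (224, 172, 105), (255, 96, 64)   # hypothetical skin tones
      skin = lerp(neutral, flushed, 0.6)                  # 60% toward the flushed tone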

  18. Mechanical analysis of non-uniform bi-directional functionally graded intelligent micro-beams using modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy

    2018-05-01

    This study develops a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam is assumed to have linear, exponential and parabolic cross-section variation along its length, using power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton’s principle. Governing equations are solved using a mixed finite element method with the Lagrangian interpolation technique, Gaussian quadrature method and Wilson’s Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in nonuniform microbeams, the mechanical behavior of such structures can be affected noticeably, and the scale parameter has a significant effect in changing the rigidity of nonuniform bi-directional functionally graded beams.

  19. S-Boxes Based on Affine Mapping and Orbit of Power Function

    NASA Astrophysics Data System (ADS)

    Khan, Mubashar; Azam, Naveed Ahmed

    2015-06-01

    The demand for data security against computational attacks such as algebraic, differential, linear and interpolation attacks has increased as a result of rapid advancement in the field of computation. It is, therefore, necessary to develop cryptosystems which can resist current cryptanalysis and future computational attacks. In this paper, we present a multiple S-box scheme based on affine mapping and the orbit of the power function used in the Advanced Encryption Standard (AES). The proposed technique results in 256 different S-boxes named orbital S-boxes. Rigorous tests and comparisons are performed to analyse the cryptographic strength of each of the orbital S-boxes. Furthermore, grayscale images are encrypted by using multiple orbital S-boxes. Results and simulations show that the encryption strength of the orbital S-boxes against computational attacks is better than that of existing S-boxes.

  20. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

    Being a key player in Space Weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, the ionosphere has to be monitored. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is by using dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region in space. However, the TEC data derived from GPS measurements contain measurement noise, model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states. In this way, the interpolation performance of the techniques can be compared over many parameters that can be controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Network (MLP-NN), and Radial Basis Function Neural Network (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are initially tried on synthetic TEC surfaces for parameter and coefficient optimization and determination of error bounds. The interpolation performance of these methods is compared on synthetic TEC surfaces over the parameters of sampling pattern, number of samples, variability of the surface and trend type in the TEC surfaces. By examining the performance of the interpolation methods, it is observed that Kriging, RFP and NN each have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (like the choice of the semivariogram function for Kriging algorithms and the hidden layer and neuron numbers for MLP-NN) mostly depends on the behavior of the ionosphere at the given time instant for the desired region. The sampling pattern and number of samples are the other important parameters that may contribute to higher errors in reconstruction. For example, for all of the above listed algorithms, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and the performance significantly degrades as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using the IONOLAB-TEC data (www.ionolab.org). Both Kriging combined with a Kalman Filter and dynamic modeling of NN are also implemented as first trials of TEC and space weather predictions.
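
    As a small illustration of one family in the list above (a hedged sketch: the station coordinates and TEC values are invented, and the kernel and shape parameter would have to be optimized as described), a multiquadric RBF map can be built with SciPy:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      pts = np.array([[26.0, 38.5], [29.1, 40.2], [32.8, 39.9],
                      [35.4, 37.0], [30.5, 36.8]])          # lon, lat of GPS stations
      tec = np.array([18.2, 17.5, 19.1, 21.4, 20.7])        # TECU at each station

      rbf = RBFInterpolator(pts, tec, kernel='multiquadric', epsilon=1.0)
      lon, lat = np.meshgrid(np.linspace(26, 36, 101), np.linspace(36, 41, 51))
      tec_map = rbf(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lat.shape)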

  1. Summary on several key techniques in 3D geological modeling.

    PubMed

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.

  2. A look-up-table digital predistortion technique for high-voltage power amplifiers in ultrasonic applications.

    PubMed

    Gao, Zheng; Gui, Ping

    2012-07-01

    In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
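
    A toy sketch of the cycle-by-cycle LUT update (the tanh compression standing in for the class-AB PA, the table size and the step size are assumptions, not the measured amplifier):

      import numpy as np

      n = 256                                  # samples per sinusoid cycle
      ideal = np.sin(2 * np.pi * np.arange(n) / n)
      lut = np.zeros(n)                        # additive predistortion table, one entry per sample

      for _ in range(50):                      # successive signal cycles
          pa_out = np.tanh(1.5 * (ideal + lut)) / np.tanh(1.5)  # toy PA compression model
          err = ideal - pa_out                 # difference between ideal and distorted output
          lut += 0.5 * err                     # LMS-style update of the LUT memory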

  3. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing the various methods of permanent stations' velocity residua interpolation in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three packages of software were used in the research from the point of view of interpolation: GMT ( The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in the softwares: the Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocities' values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole area of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method of such data interpolation was developed. All the mentioned methods were tested for being local or global, for the possibility to compute errors of the interpolated values, for explicitness and fidelity of the interpolation functions or the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results in the least way). Alternately, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites. Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.

  4. An Approach to Unbiased Subsample Interpolation for Motion Tracking

    PubMed Central

    McCormick, Matthew M.; Varghese, Tomy

    2013-01-01

    Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced-bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder–Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that of other parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and from 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique. PMID:23493609
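
    A sketch of the windowed-sinc evaluation at a fractional sample position (one-dimensional for brevity; the article works in two dimensions, and the numerical optimizer that searches this function for the similarity peak is omitted):

      import numpy as np

      def lanczos_interp(samples, t, a=4):
          # Evaluate the sampled signal at fractional index t with a
          # Lanczos-windowed sinc kernel of radius a (four samples here).
          n0 = int(np.floor(t))
          idx = np.arange(n0 - a + 1, n0 + a + 1)
          w = np.sinc(t - idx) * np.sinc((t - idx) / a)
          return float(np.dot(samples[np.clip(idx, 0, samples.size - 1)], w))

    An optimizer such as scipy.optimize.minimize_scalar can then locate the subsample extremum of a similarity function built on this interpolant.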

  5. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    PubMed

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
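
    The smoothing scheme itself is compact; a minimal sketch with hypothetical k-mer probability tables (the paper's feature extraction and training procedure are not reproduced here):

      def interpolated_prob(seq, i, p_uni, p_bi, p_tri, lambdas=(0.2, 0.3, 0.5)):
          # Linear interpolation smoothing: blend unigram, bigram and trigram
          # estimates so unseen higher-order contexts never get zero probability.
          # Assumes i >= 2 and lambdas summing to 1.
          l1, l2, l3 = lambdas
          return (l1 * p_uni.get(seq[i], 1e-9)
                  + l2 * p_bi.get(seq[i-1:i+1], 0.0)
                  + l3 * p_tri.get(seq[i-2:i+1], 0.0))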

  6. Application of validation data for assessing spatial interpolation methods for 8-h ozone or other sparsely monitored constituents.

    PubMed

    Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat

    2013-07-01

    The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
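
    A compact ordinary-kriging sketch matching the setup above (exponential variogram with unit sill and a single calibrated range parameter; coordinates and values are placeholders): it returns both the estimate and the kriging variance from which confidence intervals follow:

      import numpy as np

      def ok_predict(xy, z, x0, rng_param):
          # Ordinary kriging with exponential variogram g(h) = 1 - exp(-h / range).
          g = lambda h: 1.0 - np.exp(-h / rng_param)
          n = z.size
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = g(np.linalg.norm(xy[:, None] - xy[None, :], axis=2))
          A[n, n] = 0.0                            # Lagrange-multiplier entry
          b = np.append(g(np.linalg.norm(xy - x0, axis=1)), 1.0)
          w = np.linalg.solve(A, b)
          return w[:n] @ z, w @ b                  # estimate, kriging variance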

  7. Selection of a Geostatistical Method to Interpolate Soil Properties of the State Crop Testing Fields using Attributes of a Digital Terrain Model

    NASA Astrophysics Data System (ADS)

    Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.

    2018-03-01

    The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR)—were examined. The results of the study were compiled into an algorithm for choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes unjustified. In that case, multiple linear regression should be used instead.

  8. Sandia Higher Order Elements (SHOE) v 0.5 alpha

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.

  9. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye to chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  10. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
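
    A sketch of the winning variant described above (one-sided zero padding of the windowed data before the inverse FFT; the window length and padding factor are illustrative):

      import numpy as np

      def refined_peak(windowed, factor=16):
          # Zero-pad on one side before the inverse FFT so the correlation
          # is sampled on a grid 'factor' times finer than the original.
          padded = np.concatenate([windowed, np.zeros((factor - 1) * len(windowed))])
          corr = np.abs(np.fft.ifft(padded))
          return np.argmax(corr) / factor       # peak position in original-sample units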

  11. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  12. A New Method for Computed Tomography Angiography (CTA) Imaging via Wavelet Decomposition-Dependented Edge Matching Interpolation.

    PubMed

    Li, Zeyu; Chen, Yimin; Zhao, Yan; Zhu, Lifeng; Lv, Shengqing; Lu, Jiahui

    2016-08-01

    The interpolation technique for computed tomography angiography (CTA) images provides the ability for 3D reconstruction, while reducing the detection cost and the amount of radiation. However, most image interpolation algorithms cannot achieve both automation and accuracy. This study provides a new edge matching interpolation algorithm based on wavelet decomposition of CTA images. It includes mark, scale and calculation (MSC). Using real clinical image data, this study mainly introduces how to search for the proportional factor and use the root-mean-square operator to find a mean value. Furthermore, we resynthesize the high-frequency and low-frequency parts of the processed image by the inverse wavelet operation and obtain the final interpolated image. MSC can make up for the shortcomings of conventional computed tomography (CT) and magnetic resonance imaging (MRI) examinations. The radiation absorbed and the examination time with the proposed synthesized image were significantly reduced. In clinical application, it can help doctors find hidden lesions in time, while patients bear a lower economic burden and absorb less radiation.

  13. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.

  14. Adaptive color demosaicing and false color removal

    NASA Astrophysics Data System (ADS)

    Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria

    2010-04-01

    Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensors data, such as noise and green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides the interpolation technique to apply to each pixel, according to its neighborhood analysis. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Thus flat regions are identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.

  15. Air quality mapping using GIS and economic evaluation of health impact for Mumbai City, India.

    PubMed

    Kumar, Awkash; Gupta, Indrani; Brandt, Jørgen; Kumar, Rakesh; Dikshit, Anil Kumar; Patil, Rashmi S

    2016-05-01

    Mumbai, a highly populated city in India, has been selected for air quality mapping and assessment of health impact using monitored air quality data. Air quality monitoring networks in Mumbai are operated by the National Environmental Engineering Research Institute (NEERI), the Maharashtra Pollution Control Board (MPCB), and the Brihanmumbai Municipal Corporation (BMC). A monitoring station represents air quality at a particular location, while spatial variation is needed for air quality management. Here, monitored air quality data from NEERI and BMC were spatially interpolated using various inbuilt interpolation techniques of ArcGIS. Inverse distance weighting (IDW), Kriging (spherical and Gaussian), and spline techniques were applied for spatial interpolation in this study. The interpolated results for the air pollutants sulfur dioxide (SO2), nitrogen dioxide (NO2) and suspended particulate matter (SPM) were compared with air quality data of MPCB in the same region. Comparison of results showed good agreement between observed data and values predicted using IDW and Kriging. Subsequently, health impact assessment of a ward was carried out based on the total population of the ward and the air quality data monitored within the ward. Finally, the health cost within a ward was estimated on the basis of the exposed population. This study helps to estimate the valuation of health damage due to air pollution. Operating more air quality monitoring stations for measurement of air quality is highly resource intensive in terms of time and cost. Appropriate spatial interpolation techniques can be used to estimate concentrations where air quality monitoring stations are not available. Further, health impact assessment for the population of the city and estimation of the economic cost of health damage due to ambient air quality can help to make rational control strategies for environmental management. The total health cost for Mumbai city for the year 2012, with a population of 12.4 million, was estimated as USD8000 million.

  16. Application of cokriging techniques for the estimation of hail size

    NASA Astrophysics Data System (ADS)

    Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier

    2018-01-01

    There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.

  17. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques. Kriging and the Random Forest classifier are examples of predictors with this aim. The objective of this work was to compare methods for spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested by using morphometric covariables, and by geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained by using a terrestrial laser scanner. DTMs were generated from a point cloud with spatial resolutions of 1, 5, 10, 20 and 30 m. From these, 40 morphometric covariates were generated. Simple Kriging was performed using the R software package. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites through the Random Forest interpolator. Little difference was observed between the maps generated by the Simple Kriging and Random Forest interpolators. Also, DTMs with better spatial resolution did not improve the quality of soil attribute prediction. Results revealed that Simple Kriging can be used as the interpolator when morphometric covariates are not available, with little impact on quality. It is necessary to go further in soil chemical attribute prediction techniques, especially in periglacial areas with complex landforms.

  18. Polynomial-interpolation algorithm for van der Pauw Hall measurement in a metal hydride film

    NASA Astrophysics Data System (ADS)

    Koon, D. W.; Ares, J. R.; Leardini, F.; Fernández, J. F.; Ferrer, I. J.

    2008-10-01

    We apply a four-term polynomial-interpolation extension of the van der Pauw Hall measurement technique to a 330 nm Mg-Pd bilayer during both absorption and desorption of hydrogen at room temperature. We show that standard versions of the van der Pauw DC Hall measurement technique produce an error of over 100% due to a drifting offset signal and can lead to unphysical interpretations of the physical processes occurring in this film. The four-term technique effectively removes this source of error, even when the offset signal is drifting by an amount larger than the Hall signal in the time interval between successive measurements. This technique can be used to increase the resolution of transport studies of any material in which the resistivity is rapidly changing, particularly when the material is changing from metallic to insulating behavior.
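
    The underlying correction can be sketched as follows (a generic illustration of drift removal by polynomial interpolation, not the paper's exact four-term measurement sequence): offset-only readings bracketing the Hall reading are fitted with a polynomial, and the interpolated offset at the Hall measurement time is subtracted:

      import numpy as np

      def drift_corrected_hall(t_off, v_off, t_hall, v_hall, deg=3):
          # Fit a polynomial through offset-only readings taken around the
          # Hall reading (deg=3 needs four such terms), then subtract the
          # interpolated offset at the Hall measurement time.
          coef = np.polyfit(t_off, v_off, deg)
          return v_hall - np.polyval(coef, t_hall)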

  19. Summary on Several Key Techniques in 3D Geological Modeling

    PubMed Central

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029

  20. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze their architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.

  1. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC...represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic...of orthogonal polynomials [34,38] or sparse grid approximations [39–41]. It is well known that the global polynomial interpolation cannot resolve lo

  2. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation’s threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  3. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

    PubMed

    Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

    2014-12-01

    Quantitative computer tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computer tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Autoregressive linear least square single scanning electron microscope image signal-to-noise ratio estimation.

    PubMed

    Sim, Kok Swee; NorHisham, Syafiq

    2016-11-01

    A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point to estimate the SNR value of the corrupted image. The LSR technique is then compared with the three previously existing techniques known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique is able to attain the highest accuracy compared to the other three existing techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
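
    A generic sketch of how such an ACF-based estimate can work (illustrative only, not the authors' exact LSR formulation): noise contributes a spike only at zero lag of the autocorrelation, so the noise-free peak can be extrapolated from neighbouring lags by least squares and converted into an SNR:

      import numpy as np

      def snr_from_acf(image, fit_lags=5):
          # Full autocorrelation is O(n^2); meant for small images/illustration.
          x = image.ravel().astype(float)
          x -= x.mean()
          acf = np.correlate(x, x, mode='full')[x.size - 1:]
          acf = acf / acf[0]                    # normalized ACF, acf[0] == 1
          lags = np.arange(1, fit_lags + 1)
          slope, intercept = np.polyfit(lags, acf[1:fit_lags + 1], 1)
          p = intercept                         # extrapolated noise-free peak at lag 0
          return p / (1.0 - p)                  # signal variance over noise variance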

  5. Mapping wildfire effects on Ca2+ and Mg2+ released from ash. A microplot analisis.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Úbeda, Xavier; Martin, Deborah

    2010-05-01

    Wildland fires have important implications for ecosystem dynamics. Their effects depend on many biophysical factors, mainly the species burned, the ecosystem affected, the amount and spatial distribution of fuel, relative humidity, slope, aspect and residence time. These parameters are heterogeneous across the landscape, producing a complex mosaic of severities, and fire impacts can change rapidly even over short distances, producing high spatial variation at the microplot scale. After a fire, the most visible residue is ash, whose physical and chemical properties are of major importance because it holds the majority of the nutrients available to plants. It is therefore important to study ash characteristics in order to determine the type and amount of elements available to plants. This study focuses on the spatial variability of two nutrients essential to plant growth, Ca2+ and Mg2+, released from ash after a wildfire at the microplot scale. Because fire impacts are highly variable even over small distances, mapping the release of the studied elements is difficult, and identifying the least biased interpolation method is a priority in order to predict the variables under study with high accuracy. The aim of this study is to map the effects of wildfire on these elements released from ash at the microplot scale, testing several interpolation methods. Sixteen interpolation techniques were tested: Inverse Distance Weighting (IDW) with weights of 1, 2, 3, 4 and 5; Local Polynomial with powers of 1 (LP1) and 2 (LP2); Polynomial Regression (PR); the Radial Basis Functions Spline With Tension (SPT), Completely Regularized Spline (CRS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ) and Thin Plate Spline (TPS); and the geostatistical Kriging family methods Ordinary Kriging (OK), Simple Kriging (SK) and Universal Kriging (UK). The interpolation techniques were assessed through the Mean Error (ME) and Root Mean Square Error (RMSE) obtained from the cross-validation procedure applied to all methods. The fire occurred in Portugal, near an urban area; inside the affected area we designed a grid of 9 x 27 m and collected 40 samples. Before modelling the data, we tested their normality with the Shapiro-Wilk test. Since the distributions of Ca2+ and Mg2+ did not follow a Gaussian distribution, the data were transformed logarithmically (ln); the transformed data respected normality and the spatial distribution was modelled with the transformed data. On average across the entire plot, the ash slurries contained 4371.01 mg/l of Ca2+, with a high coefficient of variation (CV%) of 54.05%. Of all the tested methods, LP1 was the least biased and hence the most accurate for interpolating this element, while LP2 was the most biased. For Mg2+, considering the entire plot, the ash released in solution on average 1196.01 mg/l, with a CV% of 52.36%, similar to that of Ca2+. The best interpolator in this case was SK, and the most biased were LP1 and TPS. Comparing all methods for both elements, the quality of the interpolations was higher for Ca2+. These results allowed us to conclude that achieving the best prediction requires testing a wide range of interpolation methods.
The best accuracy will allow us to understand with more precision where the studied elements are more available and accessible for plant growth and ecosystem recovery. The spatial pattern of both nutrients is related to ash pH and burn severity, evaluated from ash colour and CaCO3 content. These aspects are also discussed in the work.

  6. A Novel Stimulus Artifact Removal Technique for High-Rate Electrical Stimulation

    PubMed Central

    Heffer, Leon F; Fallon, James B

    2008-01-01

    Electrical stimulus artifacts corrupting electrophysiological recordings often make the subsequent analysis of the underlying neural response difficult. This is particularly evident when investigating short-latency neural activity in response to high-rate electrical stimulation. We developed and evaluated an off-line technique for the removal of stimulus artifact from electrophysiological recordings. Pulsatile electrical stimulation was presented at rates of up to 5000 pulses/s during extracellular recordings of guinea pig auditory nerve fibers. Stimulus artifact was removed by replacing the sample points at each stimulus artifact event with values interpolated along a straight line, computed from neighbouring sample points. This technique required only that artifact events be identifiable and that the artifact duration remain less than both the inter-stimulus interval and the time course of the action potential. We have demonstrated that this computationally efficient sample-and-interpolate technique removes the stimulus artifact with minimal distortion of the action potential waveform. We suggest that this technique may have potential applications in a range of electrophysiological recording systems. PMID:18339428
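
    A minimal sketch of the sample-and-interpolate step (the event indices and artifact width in samples are assumed to be known, as the article requires):

      import numpy as np

      def remove_stimulus_artifacts(signal, event_indices, width):
          # Replace 'width' samples after each stimulus event with a straight
          # line joining the last clean sample before and the first clean
          # sample after; assumes 1 <= e and e + width < len(signal).
          out = signal.astype(float).copy()
          for e in event_indices:
              line = np.linspace(out[e - 1], out[e + width], width + 2)
              out[e:e + width] = line[1:-1]
          return out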

  7. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    PubMed

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including ordinary least square regression residual and GWR residual) had strong spatial autocorrelation. Interpolation accuracies by different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative improvement (RI) of 20.63. GWRK showed a similar tendency, having its ME, MAE and RMSE to be respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK did in this study, and its improved performance should be attributed to the consideration of sample spatial locations.

  8. Real-Time Fourier Synthesis of Ensembles with Timbral Interpolation

    NASA Astrophysics Data System (ADS)

    Haken, Lippold

    1990-01-01

    In Fourier synthesis, natural musical sounds are produced by summing time-varying sinusoids. Sounds are analyzed to find the amplitude and frequency characteristics for their sinusoids; interpolation between the characteristics of several sounds is used to produce intermediate timbres. An ensemble can be synthesized by summing all the sinusoids for several sounds, but in practice it is difficult to perform such computations in real time. To solve this problem on inexpensive hardware, it is useful to take advantage of the masking effects of the auditory system. By avoiding the computations for perceptually unimportant sinusoids, and by employing other computation reduction techniques, a large ensemble may be synthesized in real time on the Platypus signal processor. Unlike existing computation reduction techniques, the techniques described in this thesis do not sacrifice independent fine control over the amplitude and frequency characteristics of each sinusoid.

  9. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level. © 2011 IEEE

  10. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.

  11. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  12. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.

    2009-01-01

    A unique, digital time pick-off method, known as sample-interpolation timing (SIT) is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s-1. Events arising from the 7Li(p, n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.

  13. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  14. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation

    PubMed Central

    Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.

    2011-01-01

    Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of interpolation’s effect on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data, and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline and cubic B-spline interpolations significantly improved the sensitivity of small polyp detection. Conclusions: The image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029

  15. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method.

    PubMed

    Sun, Weifang; Yao, Bin; He, Yuchao; Chen, Binqiang; Zeng, Nianyin; He, Wangpeng

    2017-08-09

    Power generation using waste-gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information using neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method exhibits suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at early stages.
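
    The record does not give the correction formulas, but the general idea of estimating a tone's parameters from interpolated Fourier bins can be sketched with the classic parabolic interpolation of the three log-magnitude bins around a spectral peak; the signal and sampling parameters below are invented.

```python
# Generic interpolated-DFT estimator: refine a spectral peak by parabolic
# interpolation of the three log-magnitude bins around it (invented test
# signal; not the active frequency shift correction itself).
import numpy as np

fs, n = 5000.0, 4096                       # sample rate (Hz), record length
t = np.arange(n) / fs
x = 2.5 * np.sin(2 * np.pi * 147.3 * t) + 0.1 * np.random.randn(n)

spec = np.fft.rfft(x * np.hanning(n))
mag = np.abs(spec)
k = np.argmax(mag[1:-1]) + 1               # coarse peak bin

a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)    # fractional-bin offset
print(f"estimated frequency: {(k + delta) * fs / n:.2f} Hz (true 147.30 Hz)")
```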

  16. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

    The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
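
    As a minimal sketch of the same hold-out workflow (with scipy's nearest/linear/cubic gridding as stand-ins for the Kriging, MQ, TPS, and IDP methods compared above), one can score each interpolator by RMSE on withheld synthetic soundings:

```python
# Hold-out RMSE comparison of gridding methods on synthetic scattered
# soundings (scipy's interpolators stand in for the four methods above).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(400, 2))                  # scattered soundings
depth = np.sin(pts[:, 0]) + 0.5 * np.cos(2 * pts[:, 1])  # synthetic seabed

train, test = pts[:320], pts[320:]
z_train, z_test = depth[:320], depth[320:]

for method in ("nearest", "linear", "cubic"):
    z_hat = griddata(train, z_train, test, method=method)
    ok = ~np.isnan(z_hat)                 # linear/cubic are NaN outside the hull
    rmse = np.sqrt(np.mean((z_hat[ok] - z_test[ok]) ** 2))
    print(f"{method:8s} RMSE = {rmse:.4f}")
```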

  17. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, which relies on an interpolation approach, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike price/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND with a fourth-order polynomial is more appropriate than with a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
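
    A hedged sketch of the volatility-function idea: fit a fourth-order polynomial to an invented implied-volatility smile, price calls from it, and recover the RND through the Breeden-Litzenberger relation (the discounted second strike-derivative of the call price); the smile, rate, and maturity below are assumptions, not the DJIA data used in the study.

```python
# Fit a fourth-order polynomial to an invented implied-volatility smile,
# price calls from it, and apply Breeden-Litzenberger: the RND is the
# discounted second strike-derivative of the call price.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

S0, r, T = 100.0, 0.01, 1.0 / 12.0                      # one-month maturity
K_obs = np.linspace(80, 120, 15)
iv_obs = 0.20 + 0.002 * ((K_obs - 100.0) / 10.0) ** 2   # synthetic smile

coef = np.polyfit(K_obs, iv_obs, 4)                     # fourth-order polynomial fit
K = np.linspace(82, 118, 400)
calls = bs_call(S0, K, T, r, np.polyval(coef, K))

dK = K[1] - K[0]
rnd = np.exp(r * T) * np.gradient(np.gradient(calls, dK), dK)
print(f"recovered density integrates to ~{np.trapz(rnd, K):.3f}")
```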

  18. The geostatistic-based spatial distribution variations of soil salts under long-term wastewater irrigation.

    PubMed

    Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe

    2014-10-01

    The purpose of this study was to determine and evaluate spatial changes in soil salinity by using geostatistical methods. The study focused on a suburban area of Beijing, where urban development led to water shortage and accelerated the reuse of wastewater for farm irrigation over more than 30 years. The data were processed in a GIS using three different interpolation techniques: ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). The normality test and overall trend analysis were applied for each interpolation technique to select the best-fitted model for each soil parameter. Results showed that OK was suitable for interpolating soil sodium adsorption ratio (SAR) and Na(+); UK was suitable for soil Cl(-) and pH; DK was suitable for soil Ca(2+). The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that the areas of non-saline soil and slightly saline soil accounted for 6.39 and 93.61%, respectively. The spatial distribution and accumulation of soil salt were significantly affected by the irrigation probabilities and drainage situation under long-term wastewater irrigation.

  19. Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record

    USGS Publications Warehouse

    Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.

    2000-01-01

    Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution for input into regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, which use the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduced frequency of low-pressure centres as they move eastward across southern Alaska during the summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.
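
    A minimal sketch of the spline idea, assuming synthetic stations: scipy's thin-plate-spline interpolator with (x, y, elevation) as a three-dimensional predictor space stands in for the partial-spline climate model described above.

```python
# Thin-plate-spline interpolation with elevation as a third coordinate
# (synthetic stations; a stand-in for the partial-spline climate model).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(60, 2))            # station locations (km)
elev = rng.uniform(0, 1500, size=(60, 1))          # station elevations (m)
temp = 10 - 0.0065 * elev[:, 0] - 0.01 * xy[:, 1]  # lapse rate + latitude trend

coords = np.hstack([xy, elev])                     # (x, y, z) predictor space
tps = RBFInterpolator(coords, temp, kernel="thin_plate_spline", smoothing=1.0)

query = np.array([[500.0, 500.0, 800.0]])          # grid cell with DEM elevation
print(f"interpolated temperature: {tps(query)[0]:.2f} degC")
```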

  20. Numerical simulation of supersonic and hypersonic inlet flow fields

    NASA Technical Reports Server (NTRS)

    Mcrae, D. Scott; Kontinos, Dean A.

    1995-01-01

    This report summarizes the research performed by North Carolina State University and NASA Ames Research Center under Cooperative Agreement NCA2-719, 'Numerical Simulation of Supersonic and Hypersonic Inlet Flow Fields'. Four distinct rotated upwind schemes were developed and investigated to determine accuracy and practicality. The scheme found to have the best combination of attributes, including reduction to grid-aligned upwinding with no rotation, was the cell-centered non-orthogonal (CCNO) scheme. In 2D, the CCNO scheme improved rotation when flux interpolation was extended to second order. In 3D, improvements were less dramatic in all cases, with second-order flux interpolation showing the least improvement over grid-aligned upwinding. The reduction in improvement is attributed to uncertainty in determining the optimum rotation angle and to the difficulty of performing accurate and efficient interpolation of the angle in 3D. The CCNO rotational technique will prove very useful for increasing accuracy when second-order interpolation is not appropriate and will materially improve inlet flow solutions.

  1. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to this high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into a conventional Bayer pattern image, which is then converted into the final color image using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs becomes perceptible in images reconstructed with the proposed method. PMID:28657602

  2. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to this high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into a conventional Bayer pattern image, which is then converted into the final color image using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs becomes perceptible in images reconstructed with the proposed method.

  3. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    USGS Publications Warehouse

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, often using a covariate such as elevation (digital elevation model, DEM) to improve interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g., precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison with other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability in high-elevation regions, such as the Sierra Nevada Mountains.

  4. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter is developed to enhance scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all data needed for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with the Savitzky-Golay and weighted least squares error method, is developed. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter proves to be significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
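
    The two named ingredients can be sketched on a synthetic noisy scan line (the weighted least squares error weighting of the paper is omitted here, and the window and polynomial order are invented):

```python
# Savitzky-Golay smoothing followed by cubic-spline interpolation on a
# synthetic scan line (window length and polynomial order are invented).
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
x = np.arange(100)
signal = np.exp(-((x - 50) ** 2) / 200.0)          # idealized feature profile
noisy = signal + 0.05 * rng.standard_normal(x.size)

smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
spline = CubicSpline(x, smoothed)                  # continuous, sub-pixel model
dense = spline(np.linspace(0, 99, 991))            # 0.1-pixel resampling
print(f"RMS error vs clean signal: {np.sqrt(np.mean((smoothed - signal) ** 2)):.4f}")
print(f"dense samples generated: {dense.size}")
```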

  5. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  6. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
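
    The SPH interpolation scheme itself is easy to sketch; the toy below evaluates a 1-D density field from particles with the standard cubic-spline kernel (Py-SPHViewer performs the analogous operation in 2-D/3-D image space):

```python
# 1-D SPH density estimate with the standard cubic-spline kernel
# (invented particle distribution; Py-SPHViewer does this in 2-D/3-D).
import numpy as np

def cubic_spline_kernel(q, h):
    """Normalized 1-D cubic-spline (M4) kernel, q = |r| / h."""
    sigma = 2.0 / (3.0 * h)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 * (1.0 - 0.5 * q),
                   np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

rng = np.random.default_rng(3)
x_p = rng.normal(0.5, 0.1, size=500)       # particle positions
m_p = np.full(500, 1.0 / 500)              # equal masses summing to 1
h = 0.05                                   # smoothing length

grid = np.linspace(0, 1, 200)
q = np.abs(grid[:, None] - x_p[None, :]) / h
density = (m_p[None, :] * cubic_spline_kernel(q, h)).sum(axis=1)
print(f"peak interpolated density: {density.max():.3f}")
```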

  7. Proceedings of the NASA Workshop on Surface Fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1982-01-01

    Surface fitting techniques and their utilization are addressed. Surface representation, approximation, and interpolation are discussed, along with statistical estimation problems associated with surface fitting.

  8. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  9. Algebraic grid generation with corner singularities

    NASA Technical Reports Server (NTRS)

    Vinokur, M.; Lombard, C. K.

    1983-01-01

    A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques - stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDE's are discussed in an Appendix.
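
    The transfinite-interpolation ingredient can be sketched as a basic Coons patch blending four boundary curves; the normal-derivative control and corner-singularity terms of the paper are omitted, and the boundary shapes are invented:

```python
# Basic transfinite interpolation (Coons patch): blend four boundary
# curves into an interior grid; invented boundary shapes.
import numpy as np

def coons_patch(bottom, top, left, right):
    """Blend four boundary curves (arrays of shape (n, 2)) into a mesh."""
    nu, nv = bottom.shape[0], left.shape[0]
    u = np.linspace(0, 1, nu)[:, None, None]
    v = np.linspace(0, 1, nv)[None, :, None]
    # Ruled surfaces between opposite boundaries ...
    ruled_uv = (1 - v) * bottom[:, None, :] + v * top[:, None, :]
    ruled_vu = (1 - u) * left[None, :, :] + u * right[None, :, :]
    # ... minus the bilinear corner correction (the transfinite formula).
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return ruled_uv + ruled_vu - corners

s = np.linspace(0, 1, 21)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)   # curved lower wall
top = np.stack([s, np.ones_like(s)], axis=1)
left = np.stack([np.zeros_like(s), s], axis=1)
right = np.stack([np.ones_like(s), s], axis=1)
grid = coons_patch(bottom, top, left, right)
print(grid.shape)   # (21, 21, 2): physical coordinates of the mesh
```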

  10. Performance of signal-to-noise ratio estimation for scanning electron microscope using autocorrelation Levinson-Durbin recursion model.

    PubMed

    Sim, K S; Lim, M S; Yeap, Z X

    2016-07-01

    A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors. Journal of Microscopy © 2016 Royal Microscopical Society.
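
    The first-order linear interpolation baseline mentioned above rests on a simple idea: extrapolate the autocorrelation from lags 1 and 2 back to lag 0 to estimate the noise-free power, the gap to the measured zero-lag value being the noise power. A sketch on a synthetic scan line (not the ACLDR recursion itself):

```python
# First-order linear interpolation SNR estimate: extrapolate R(1), R(2)
# back to lag 0 for the noise-free power (synthetic line; the ACLDR
# recursion itself is not reproduced here).
import numpy as np

def snr_db(line):
    x = line - line.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
    r0_hat = 2 * r[1] - r[2]         # linear extrapolation to lag 0
    noise_power = r[0] - r0_hat      # uncorrelated noise lives only at lag 0
    return 10 * np.log10(max(r0_hat, 1e-12) / max(noise_power, 1e-12))

rng = np.random.default_rng(4)
clean = np.cumsum(rng.standard_normal(512))          # correlated scan line
noisy = clean + 2.0 * rng.standard_normal(512)
print(f"estimated SNR: {snr_db(noisy):.1f} dB")
```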

  11. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.

  12. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
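
    A toy version of the k-vector construction conveys the idea: index a sorted static database against a straight line through its extremes, then answer range queries with two arithmetic look-ups plus edge trimming (details such as the roundoff-guarded line slope of the published algorithm are simplified away):

```python
# Toy k-vector range search over a sorted static database; query cost is
# independent of database size apart from trimming the candidate block.
import numpy as np

class KVector:
    def __init__(self, data):
        self.s = np.sort(np.asarray(data, dtype=float))
        n = self.s.size
        self.m = (self.s[-1] - self.s[0]) / (n - 1)   # line through the extremes
        self.q = self.s[0] - self.m
        line = self.m * np.arange(1, n + 1) + self.q
        # k[j] = number of sorted elements strictly below the line at j.
        self.k = np.searchsorted(self.s, line, side="left")

    def range_query(self, lo, hi):
        """Indices into the sorted array with lo <= value <= hi."""
        jlo = min(max(int(np.floor((lo - self.q) / self.m)) - 1, 0), self.k.size - 1)
        jhi = int(np.ceil((hi - self.q) / self.m))
        start = self.k[jlo]
        stop = self.s.size if jhi >= self.k.size else self.k[max(jhi, 0)]
        block = self.s[start:stop]                    # candidate block
        keep = (block >= lo) & (block <= hi)          # trim the edges
        return start + np.nonzero(keep)[0]

rng = np.random.default_rng(5)
kv = KVector(rng.uniform(0, 100, 100000))
print(f"{kv.range_query(20.0, 20.5).size} values in [20.0, 20.5]")
```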

  13. Effects of empty bins on image upscaling in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2017-07-01

    This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme was developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods mentioned.

  14. Measurements of Wave Power in Wave Energy Converter Effectiveness Evaluation

    NASA Astrophysics Data System (ADS)

    Berins, J.; Berins, J.; Kalnacs, A.

    2017-08-01

    The article is devoted to a low-cost technical solution for measuring water-surface gravity-wave oscillations and to the theoretical justification of the calculated oscillation power. The solution combines technologies such as lasers, digital processing of web-camera images, interpolation of functions sampled at irregular intervals, and discrete Fourier transformation for calculating the spectrum.

  15. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e., nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most usual. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.

  16. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    PubMed

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach to this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method that combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms and demonstrated to achieve state-of-the-art results.

  17. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction.

    PubMed

    Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan

    2017-04-04

    Image super-resolution using a self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images while still preserving enough structure of the image, such as the texture. This paper presents a new single image super-resolution method based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via the optimal fractional-order gradient is first constructed according to image similarity, and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom that helps optimize the implementation quality, because an extra free parameter, the α-order, is used. The proposed method is able to produce rich texture detail while still maintaining structural similarity, even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using the different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method. PMID:24000283

  19. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    PubMed

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using the different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.

  20. Applicability of Various Interpolation Approaches for High Resolution Spatial Mapping of Climate Data in Korea

    NASA Astrophysics Data System (ADS)

    Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S.

    2018-04-01

    The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing with the forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Of the 478 observation points, 80 percent were used for the interpolations and the remaining 120 points for quantification. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.

  1. New real-time algorithms for arbitrary, high precision function generation with applications to acoustic transducer excitation

    NASA Astrophysics Data System (ADS)

    Gaydecki, P.

    2009-07-01

    A system is described for the design, downloading and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit operating at 580 million multiply-accumulate operations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with that of waveforms generated using traditional analogue function generators; this analysis has confirmed that the new instrument has a consistently superior signal-to-noise ratio.
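
    The two generation strategies named above are easy to contrast in software; the sketch below compares a truncating look-up table against look-up with linear interpolation between adjacent entries (a stand-in for the DSP firmware, with an invented table size):

```python
# Truncating table look-up versus look-up with linear interpolation
# between adjacent entries (invented table size; software stand-in for
# the DSP firmware).
import numpy as np

TABLE_SIZE = 1024
lut = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def lut_sample(phase):
    """Truncating look-up (phase in [0, 1))."""
    return lut[(phase * TABLE_SIZE).astype(int) % TABLE_SIZE]

def lut_sample_interp(phase):
    """Look-up with linear interpolation between adjacent entries."""
    pos = phase * TABLE_SIZE
    i0 = pos.astype(int) % TABLE_SIZE
    frac = pos - np.floor(pos)
    return (1 - frac) * lut[i0] + frac * lut[(i0 + 1) % TABLE_SIZE]

phase = np.random.default_rng(6).uniform(0, 1, 100000)
truth = np.sin(2 * np.pi * phase)
print(f"truncating max error:   {np.abs(lut_sample(phase) - truth).max():.2e}")
print(f"interpolated max error: {np.abs(lut_sample_interp(phase) - truth).max():.2e}")
```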

  2. Spatial and spectral interpolation of ground-motion intensity measure observations

    USGS Publications Warehouse

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
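
    A minimal numerical sketch of the conditional-MVN estimator described above, with an invented three-site covariance: the conditional mean is mu_2|1 = mu_2 + S21 S11^{-1} (x_1 - mu_1) and the conditional covariance is S_2|1 = S22 - S21 S11^{-1} S12.

```python
# Conditional multivariate normal: estimate an unobserved log-IM from two
# observed ones, with an invented exponential spatial covariance.
import numpy as np

def mvn_condition(mu, cov, obs_idx, x_obs):
    """Condition an MVN on observed components; return (indices, mean, cov)."""
    idx = np.asarray(obs_idx)
    rest = np.setdiff1d(np.arange(mu.size), idx)
    s11 = cov[np.ix_(idx, idx)]
    s21 = cov[np.ix_(rest, idx)]
    s22 = cov[np.ix_(rest, rest)]
    w = s21 @ np.linalg.inv(s11)
    return rest, mu[rest] + w @ (x_obs - mu[idx]), s22 - w @ s21.T

dists = np.array([[0.0, 5.0, 20.0], [5.0, 0.0, 15.0], [20.0, 15.0, 0.0]])
cov = 0.36 * np.exp(-dists / 10.0)      # sigma^2 = 0.36, 10 km correlation range
mu = np.full(3, -1.0)                   # ground-motion-model mean log IM
rest, mu_c, cov_c = mvn_condition(mu, cov, [0, 1], np.array([-0.6, -0.8]))
print(f"site {rest[0]}: mean {mu_c[0]:.3f}, sd {np.sqrt(cov_c[0, 0]):.3f}")
```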

  3. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, thus resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, following a self-similarity analysis of wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied for a multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is applied to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower operational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  4. Adaptive Multilinear Tensor Product Wavelets

    DOE PAGES

    Weiss, Kenneth; Lindstrom, Peter

    2015-08-12

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. Finally, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.

  5. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with airborne laser bathymetry data in order to populate data points in all areas, particularly those with data holes. The second step generates an elevation surface by spatial interpolation based on the merged data points obtained in the first step. We conducted a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the ocean floor with this method. Four interpolation techniques, including Kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, this enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.

  6. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method, originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required by the original ETA.

  7. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

    This paper discusses the use of piecewise interpolation methods based on cubic Ball and Bézier curve representations to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset is collected at Alor Setar Meteorology Station, obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of our proposed method with existing methods using the Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD), based on simulated missing-value datasets. The results show that our method outperforms the previous methods.
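
    The same interpolation spirit (piecewise cubics with prescribed first derivatives at interval ends) can be sketched with scipy's Hermite form in place of the paper's cubic Ball and Bézier bases; the hourly values below are invented:

```python
# Piecewise cubics with prescribed endpoint derivatives, via scipy's
# Hermite form (a stand-in for the cubic Ball/Bezier bases of the paper);
# invented hourly values with hour 11 missing.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

hours = np.array([8.0, 9.0, 10.0, 12.0, 13.0])             # hour 11 missing
radiation = np.array([150.0, 320.0, 480.0, 610.0, 590.0])  # W/m^2
slopes = np.gradient(radiation, hours)      # first derivatives at the knots

spline = CubicHermiteSpline(hours, radiation, slopes)
print(f"estimated 11:00 solar radiation: {spline(11.0):.1f} W/m^2")
```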

  8. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note, Chadan [Il Nuovo Cimento 39, 697-703 (1965)] expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which correspond to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work considerably continues and improves the results of Chadan and other related studies. Several worked examples are given, with illustrations and comparisons with existing methods.

  9. Polynomial interpolation and sums of powers of integers

    NASA Astrophysics Data System (ADS)

    Cereceda, José Luis

    2017-02-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2, ..., k, where f_k(1), f_k(2), ..., f_k(k) are k arbitrarily chosen (real or complex) values. Then, we focus on the case that f_k(n) is given by the sum of powers of the first n positive integers, S_k(n) = 1^k + 2^k + ... + n^k, and show that S_k(n) admits the polynomial representations S_k(n) = P_k(n) and S_k(n) = Q_k(n) for all n = 1, 2, ..., and k ≥ 1, where the first representation involves the Eulerian numbers, and the second one the Stirling numbers of the second kind. Finally, we consider yet another polynomial formula for S_k(n), alternative to the well-known formula of Bernoulli.
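
    The claim is easy to check numerically: a degree-(k + 1) polynomial interpolated through k + 2 samples of the power sum reproduces S_k(n) for every n (shown below for k = 3).

```python
# Degree-(k + 1) interpolating polynomial through k + 2 samples of the
# power sum reproduces S_k(n) everywhere (checked here for k = 3).
import numpy as np
from scipy.interpolate import lagrange

k = 3
n_nodes = np.arange(1, k + 3)                       # k + 2 nodes, degree k + 1
s_nodes = [sum(i**k for i in range(1, n + 1)) for n in n_nodes]
poly = lagrange(n_nodes, s_nodes)

n_test = 20
direct = sum(i**k for i in range(1, n_test + 1))
print(f"P({n_test}) = {poly(n_test):.0f}, direct sum = {direct}")   # both 44100
```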

  10. Empirical modeling of environment-enhanced fatigue crack propagation in structural alloys for component life prediction

    NASA Technical Reports Server (NTRS)

    Richey, Edward, III

    1995-01-01

    This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated in the DOS executable program UVAFAS. A multiple power law model is available and can fit a set of fatigue data to a multiple power law equation. A model has also been developed that implements the Wei and Landes linear superposition model, as well as an interpolative model that can be used to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).

  11. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method

    PubMed Central

    Sun, Weifang; Yao, Bin; He, Yuchao; Zeng, Nianyin; He, Wangpeng

    2017-01-01

    Power generation using waste-gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information using neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method exhibits suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at early stages. PMID:28792453

  12. A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge-Kutta method

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-08-01

    In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated via a finite difference technique on each local support domain Ωi. On each Ωi, we solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To address this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a fourth-order Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system has to be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the applicable model investigated in the current research work.
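
    A hedged illustration of the RBF-FD ingredient: derivative weights at a center node are obtained from a small local linear system built from an RBF (a Gaussian with an assumed shape parameter and stencil here; the paper's SVD-based shape-parameter algorithm is not reproduced):

```python
# RBF-FD sketch: weights for the Laplacian at a center node from a local
# Gaussian-RBF system (shape parameter and stencil are invented).
import numpy as np

eps = 10.0                                  # assumed shape parameter
phi = lambda r: np.exp(-(eps * r) ** 2)
# Laplacian of the Gaussian RBF in 2-D, as a function of distance r:
lap_phi = lambda r: (4 * eps**4 * r**2 - 4 * eps**2) * np.exp(-(eps * r) ** 2)

h = 0.1                                     # stencil radius
angles = np.linspace(0, 2 * np.pi, 9)[:-1]
nodes = np.vstack([[0.0, 0.0],
                   np.column_stack([h * np.cos(angles), h * np.sin(angles)])])
r_ij = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
A = phi(r_ij)                               # local interpolation matrix
rhs = lap_phi(np.linalg.norm(nodes - nodes[0], axis=1))
w = np.linalg.solve(A, rhs)                 # RBF-FD weights at the center

u = nodes[:, 0] ** 2 + nodes[:, 1] ** 2     # test function with Laplacian 4
print(f"RBF-FD Laplacian estimate: {w @ u:.3f} (exact value is 4)")
```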

  13. TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1994-01-01

    TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.

  14. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

    PubMed

    Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

    2014-04-01

    Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) interpolation than with kriging Gaussian, kriging spherical, or cokriging interpolation when analyzing data from wells across the whole of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas, or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and was higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001), regardless of interpolation method. In regression analysis, the best models are those in which well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW are better than those with kriging in any area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
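
    The leave-one-out loop itself is straightforward; the sketch below runs it for an IDW estimator on synthetic wells (kriging or cokriging predictors would slot into the same loop in place of idw):

```python
# Leave-one-out cross-validation for an IDW estimator on synthetic wells;
# the same loop accommodates kriging or cokriging predictors.
import numpy as np

def idw(train_xy, train_z, query_xy, power=2.0):
    d = np.linalg.norm(train_xy - query_xy, axis=1)
    if np.any(d == 0):                    # query coincides with a well
        return train_z[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * train_z) / np.sum(w)

rng = np.random.default_rng(7)
xy = rng.uniform(0, 100, size=(300, 2))
arsenic = 5 + 0.1 * xy[:, 0] + rng.gamma(2.0, 1.5, size=300)  # synthetic ug/L

preds = np.array([idw(np.delete(xy, i, axis=0), np.delete(arsenic, i), xy[i])
                  for i in range(xy.shape[0])])
r = np.corrcoef(preds, arsenic)[0, 1]
print(f"leave-one-out correlation (IDW, power = 2): {r:.3f}")
```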

  15. Generation of real-time mode high-resolution water vapor fields from GPS observations

    NASA Astrophysics Data System (ADS)

    Yu, Chen; Penna, Nigel T.; Li, Zhenhong

    2017-02-01

    Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.
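
    The sketch below illustrates the decoupling idea in a simplified, non-iterative form: fit an exponential elevation scaling to the zenith delays, interpolate the residual "turbulent" part with IDW, and recombine on the grid. The exponential model and the IDW choice are stand-ins, not the paper's iterative tropospheric decomposition.

```python
import numpy as np

def decompose_and_interpolate(xy, elev, ztd, grid_xy, grid_elev):
    """Separate an elevation-dependent delay from a turbulent residual,
    interpolate each part, and recombine (simplified, non-iterative)."""
    # 1. Fit an exponential height scaling ztd ~ a*exp(-beta*h) via log regression.
    A = np.column_stack([np.ones_like(elev), -elev])
    loga, beta = np.linalg.lstsq(A, np.log(ztd), rcond=None)[0]
    resid = ztd - np.exp(loga)*np.exp(-beta*elev)  # turbulent part at stations
    # 2. IDW-interpolate the residual onto the grid.
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6)**2
    resid_grid = (w @ resid) / w.sum(axis=1)
    # 3. Re-attach the elevation-dependent part at the grid elevations.
    return np.exp(loga)*np.exp(-beta*grid_elev) + resid_grid

# Synthetic stations: coordinates (km), elevations (m), zenith delays (m)
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 150.0, (41, 2))
elev = rng.uniform(0.0, 2000.0, 41)
ztd = 2.4*np.exp(-elev/8000.0) + rng.normal(0.0, 0.01, 41)
gxy = rng.uniform(0.0, 150.0, (10, 2)); gelev = rng.uniform(0.0, 2000.0, 10)
print(decompose_and_interpolate(xy, elev, ztd, gxy, gelev))
```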

  16. Principal Component Analysis with Incomplete Data: A simulation of R pcaMethods package in Constructing an Environmental Quality Index with Missing Data

    EPA Science Inventory

    Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...

  17. Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.

    PubMed

    Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S

    1996-03-01

    An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGer. The same points of maximum spike and voltage distributions at the 29 electrodes were mapped by computerized spline interpolation and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with similar maxima and distributions as the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.
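
    A hedged stand-in for the computerized mapping step: the spline algorithm in FOCUS is proprietary, but a thin-plate-spline radial basis interpolator over 2D-projected electrode positions produces the same kind of voltage surface. The electrode positions and voltages below are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2D-projected electrode positions and spike-peak voltages (µV)
rng = np.random.default_rng(2)
electrodes = rng.uniform(-1.0, 1.0, size=(28, 2))
voltages = rng.normal(0.0, 50.0, size=28)

# Thin-plate-spline interpolation of the potential field over the scalp plane
interp = RBFInterpolator(electrodes, voltages, kernel='thin_plate_spline')
gx, gy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
field = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(64, 64)
```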

  18. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data are increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and for Apple Macintosh computers running MacOS. With minor modifications, the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
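
    A minimal Python sketch of the DATASPACE recipe (the FORTRAN source is not reproduced here): take logarithms of the abscissa, spline-interpolate onto an evenly spaced log grid, and map back, which yields the increasingly spaced abscissa values described above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def log_respace(t, y, n):
    """Resample (t, y) onto n logarithmically spaced abscissa values:
    spline in the log-t domain, then map back. Assumes t > 0, increasing."""
    logt = np.log10(t)
    spline = CubicSpline(logt, y)
    logt_new = np.linspace(logt[0], logt[-1], n)   # evenly spaced in log t
    return 10.0**logt_new, spline(logt_new)        # increasingly spaced in t

# Hypothetical stress-relaxation record with irregular time sampling
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.01, 1000.0, 40))
E = 100.0 * t**-0.1                                # toy relaxation modulus
t_new, E_new = log_respace(t, E, 50)
```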

  19. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    NASA Astrophysics Data System (ADS)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-12-01

    Solar radiation values are recorded by automatic weather stations using a device called a pyranometer. The pyranometer records the incident radiation, and these data are valuable for experimental work and for the development of solar devices. In addition, the modeling and design of solar radiation system applications require complete data records. Unfortunately, gaps in solar radiation data occur frequently because of several technical problems, mainly in the monitoring devices. To address this issue, missing values are estimated so that absent readings can be replaced with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data, and extends the work by investigating the potential of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
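
    A small sketch of the piecewise imputation benchmark, assuming a toy clear-sky series: each SciPy interpolation kind fills the same artificial gaps and is scored against the known values. The paper's Bezier and Said-Ball estimators are not included.

```python
import numpy as np
from scipy.interpolate import interp1d

def impute(hours, values, missing, kind):
    """Fill gaps in an hourly series by 1-D piecewise interpolation; kind is
    any method accepted by SciPy ('linear', 'nearest', 'cubic', ...)."""
    known = ~missing
    f = interp1d(hours[known], values[known], kind=kind)
    filled = values.copy()
    filled[missing] = f(hours[missing])
    return filled

# Toy clear-sky hourly radiation with three dropped interior records
hours = np.arange(24.0)
rad = np.clip(800.0*np.sin(np.pi*(hours - 6.0)/12.0), 0.0, None)
missing = np.zeros(24, dtype=bool); missing[[9, 13, 14]] = True
for kind in ("linear", "nearest", "cubic"):
    est = impute(hours, rad, missing, kind)
    print(kind, np.abs(est[missing] - rad[missing]).round(1))
```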

  20. Using geographical information systems and cartograms as a health service quality improvement tool.

    PubMed

    Lovett, Derryn A; Poots, Alan J; Clements, Jake T C; Green, Stuart A; Samarasundera, Edgar; Bell, Derek

    2014-07-01

    Disease prevalence can be spatially analysed to provide support for service implementation and health care planning; these analyses often display geographic variation. A key challenge is to communicate these results to decision makers, with variable levels of Geographic Information Systems (GIS) knowledge, in a way that represents the data and allows for comprehension. The present research describes the combination of established GIS methods and software tools to produce a novel technique of visualising disease admissions and to help prevent misinterpretation of data and suboptimal decision making. The aim of this paper is to provide a tool that supports the ability of decision makers and service teams within health care settings to develop services more efficiently and better cater to the population; this tool has the advantage of conveying information on the position of populations, the size of populations and the severity of disease. A standard choropleth of the study region, London, is used to visualise total emergency admission values for Chronic Obstructive Pulmonary Disease and bronchiectasis using ESRI's ArcGIS software. Population estimates of the Lower Super Output Areas (LSOAs) are then used with the ScapeToad cartogram software tool, with the aim of visualising geography at uniform population density. An interpolation surface, in this case ArcGIS' spline tool, allows the creation of a smooth surface over the LSOA centroids for admission values on both standard and cartogram geographies. The final product of this research is the novel Cartogram Interpolation Surface (CartIS). The method provides a series of outputs culminating in the CartIS, applying an interpolation surface to a uniform population density. The cartogram effectively equalises the population density to remove visual bias from areas with a smaller population, while maintaining contiguous borders. CartIS decreases the number of extreme positive values not present in the underlying data, as can be found in interpolation surfaces. This methodology provides a technique for combining simple GIS tools to create a novel output, CartIS, in a health service context, with the key aim of improving visualisation communication techniques which highlight variation in small-scale geographies across large regions. CartIS represents the data more faithfully than interpolation, and visually highlights areas of extreme value more than cartograms, when either is used in isolation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Spatiotemporal Interpolation Methods for the Application of Estimating Population Exposure to Fine Particulate Matter in the Contiguous U.S. and a Real-Time Web Application.

    PubMed

    Li, Lixin; Zhou, Xiaolu; Kalo, Marc; Piltner, Reinhard

    2016-07-25

    Appropriate spatiotemporal interpolation is critical to the assessment of relationships between environmental exposures and health outcomes. A powerful assessment of human exposure to environmental agents would incorporate spatial and temporal dimensions simultaneously. This paper compares shape function (SF)-based and inverse distance weighting (IDW)-based spatiotemporal interpolation methods on a data set of PM2.5 data in the contiguous U.S. Particle pollution, also known as particulate matter (PM), is composed of microscopic solids or liquid droplets that are so small that they can get deep into the lungs and cause serious health problems. PM2.5 refers to particles with a mean aerodynamic diameter less than or equal to 2.5 micrometers. Based on the error statistics results of k-fold cross validation, the SF-based method performed better overall than the IDW-based method. The interpolation results generated by the SF-based method are combined with population data to estimate the population exposure to PM2.5 in the contiguous U.S. We investigated the seasonal variations, identified areas where annual and daily PM2.5 were above the standards, and calculated the population size in these areas. Finally, a web application is developed to interpolate and visualize in real time the spatiotemporal variation of ambient air pollution across the contiguous U.S. using air pollution data from the U.S. Environmental Protection Agency (EPA)'s AirNow program.
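
    One simple way to realize a spatiotemporal IDW of the kind compared here is to fold scaled time into the distance metric. The scaling constant and the synthetic monitor data below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def st_idw(stations_xyt, values, targets_xyt, c=10.0, power=2.0):
    """Spatiotemporal IDW: scaled time is treated as a third coordinate so
    a single space-time distance drives the weights. The scaling constant
    c (km per day) is a tunable assumption, not the paper's value."""
    s = stations_xyt.copy(); s[:, 2] *= c
    t = targets_xyt.copy(); t[:, 2] *= c
    d = np.linalg.norm(t[:, None, :] - s[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9)**power
    return (w @ values) / w.sum(axis=1)

# Hypothetical monitors: (x km, y km, day) and PM2.5 readings (µg/m³)
rng = np.random.default_rng(4)
stations = np.column_stack([rng.uniform(0, 500, (100, 2)),
                            rng.integers(0, 30, 100)])
pm25 = rng.gamma(4.0, 3.0, 100)
print(st_idw(stations, pm25, np.array([[250.0, 250.0, 15.0]])))
```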

  2. Spatiotemporal Interpolation Methods for the Application of Estimating Population Exposure to Fine Particulate Matter in the Contiguous U.S. and a Real-Time Web Application

    PubMed Central

    Li, Lixin; Zhou, Xiaolu; Kalo, Marc; Piltner, Reinhard

    2016-01-01

    Appropriate spatiotemporal interpolation is critical to the assessment of relationships between environmental exposures and health outcomes. A powerful assessment of human exposure to environmental agents would incorporate spatial and temporal dimensions simultaneously. This paper compares shape function (SF)-based and inverse distance weighting (IDW)-based spatiotemporal interpolation methods on a data set of PM2.5 data in the contiguous U.S. Particle pollution, also known as particulate matter (PM), is composed of microscopic solids or liquid droplets that are so small that they can get deep into the lungs and cause serious health problems. PM2.5 refers to particles with a mean aerodynamic diameter less than or equal to 2.5 micrometers. Based on the error statistics results of k-fold cross validation, the SF-based method performed better overall than the IDW-based method. The interpolation results generated by the SF-based method are combined with population data to estimate the population exposure to PM2.5 in the contiguous U.S. We investigated the seasonal variations, identified areas where annual and daily PM2.5 were above the standards, and calculated the population size in these areas. Finally, a web application is developed to interpolate and visualize in real time the spatiotemporal variation of ambient air pollution across the contiguous U.S. using air pollution data from the U.S. Environmental Protection Agency (EPA)’s AirNow program. PMID:27463722

  3. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, to maximize sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the layers involved exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated and solid panel layers. The problem related to the interpolation is addressed, and benchmark problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.

  4. Development of an Optimum Interpolation Analysis Method for the CYBER 205

    NASA Technical Reports Server (NTRS)

    Nestler, M. S.; Woollen, J.; Brin, Y.

    1985-01-01

    A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major re-write of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.
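
    The core OI update can be stated compactly: the analysis equals the background plus weighted innovations, x_a = x_b + W(y_o - H x_b) with W = B H^T (H B H^T + R)^(-1). Below is a toy sketch with assumed covariances, illustrating the scheme rather than the GLA implementation.

```python
import numpy as np

def oi_update(xb, B, H, R, yo):
    """One optimum interpolation step:
    analysis = background + W (obs - H background),
    with weights W = B H^T (H B H^T + R)^-1."""
    W = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # weight (gain) matrix
    return xb + W @ (yo - H @ xb)

# Toy setup: 5 grid values, 2 point observations at grid points 1 and 3
xb = np.zeros(5)                                   # background field
i, j = np.meshgrid(np.arange(5), np.arange(5))
B = np.exp(-0.5*(i - j)**2)                        # correlated background errors
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0      # observation operator
R = 0.1*np.eye(2)                                  # observation error covariance
xa = oi_update(xb, B, H, R, yo=np.array([1.0, -0.5]))
```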

  5. VizieR Online Data Catalog: Surface gravity determination in late-type stars (Morel+, 2012)

    NASA Astrophysics Data System (ADS)

    Morel, T.; Miglio, A.

    2012-06-01

    The frequency of maximum oscillation power measured in dwarfs and giants exhibiting solar-like pulsations provides a precise, and potentially accurate, inference of the stellar surface gravity. An extensive comparison for about 40 well-studied pulsating stars with gravities derived using classical methods (ionization balance, pressure-sensitive spectral features or location with respect to evolutionary tracks) supports the validity of this technique and reveals an overall remarkable agreement with mean differences not exceeding 0.05dex (although with a dispersion of up to ~0.2dex). It is argued that interpolation in theoretical isochrones may be the most precise way of estimating the gravity by traditional means in nearby dwarfs. Attention is drawn to the usefulness of seismic targets as benchmarks in the context of large-scale surveys. (1 data file).

  6. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.

  7. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.

  8. A parallel architecture of interpolated timing recovery for high- speed data transfer rate and wide capture-range

    NASA Astrophysics Data System (ADS)

    Higashino, Satoru; Kobayashi, Shoei; Yamagami, Tamotsu

    2007-06-01

    Higher data transfer rates have been demanded of data storage devices along with increasing storage capacity. In order to increase the transfer rate, high-speed data processing techniques in read-channel devices are required. Generally, parallel architectures are utilized for high-speed digital processing. We have developed a new architecture of Interpolated Timing Recovery (ITR) to achieve a high data transfer rate and wide capture range in read-channel devices for information storage channels. It facilitates parallel implementation on large-scale-integration (LSI) devices.

  9. Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation

    PubMed Central

    Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed

    2012-01-01

    This paper presents a novel, real-time defect detection system, based on best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been implemented, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186

  10. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.

  11. Effective atomic numbers of some tissue substitutes by different methods: A comparative study.

    PubMed

    Singh, Vishwanath P; Badiger, N M

    2014-01-01

    Effective atomic numbers of some human organ tissue substitutes, such as polyethylene terephthalate, red articulation wax, paraffin 1, paraffin 2, bolus, pitch, polyphenylene sulfide, polysulfone, polyvinylchloride, and modeling clay, have been calculated by four different methods: Auto-Zeff, direct, interpolation, and power law. It was found that the effective atomic numbers computed by the Auto-Zeff, direct and interpolation methods were in good agreement in the intermediate energy region (0.1 MeV < E < 5 MeV), where the Compton interaction dominates. A large difference between the effective atomic numbers from the direct method and Auto-Zeff was observed in the photo-electric and pair-production regions. Effective atomic numbers computed by the power law were found to be close to those of the direct method in the photo-electric absorption region. The Auto-Zeff, direct and interpolation methods were found to be in good agreement for computation of effective atomic numbers in the intermediate energy region (100 keV < E < 10 MeV). The direct method was found to be the appropriate method for computation of effective atomic numbers in the photo-electric region (10 keV < E < 100 keV). The tissue equivalence of the substitutes can be represented by any of the effective atomic number computation methods examined in the present study. An accurate estimation of Rayleigh scattering is required to eliminate the effect of the molecular, chemical, or crystalline environment of the atom when estimating gamma interaction parameters.
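
    For the power-law method, a commonly quoted form is Mayneord's Zeff = (sum_i a_i Z_i^2.94)^(1/2.94) over fractional electron contributions a_i. The sketch below assumes this convention, which may differ in detail from the paper's; the PET composition is a worked example.

```python
import numpy as np

def zeff_power_law(mass_fractions, Z, A, m=2.94):
    """Power-law effective atomic number, Zeff = (sum_i a_i Z_i**m)**(1/m),
    where a_i are fractional electron contributions and m = 2.94 is the
    commonly quoted exponent (an assumption; conventions vary)."""
    w = np.asarray(mass_fractions, float)
    Z = np.asarray(Z, float); A = np.asarray(A, float)
    electrons = w * Z / A                # electrons per unit mass, per element
    a = electrons / electrons.sum()      # fractional electron contributions
    return (a * Z**m).sum()**(1.0/m)

# Polyethylene terephthalate (C10H8O4): mass fractions of H, C, O
print(zeff_power_law([0.0420, 0.6250, 0.3330],
                     Z=[1, 6, 8], A=[1.008, 12.011, 15.999]))
```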

  12. Effective atomic numbers of some tissue substitutes by different methods: A comparative study

    PubMed Central

    Singh, Vishwanath P.; Badiger, N. M.

    2014-01-01

    Effective atomic numbers of some human organ tissue substitutes, such as polyethylene terephthalate, red articulation wax, paraffin 1, paraffin 2, bolus, pitch, polyphenylene sulfide, polysulfone, polyvinylchloride, and modeling clay, have been calculated by four different methods: Auto-Zeff, direct, interpolation, and power law. It was found that the effective atomic numbers computed by the Auto-Zeff, direct and interpolation methods were in good agreement in the intermediate energy region (0.1 MeV < E < 5 MeV), where the Compton interaction dominates. A large difference between the effective atomic numbers from the direct method and Auto-Zeff was observed in the photo-electric and pair-production regions. Effective atomic numbers computed by the power law were found to be close to those of the direct method in the photo-electric absorption region. The Auto-Zeff, direct and interpolation methods were found to be in good agreement for computation of effective atomic numbers in the intermediate energy region (100 keV < E < 10 MeV). The direct method was found to be the appropriate method for computation of effective atomic numbers in the photo-electric region (10 keV < E < 100 keV). The tissue equivalence of the substitutes can be represented by any of the effective atomic number computation methods examined in the present study. An accurate estimation of Rayleigh scattering is required to eliminate the effect of the molecular, chemical, or crystalline environment of the atom when estimating gamma interaction parameters. PMID:24600169

  13. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

    We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads spaced about 1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and greatly reduced cupping artifacts.

  14. Predictions of biochar production and torrefaction performance from sugarcane bagasse using interpolation and regression analysis.

    PubMed

    Chen, Wei-Hsin; Hsu, Hung-Jen; Kumar, Gopalakrishnan; Budzianowski, Wojciech M; Ong, Hwai Chyuan

    2017-12-01

    This study focuses on the biochar formation and torrefaction performance of sugarcane bagasse, which are predicted using bilinear interpolation (BLI), inverse distance weighting (IDW) interpolation, and regression analysis. It is found that biomass torrefied at 275 °C for 60 min or at 300 °C for 30 min or longer is appropriate for producing biochar as an alternative fuel to coal with a low carbon footprint, but the energy yield from torrefaction at 300 °C is too low. From the biochar yield, the enhancement factor of HHV, and the energy yield, the results suggest that the three methods are all feasible for predicting the performance, especially for the enhancement factor. A power parameter of unity in the IDW method provides the best predictions, with error below 5%. The second-order regression gives a more reasonable approach than the first-order one and is recommended for the predictions. Copyright © 2017 Elsevier Ltd. All rights reserved.
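
    A minimal bilinear interpolation (BLI) sketch over a hypothetical (temperature, duration) design grid of biochar yields; the grid values are invented for illustration.

```python
import numpy as np

def bilinear(x, y, xg, yg, F):
    """Bilinear interpolation of F (tabulated on grid xg × yg) at (x, y)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i+1] - xg[i])
    ty = (y - yg[j]) / (yg[j+1] - yg[j])
    return ((1-tx)*(1-ty)*F[i, j]   + tx*(1-ty)*F[i+1, j]
          + (1-tx)*ty   *F[i, j+1] + tx*ty   *F[i+1, j+1])

# Hypothetical biochar yields (%) on a coarse (temperature, duration) grid
T = np.array([200.0, 250.0, 275.0, 300.0])        # °C
t = np.array([15.0, 30.0, 60.0])                  # min
yields = np.array([[95, 92, 88],
                   [85, 80, 72],
                   [74, 66, 55],
                   [60, 48, 38]], float)
print(bilinear(262.0, 45.0, T, t, yields))        # between the design points
```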

  15. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.

  16. Multilevel Green's function interpolation method for scattering from composite metallic and dielectric objects.

    PubMed

    Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou

    2008-10-01

    A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes, the quasi-2D and the hybrid partitioning scheme, is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).

  17. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide humans with many goods, ecosystem services, and resources, such as recreation, air purification, and water protection. In recent years, factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, along with public interest in forests, and many countries are making forest management efforts. The existing forest ecosystem survey method monitors only sampling points, which makes it difficult to use the results for forest management in Korea, where forests occupy 63.7 % of the country but only a small part of that area is surveyed (Ministry of Land, Infrastructure and Transport Korea, 2016). Managing large forests therefore requires a method for interpolating and spatializing the point data. In this study, biodiversity Shannon's index data from the 1st Korea Forest Health Management survey (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighting), were applied to the biodiversity index, together with the vegetation indices SAVI, NDVI, LAI and SR. As a result, kriging was the most accurate method.

  18. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area.
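
    For reference, a bare-bones ordinary kriging sketch with a fixed spherical semivariogram is given below; the proposed method instead estimates the variance components of signal and noise rather than assuming them, so the nugget/sill/range values here are purely illustrative.

```python
import numpy as np

def spherical(h, nugget, sill, rang):
    """Spherical semivariogram model."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget)*(1.5*h/rang - 0.5*(h/rang)**3)
    return np.where(h < rang, g, sill)

def ordinary_kriging(xy, z, target, nugget=0.5, sill=25.0, rang=800.0):
    """Ordinary kriging of one target value plus its kriging variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, nugget, sill, rang)
    np.fill_diagonal(A[:n, :n], 0.0)               # gamma(0) = 0 by convention
    A[n, n] = 0.0                                  # Lagrange multiplier row/col
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - target, axis=1), nugget, sill, rang)
    w = np.linalg.solve(A, b)
    return w[:n] @ z, w @ b                        # estimate, kriging variance

# Synthetic stations (km) and TEC values (TECU)
rng = np.random.default_rng(5)
xy = rng.uniform(0.0, 2000.0, (30, 2))
tec = 28.0 + rng.normal(0.0, 5.0, 30)
print(ordinary_kriging(xy, tec, target=np.array([1000.0, 1000.0])))
```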

  19. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area. PMID:28264424

  20. Spatial interpolation of forest conditions using co-conditional geostatistical simulation

    Treesearch

    H. Todd Mowrer

    2000-01-01

    In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...

  1. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
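
    The widely used two-sample polyBLEP correction is a low-order relative of the corrections studied here and shows the mechanism: a polynomial residual is added around each wrap of a naive sawtooth. This is a generic sketch, not the paper's integrated B-spline correction function.

```python
import numpy as np

def polyblep(t, dt):
    """Two-sample polynomial correction around a phase wrap (polyBLEP).
    t is the normalized phase in [0, 1); dt is the phase increment per sample."""
    if t < dt:                        # just after the discontinuity
        x = t/dt
        return x + x - x*x - 1.0
    if t > 1.0 - dt:                  # just before the discontinuity
        x = (t - 1.0)/dt
        return x*x + x + x + 1.0
    return 0.0

def sawtooth(f0, fs, n):
    """Naive sawtooth minus a polynomial correction at each wrap, which
    suppresses audible aliasing relative to the raw waveform."""
    dt = f0/fs
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        out[i] = 2.0*phase - 1.0 - polyblep(phase, dt)
        phase = (phase + dt) % 1.0
    return out

y = sawtooth(f0=1244.5, fs=44100.0, n=4096)
```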

  2. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum, exploiting its smoothness. A background correction simulation indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) after correction, ahead of polynomial fitting, Lorentz fitting and the model-free method. All of these methods yield larger SBR values than before correction (10.0992 before correction, versus 26.9576, 24.6828, 18.9770, and 25.6273 after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method, respectively). After random noise with different signal-to-noise ratios is added to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu (the linear correlation coefficient is 0.9776 before correction, and 0.9998, 0.9915, 0.9895, and 0.9940 after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
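
    A hedged sketch of spline-based background estimation: anchor points are taken as windowed minima (a simplifying assumption; the paper details its own detection of background points), a cubic spline is passed through them, and the smooth baseline is subtracted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_baseline(wl, intensity, window=50):
    """Estimate the continuous background with a cubic spline through
    windowed minima, then subtract it from the spectrum."""
    idx = [i0 + np.argmin(intensity[i0:i0 + window])
           for i0 in range(0, len(wl) - window, window)]
    idx = np.unique(np.concatenate([[0], idx, [len(wl) - 1]]))
    baseline = CubicSpline(wl[idx], intensity[idx])(wl)
    return intensity - baseline, baseline

# Toy spectrum: three emission lines on a curved continuum background
wl = np.linspace(200.0, 800.0, 3000)
continuum = 0.4 + 0.3*np.exp(-((wl - 450.0)/150.0)**2)
lines = sum(a*np.exp(-((wl - c)/0.5)**2)
            for a, c in [(2.0, 324.7), (1.5, 327.4), (1.0, 521.8)])
corrected, background = spline_baseline(wl, continuum + lines)
```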

  3. On the interpolation of volumetric water content in research catchments

    NASA Astrophysics Data System (ADS)

    Dlamini, Phesheya; Chaplot, Vincent

    Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared with traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation process of data points affect the prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), DSM of volumetric water content (θv), the product of θg and ρb, may involve either direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for θg and ρb estimation. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). It was found that approach 1 underestimated θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and improved the prediction accuracy by only 1.3% compared with approach 1. Such a benefit of approach 2 (i.e., the subsequent multiplication of interpolated maps of primary variables) was unexpected considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 seems promising and can be further tested for DSM of other secondary variables.

  4. The Canadian Precipitation Analysis (CaPA): Evaluation of the statistical interpolation scheme

    NASA Astrophysics Data System (ADS)

    Evans, Andrea; Rasmussen, Peter; Fortin, Vincent

    2013-04-01

    CaPA (Canadian Precipitation Analysis) is a data assimilation system which employs statistical interpolation to combine observed precipitation with gridded precipitation fields produced by Environment Canada's Global Environmental Multiscale (GEM) climate model into a final gridded precipitation analysis. Precipitation is important in many fields and applications, including agricultural water management projects, flood control programs, and hydroelectric power generation planning. Precipitation is a key input to hydrological models, and there is a desire to have access to the best available information about precipitation in time and space. The principal goal of CaPA is to produce this type of information. In order to perform the necessary statistical interpolation, CaPA requires the estimation of a semi-variogram. This semi-variogram is used to describe the spatial correlations between precipitation innovations, defined as the observed precipitation amounts minus the GEM forecasted amounts predicted at the observation locations. Currently, CaPA uses a single isotropic variogram across the entire analysis domain. The present project investigates the implications of this choice by first conducting a basic variographic analysis of precipitation innovation data across the Canadian prairies, with specific interest in identifying and quantifying potential anisotropy within the domain. This focus is further expanded by identifying the effect of storm type on the variogram. The ultimate goal of the variographic analysis is to develop improved semi-variograms for CaPA that better capture the spatial complexities of precipitation over the Canadian prairies. CaPA presently applies a Box-Cox data transformation to both the observations and the GEM data, prior to the calculation of the innovations. The data transformation is necessary to satisfy the normal distribution assumption, but introduces a significant bias. The second part of the investigation aims at devising a bias correction scheme based on a moving-window averaging technique. For both the variogram and bias correction components of this investigation, a series of trial runs are conducted to evaluate the impact of these changes on the resulting CaPA precipitation analyses.
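
    The semivariogram estimation step can be sketched directly: bin half the squared innovation differences by station separation. The station coordinates and innovations below are synthetic, not CaPA data.

```python
import numpy as np

def empirical_semivariogram(xy, innov, nbins=15, hmax=500.0):
    """Empirical semivariogram of innovations (observation minus forecast):
    gamma(h) = half the mean squared difference, binned by separation h."""
    i, j = np.triu_indices(len(innov), k=1)
    h = np.linalg.norm(xy[i] - xy[j], axis=1)
    sq = 0.5*(innov[i] - innov[j])**2
    edges = np.linspace(0.0, hmax, nbins + 1)
    which = np.digitize(h, edges) - 1
    gamma = np.array([sq[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(nbins)])
    return 0.5*(edges[:-1] + edges[1:]), gamma     # bin centres, gamma(h)

# Hypothetical station coordinates (km) and precipitation innovations (mm)
rng = np.random.default_rng(6)
xy = rng.uniform(0.0, 500.0, size=(300, 2))
innov = rng.normal(0.0, 2.0, size=300)
centres, gamma = empirical_semivariogram(xy, innov)
```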

  5. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. The measurement of scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we deduce a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, instead of the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is then developed: PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the performance of the moving BSA method in terms of reliability and practicability. Evaluation is made on a high-resolution voxel-based human phantom, realistically including the entire procedure of scatter measurement with a moving BSA, which is simulated by analytical ray tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI does well in CB redundancy mining; therefore, it has further potential in CBCT studies.

  6. Effect of data gaps on correlation dimension computed from light curves of variable stars

    NASA Astrophysics Data System (ADS)

    George, Sandip V.; Ambika, G.; Misra, R.

    2015-11-01

    Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, the smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially in synthetic data derived from standard chaotic systems, like the Rössler and Lorenz systems, with the frequency of occurrence and size of missing data drawn from two Gaussian distributions. Then we study the changes in correlation dimension with changes in the distributions of position and size of gaps. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Thus our study introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars, R Scuti, U Monocerotis and SU Tauri. We also demonstrate how a cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the reliable range for D2 values.

  7. Analytic approximations of Von Kármán plate under arbitrary uniform pressure—equations in integral form

    NASA Astrophysics Data System (ADS)

    Zhong, XiaoXu; Liao, ShiJun

    2018-01-01

    Analytic approximations of the Von Kármán plate equations in integral form for a circular plate under external uniform pressure to arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure to arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure to arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán plate equations in differential form is just a special case of the HAM for the Von Kármán plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.

  8. Digital image processing of Seabeam bathymetric data for structural studies of seamounts near the East Pacific Rise

    NASA Technical Reports Server (NTRS)

    Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.

    1984-01-01

    The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.

  9. Tidal estimation in the Atlantic and Indian Oceans, 3 deg x 3 deg solution

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Rao, Desiraju B.; Steenrod, Stephen D.

    1987-01-01

    An estimation technique was developed to extrapolate tidal amplitudes and phases over entire ocean basins using existing gauge data and the altimetric measurements provided by satellite oceanography. The technique was previously tested. Some results obtained by using a 3 deg by 3 deg grid are presented. The functions used in the interpolation are the eigenfunctions of the velocity (Proudman functions) which are computed numerically from a knowledge of the basin's bottom topography, the horizontal plan form and the necessary boundary conditions. These functions are characteristic of the particular basin. The gravitational normal modes of the basin are computed as part of the investigation; they are used to obtain the theoretical forced solutions for the tidal constituents. The latter can provide the simulated data for the testing of the method and serve as a guide in choosing the most energetic functions for the interpolation.

  10. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
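
    As a rough illustration of the sparse-representation idea (not the SBSDI algorithm itself), the sketch below sparse-codes a partially observed signal against a dictionary using orthogonal matching pursuit and then reconstructs the missing samples; the random dictionary stands in for one learned from previously collected datasets.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Stand-in dictionary (random atoms; SBSDI would use a learned one).
n_features, n_atoms = 64, 256
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal that is 5-sparse in D, then hide ~30% of samples.
true_code = np.zeros(n_atoms)
true_code[rng.choice(n_atoms, 5, replace=False)] = rng.standard_normal(5)
signal = D @ true_code
observed = rng.random(n_features) > 0.3        # mask of kept samples

# Sparse-code using only the observed rows of the dictionary ...
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(D[observed], signal[observed])

# ... then reconstruct (simultaneously denoise and interpolate).
reconstructed = D @ omp.coef_
print("max reconstruction error:", np.abs(reconstructed - signal).max())
```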

  11. Developing a virtual reality application for training nuclear power plant operators: setting up a database containing dose rates in the refuelling plant.

    PubMed

    Ródenas, J; Zarza, I; Burgos, M C; Felipe, A; Sánchez-Mayoral, M L

    2004-01-01

    Operators in Nuclear Power Plants can receive high doses during refuelling operations. A training programme for simulating refuelling operations will be useful in reducing the doses received by workers as well as minimising operation time. With this goal in mind, a virtual reality application is developed within the framework of the CIPRES project. The application requires doses, both instantaneous and accumulated, to be displayed at all times during operator training. Therefore, it is necessary to set up a database containing dose rates at every point in the refuelling plant. This database is based on radiological protection surveillance data measured in the plant during refuelling operations. Some interpolation routines have been used to estimate doses through the refuelling plant. Different assumptions have been adopted in order to perform the interpolation and obtain consistent data. In this paper, the procedures developed to set up the dose database for the virtual reality application are presented and analysed.

  12. Irreconcilable difference between quantum walks and adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.; Meyer, David A.

    2016-06-01

    Continuous-time quantum walks and adiabatic quantum evolution are two general techniques for quantum computing, both of which are described by Hamiltonians that govern their evolutions by Schrödinger's equation. In the former, the Hamiltonian is fixed, while in the latter, the Hamiltonian varies with time. As a result, their formulations of Grover's algorithm evolve differently through Hilbert space. We show that this difference is fundamental; they cannot be made to evolve along each other's path without introducing structure more powerful than the standard oracle for unstructured search. For an adiabatic quantum evolution to evolve like the quantum walk search algorithm, it must interpolate between three fixed Hamiltonians, one of which is complex and introduces structure that is stronger than the oracle for unstructured search. Conversely, for a quantum walk to evolve along the path of the adiabatic search algorithm, it must be a chiral quantum walk on a weighted, directed star graph with structure that is also stronger than the oracle for unstructured search. Thus, the two techniques, although similar in being described by Hamiltonians that govern their evolution, compute by fundamentally irreconcilable means.
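
    For context, the standard two-Hamiltonian adiabatic interpolation for unstructured search (the construction the paper contrasts with its three-Hamiltonian result) can be examined numerically; the sketch below computes the spectral gap along H(s) = (1 - s) H0 + s H1, whose minimum scales as 1/sqrt(N).

```python
import numpy as np

# Adiabatic Grover search: H0 projects off the uniform superposition |s>,
# H1 projects off the marked state |w>.
N = 64
s_vec = np.full(N, 1 / np.sqrt(N))       # uniform superposition |s>
w_vec = np.zeros(N); w_vec[0] = 1.0      # marked state |w>

H0 = np.eye(N) - np.outer(s_vec, s_vec)
H1 = np.eye(N) - np.outer(w_vec, w_vec)

# The minimum gap along the interpolation controls the adiabatic runtime;
# for this schedule it is 1/sqrt(N), attained near s = 1/2.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    gaps.append(evals[1] - evals[0])
print("min gap:", min(gaps), " expected:", 1 / np.sqrt(N))
```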

  13. Spatial variability of soil available phosphorous and potassium at three different soils located in Pannonian Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Pereira, Paulo; Đurđević, Boris

    2017-04-01

    Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analyse 704 data points of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia) corresponding to different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several deterministic interpolation techniques (Inverse Distance Weighting (IDW) with powers of 1, 2 and 3; Radial Basis Functions (RBF), namely Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2) and two geostatistical techniques (Ordinary Kriging (OK) and Simple Kriging (SK)) were tested in order to identify the most accurate spatial variability maps, using the lowest RMSE under cross-validation as the criterion. Soil parameters varied considerably throughout the studied fields; their coefficients of variation ranged from 31.4% to 37.7% for AP and from 19.3% to 27.1% for AK. The experimental variograms indicate a moderate spatial dependence for AP and a strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE = 61.711), and for AK the Inverse Multiquadratic (RMSE = 44.689); the least accurate techniques were Thin Plate Spline (AP) and IDW with a power of 1 (AK). Radial Basis Function models (Spline with Tension for AP at the Gleysol and Cambisol fields and Completely Regularized Spline for AK at the Gleysol field) were the best predictors, while Thin Plate Spline models were the least accurate in all three cases. The best interpolator for AK at the Cambisol field was the Local Polynomial with a power of 2 (RMSE = 33.943), while the least accurate was Thin Plate Spline (RMSE = 39.572).
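
    The selection criterion above, RMSE under leave-one-out cross-validation, is easy to reproduce for IDW. The sketch below is a generic illustration with synthetic sample locations and values (not the study's data); it compares IDW powers 1 to 3.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at the query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

def loo_rmse(xy, z, power):
    """Leave-one-out cross-validation RMSE for a given IDW power."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
        errs.append(pred - z[i])
    return np.sqrt(np.mean(np.square(errs)))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 2250, size=(50, 2))      # toy field extent (m)
z = np.sin(xy[:, 0] / 500) + 0.1 * rng.standard_normal(50)  # toy nutrient values
for p in (1, 2, 3):
    print(f"IDW power {p}: LOO RMSE = {loo_rmse(xy, z, p):.4f}")
```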

  14. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features. These data sets also support improved information extraction at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques that operate on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, with the aim of obtaining a better land cover classification. Various SR approaches, namely Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC), were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including peak signal-to-noise ratio and structural similarity are used to evaluate image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify both the SRR output and data resampled to 6 m spatial resolution using bicubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived CHRIS/PROBA images shows that the SR-derived classification yields a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.

  15. Short term spatio-temporal variability of soil water-extractable calcium and magnesium after a low severity grassland fire in Lithuania.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Martin, David

    2014-05-01

    Fire has important impacts on the spatio-temporal distribution of soil nutrients (Outeiro et al., 2008). This impact depends on fire severity, the topography of the burned area, the type of soil and vegetation affected, and the post-fire meteorological conditions. Fire produces a complex mosaic of impacts on soil that can be extremely variable in space and time, even at small plot scale. In order to assess and map such a heterogeneous distribution, testing interpolation methods is fundamental to identify the best estimator and to better understand the spatial distribution of soil nutrients. The objective of this work is to identify the short-term spatial variability of water-extractable calcium and magnesium after a low-severity grassland fire. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 m a.s.l. Four days after the fire, a 400 m2 plot (20 x 20 m, with 5 m spacing between sampling points) was laid out in the burned area. Twenty-five topsoil samples (0-5 cm) were collected immediately after the fire (IAF) and 2, 5, 7 and 9 months after the fire (125 samples across all sampling dates). The original water-extractable calcium and magnesium data did not follow a Gaussian distribution, so a natural logarithm (ln) transform was applied to normalize them. Significant differences in water-extractable calcium and magnesium among sampling dates were assessed with a one-way ANOVA on the ln data. In order to assess the spatial variability of water-extractable calcium and magnesium, we tested several interpolation methods: Ordinary Kriging (OK); Inverse Distance Weighting (IDW) with powers of 1, 2, 3 and 4; Radial Basis Functions (RBF), namely Inverse Multiquadratic (IMT), Multilog (MTG), Multiquadratic (MTQ), Natural Cubic Spline (NCS) and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2. Interpolation tests were carried out on the ln data. The best interpolation method was identified using cross-validation, in which each observation is taken out of the sample pool in turn and estimated from the remaining ones. The errors produced (observed - predicted) are used to evaluate the performance of each method; from them, the mean error (ME) and root mean square error (RMSE) were calculated, and the best method was the one with the lowest RMSE (Pereira et al., in press). The results show significant differences among sampling dates in water-extractable calcium (F = 138.78, p < 0.001) and magnesium (F = 160.66, p < 0.001). Water-extractable calcium and magnesium were high IAF, decreased until 7 months after the fire, and rose again at the last sampling date. Among the tested methods, the most accurate for water-extractable calcium were: IAF, IDW1; 2 months, IDW1; 5 months, OK; 7 months, IDW4; and 9 months, IDW3. For water-extractable magnesium the best techniques were: IAF, IDW2; 2 months, IDW1; 5 months, IDW3; 7 months, TPS; and 9 months, IDW1. These results suggest that the spatial variability of these water-extractable elements changes over time. The causes of this variability will be discussed during the presentation. References: Outeiro, L., Asperó, F., Úbeda, X. (2008) Geostatistical methods to study spatial variability of soil cations after a prescribed fire and rainfall. Catena, 74: 310-320. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. Modelling the impacts of wildfire on ash thickness in a short-term period. Land Degradation and Development (in press). DOI: 10.1002/ldr.2195

  16. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are examined first, although their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide a certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
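
    The rotation comparison can be reproduced in miniature with standard tools. The sketch below (illustrative only; scipy's spline-based resampling, not the paper's implementations) rotates an image forward and back and reports the round-trip error for nearest, linear and cubic B-spline interpolation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = rng.random((32, 32))

# order=0: nearest, order=1: linear, order=3: cubic B-spline.
for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic B-spline")]:
    # Rotate forward and back; a perfect interpolator would return
    # the original image exactly.
    there = ndimage.rotate(image, 17, reshape=False, order=order)
    back = ndimage.rotate(there, -17, reshape=False, order=order)
    core = (slice(8, 24),) * 2            # ignore rotation boundary effects
    err = np.abs(back[core] - image[core]).mean()
    print(f"{name:>15}: mean round-trip error = {err:.4f}")
```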

  17. Airborne laser scanning for forest health status assessment and radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena

    2013-04-01

    Structural parameters of forest stands/ecosystems are an important complementary source of information to spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is the focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3D information. The main objective of the study was to compare the results of several approaches to interpolating a digital elevation model (DEM) and a digital surface model (DSM). We worked with airborne LiDAR data of varying density (TopEye Mk II 1,064 nm instrument, 1-5 points/m2) acquired over Norway spruce forests in the Beskydy Mountains, Czech Republic. Three interpolation algorithms of increasing complexity were tested: i/ the nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); ii/ the averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); iii/ the active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions, 0.4 m and 1 m, for the coupled DEM and DSM rasters calculated by each algorithm. These grids correspond to the spatial resolutions of the hyperspectral imagery for which the DEMs were used in a/ geometrical correction and b/ building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing the results from the different interpolations/raster resolutions: 1/ comparison of the calculated DEMs or DSMs between themselves; 2/ comparison with field data: DEMs against measurements from a reference GPS, and DSMs against field allometric tree measurements, where tree height was calculated as DSM - DEM. The results of the analyses show that: 1/ averaging techniques tend to underestimate tree height, and the generated surface does not follow the first LiDAR echoes for either 1 m or 0.4 m pixel size; 2/ we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased at finer resolution (0.4 m); 3/ the accuracy of the DEMs calculated by the tested algorithms is similar.

  18. Spatial Scale Variability of NH3 and Impacts to interpolated Concentration Grids

    EPA Science Inventory

    Over the past decade, reduced nitrogen (NH3, NH4) has become an important component of atmospheric nitrogen deposition due to increases in agricultural activities and reductions in oxidized sulfur and nitrogen emissions from the power sector and mobile sources. Reduced nitrogen i...

  19. Development of an Atmospheric Dispersion Model for Heavier-Than-Air Gas Mixtures. Volume 3. DEGADIS User’s Manual.

    DTIC Science & Technology

    1985-05-01

    interpolates between a pair of points based on a list of supplied values (C-2). ALP estimates the ambient wind profile power-law exponent α by minimizing the integral... Typical Values of Surface Roughness... Representative Monin-Obukhov Lengths and Power Law Exponents for Different Atmospheric Stabilities... [nomenclature fragments:] α, constant in power-law wind profile; constant in a correlation; Γ, gamma function; ratio of (p - pa)/Cc; Monin-Obukhov...

  20. BRIEF REPORT: A simple interpolation formula for the spectra of power-law and log potentials

    NASA Astrophysics Data System (ADS)

    Hall, Richard L.

    2000-06-01

    Non-relativistic potential models are considered of the pure power V(r) = sgn(q) r^q and logarithmic V(r) = ln(r) types. It is shown that, from the spectral viewpoint, these potentials actually form a single family. The log spectra can be obtained from the power spectra by taking the limit q → 0 in a smooth representation P_nl(q) for the eigenvalues E_nl(q). A simple approximation formula is developed which yields the first 30 eigenvalues with an error of less than 0.04%.
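
    The sense in which the log potential joins the power family can be checked numerically: the standard scaled form (r^q - 1)/q tends to ln r as q → 0 (a well-known identity behind such limits; the specific spectral representation P_nl(q) is the paper's own construction and is not reproduced here).

```python
import numpy as np

# The scaled power potential (r^q - 1)/q approaches ln(r) as q -> 0,
# which is how the log potential arises as the q -> 0 member of the family.
r = np.linspace(0.5, 3.0, 6)
for q in (1.0, 0.1, 0.01, 0.001):
    scaled = (r ** q - 1.0) / q
    print(f"q={q:6.3f}  max |scaled - ln r| = {np.abs(scaled - np.log(r)).max():.5f}")
```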

  1. Dynamic mesh for TCAD modeling with ECORCE

    NASA Astrophysics Data System (ADS)

    Michez, A.; Boch, J.; Touboul, A.; Saigné, F.

    2016-08-01

    Mesh generation for TCAD modeling is challenging. Because carrier densities can change by several orders of magnitude over thin regions, a significant change in the solution can be observed for two very similar meshes; the mesh must be defined as well as possible to minimize this change. To address this issue, a criterion based on polynomial interpolation over adjacent nodes is proposed that accurately adjusts the mesh to the gradients of the degrees of freedom (DF). Furthermore, a dynamic mesh that follows changes of the DF in DC and transient modes is a powerful tool for TCAD users. In transient modeling, however, adding nodes to a mesh induces oscillations in the solution that appear as spikes in the current collected at the contacts. This paper proposes two schemes that solve this problem. Examples show that, using these techniques, the dynamic mesh generator of the TCAD tool ECORCE handles semiconductor devices in DC and transient modes.

  2. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  3. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  4. Ectopic beats in approximate entropy and sample entropy-based HRV assessment

    NASA Astrophysics Data System (ADS)

    Singh, Butta; Singh, Dilbag; Jaryal, A. K.; Deepak, K. K.

    2012-05-01

    Approximate entropy (ApEn) and sample entropy (SampEn) are promising techniques for extracting the complex characteristics of cardiovascular variability. Ectopic beats, which originate from sites other than the normal one, are artefacts that seriously limit heart rate variability (HRV) analysis. Approaches such as deletion and interpolation are currently used to eliminate the bias produced by ectopic beats. In this study, normal R-R interval time series of 10 healthy subjects and 10 acute myocardial infarction (AMI) patients were analysed after inserting artificial ectopic beats. The effects of editing the ectopic beats by deletion, degree-zero interpolation and degree-one interpolation on ApEn and SampEn were then assessed. Adding ectopic beats (even 2%) led to reduced complexity, resulting in decreased ApEn and SampEn for both the healthy and the AMI patient data; this reduction was found to depend on the level of ectopic beats. Editing ectopic beats by the degree-one interpolation method was found to be superior to the other methods.
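
    The three editing strategies compared above are simple to state in code. The following sketch (synthetic R-R values, not study data) edits a single ectopic-like interval by deletion, degree-zero (sample-and-hold) interpolation, and degree-one (linear) interpolation.

```python
import numpy as np

def edit_ectopic(rr, idx, method="interp1"):
    """Edit the ectopic beat at position idx in an R-R series (ms)."""
    rr = rr.astype(float).copy()
    if method == "delete":
        return np.delete(rr, idx)
    if method == "interp0":                  # degree-zero: hold last value
        rr[idx] = rr[idx - 1]
    elif method == "interp1":                # degree-one: average neighbours
        rr[idx] = 0.5 * (rr[idx - 1] + rr[idx + 1])
    return rr

rr = np.array([812, 808, 815, 410, 820, 818])   # one ectopic-like beat
for m in ("delete", "interp0", "interp1"):
    print(m, edit_ectopic(rr, 3, m))
```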

  5. Spatial interpolation of pesticide drift from hand-held knapsack sprayers used in potato production

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Pleschberger, Martin; Scheiber, Michael; Pilz, Jürgen

    2017-04-01

    Tropical mountainous regions in developing countries are often neglected in research and policy but represent key areas to consider if sustainable agricultural and rural development is to be promoted. One example is the lack of information on pesticide drift deposition to soil, which can support pesticide risk assessment for soil, surface water, bystanders and off-target plants and fauna. This is a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data on the drift deposition of a pesticide surrogate, the tracer Uranine, were obtained within one of the highest potato-producing regions in Colombia. Based on the empirical data, different spatial interpolation techniques, i.e. Thiessen polygons, inverse distance squared weighting, co-kriging, pair-copulas and drift curves depending on distance and wind speed, were tested and optimized. Results for the best-performing spatial interpolation methods, suitable curves for assessing mean relative drift, and implications for risk assessment studies will be presented.

  6. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.

  7. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, knowledge of the complex-valued B1 transmit sensitivity of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows a complete set of eight volumetric field maps of the human head to be acquired in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with the loss of accuracy potentially limited to about 5%.
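
    Kriging's role here, estimating skipped slices from acquired ones, can be illustrated with a one-dimensional simple-kriging sketch. The covariance model, slice positions and values below are invented for illustration and are not the paper's calibration data.

```python
import numpy as np

def simple_kriging(z_known, vals, z_query, corr_len=2.0, mean=0.0):
    """Simple-kriging estimate with a Gaussian covariance model
    (illustrative stand-in for interpolating undersampled B1 slices)."""
    def cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return np.exp(-(d / corr_len) ** 2)
    K = cov(z_known, z_known) + 1e-9 * np.eye(len(z_known))  # jitter for stability
    k = cov(z_known, z_query)
    weights = np.linalg.solve(K, k)          # best linear unbiased weights
    return mean + weights.T @ (vals - mean)

z_acquired = np.array([0., 2., 4., 6.])      # acquired slice positions
b1 = np.array([0.95, 1.10, 1.05, 0.90])      # per-slice values (toy numbers)
print(simple_kriging(z_acquired, b1, np.array([1., 3., 5.]), mean=1.0))
```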

  8. Minimizing Interpolation Bias and Precision Error in In Vivo μCT-based Measurements of Bone Structure and Dynamics

    PubMed Central

    de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry

    2016-01-01

    In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342

  10. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

    Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble-average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained (by simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made to use a quality model that is now finding application in data compression.

  11. Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment

    PubMed Central

    Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.

    2012-01-01

    Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map of O3 with respect to its phytotoxic potential. For the evaluation we used real-time ambient O3 concentrations measured by UV absorbance at 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven spatial interpolation approaches for developing maps of mean vegetation-season O3 concentrations and of the AOT40F exposure index for forests. The uncertainty of the maps was assessed by cross-validation analysis, with the root mean square error (RMSE) of the map used as the criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 on altitude with subsequent interpolation of its residuals by ordinary kriging. The relative uncertainty of the map of mean vegetation-season O3 is less than 10% with the optimal method in both examined years, a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both years. PMID:22566757

  12. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulty representing local variations in the field, so a variety of local interpolation-polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation-polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.
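
    The trade-off between a global Chebyshev fit and local panel interpolation can be seen on a toy pressure distribution with a sharp local feature, the situation the abstract describes; the function, sample count and polynomial degree below are invented for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy chordwise pressure distribution with a sharp suction peak.
def true_cp(t):
    return -1.2 * np.exp(-80.0 * (t - 0.15) ** 2) + 0.3 * t

x = np.linspace(0.0, 1.0, 41)            # panel edge locations
cp = true_cp(x)

cheb = C.Chebyshev.fit(x, cp, deg=12)    # global Chebyshev fit
x_fine = np.linspace(0.0, 1.0, 400)

# Compare both representations against the underlying function.
err_global = np.max(np.abs(cheb(x_fine) - true_cp(x_fine)))
err_local = np.max(np.abs(np.interp(x_fine, x, cp) - true_cp(x_fine)))
print(f"global Chebyshev max error:    {err_global:.4f}")
print(f"panel linear interp max error: {err_local:.4f}")
```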

  13. Assessing Fire Weather Index using statistical downscaling and spatial interpolation techniques in Greece

    NASA Astrophysics Data System (ADS)

    Karali, Anna; Giannakopoulos, Christos; Frias, Maria Dolores; Hatzaki, Maria; Roussos, Anargyros; Casanueva, Ana

    2013-04-01

    Forest fires have always been present in Mediterranean ecosystems, and they constitute a major ecological and socio-economic issue. Over the last few decades, however, the number of forest fires has significantly increased, as well as their severity and impact on the environment. Local fire danger projections are often required in wildfire research. In the present study, statistical downscaling and spatial interpolation methods were applied to the Canadian Fire Weather Index (FWI) in order to assess forest fire risk in Greece. The FWI is used worldwide (including the Mediterranean basin) to estimate fire danger for a generalized fuel type, based solely on weather observations. The meteorological inputs to the FWI System are noon values of dry-bulb temperature, air relative humidity, 10 m wind speed and precipitation during the previous 24 hours. Statistical downscaling methods are based on a statistical model that accounts for empirical relationships between large-scale variables (used as predictors) and local-scale variables. In the present work, the statistical downscaling portal developed by the Santander Meteorology Group (https://www.meteo.unican.es/downscaling) in the framework of the EU project CLIMRUN (www.climrun.eu) was used to downscale non-standard parameters related to forest fire risk. Two different approaches were adopted: first, the analogue downscaling technique was applied directly to the FWI values; second, the same technique was applied indirectly through the meteorological inputs of the index. In both cases the statistical downscaling portal was used with the ERA-Interim reanalysis as predictors, owing to the lack of observations at noon. Additionally, a three-dimensional (3D) interpolation method in position and elevation, based on Thin Plate Splines (TPS), was used to interpolate the ERA-Interim data used to calculate the index. Results from this method were compared with the statistical downscaling results obtained from the portal. Finally, the FWI was computed using weather observations obtained from the Hellenic National Meteorological Service, mainly in the southern continental part of Greece, and compared with the previous results.
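
    The analogue technique mentioned above reduces to a nearest-neighbour search in the large-scale predictor space. A minimal sketch follows, with synthetic predictor fields and a toy local FWI series standing in for the real reanalysis and station data.

```python
import numpy as np

def analogue_downscale(predictors_hist, local_hist, predictors_new, k=5):
    """For each new large-scale state, average the local variable over
    the k most similar historical states (the analogue method)."""
    out = []
    for p in predictors_new:
        d = np.linalg.norm(predictors_hist - p, axis=1)   # similarity
        out.append(local_hist[np.argsort(d)[:k]].mean())
    return np.array(out)

rng = np.random.default_rng(3)
X_hist = rng.standard_normal((500, 4))       # toy large-scale predictors
fwi_hist = 20 + 5 * X_hist[:, 0] + rng.standard_normal(500)  # toy local FWI
X_new = rng.standard_normal((3, 4))          # new days to downscale
print(analogue_downscale(X_hist, fwi_hist, X_new))
```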

  14. Reliability of the Parabola Approximation Method in Heart Rate Variability Analysis Using Low-Sampling-Rate Photoplethysmography.

    PubMed

    Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol

    2017-10-24

    Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we evaluated the parabola approximation method against the conventional cubic spline interpolation method for the time-, frequency-, and nonlinear-domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error are presented. The elapsed time needed to compute each interpolation algorithm was also investigated. The results indicate that parabola approximation is a simple, fast, and accurate method for compensating for the low timing resolution of pulse beat intervals, with performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements in low-power wearable applications.
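
    Parabola approximation refines a beat time by fitting a quadratic through the maximum sample and its two neighbours and taking the vertex. The sketch below applies it to a toy pulse sampled at 20 Hz; the pulse shape and rate are illustrative, and an interior maximum is assumed.

```python
import numpy as np

def parabola_peak(y, fs):
    """Refine the peak time of a sampled pulse via the vertex of a
    parabola through the maximum sample and its two neighbours
    (assumes the maximum is not the first or last sample)."""
    i = int(np.argmax(y))
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # vertex offset in samples
    return (i + delta) / fs

fs = 20.0                                     # low-rate PPG sampling (Hz)
t = np.arange(0, 1, 1 / fs)
true_peak = 0.53
y = np.exp(-((t - true_peak) ** 2) / 0.005)   # toy pulse waveform
print("grid peak:", t[np.argmax(y)],
      " parabola:", round(parabola_peak(y, fs), 4))
```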

  15. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

    Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension of standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in the use of a ray interpolation scheme based on approximating the wave front as a `locally spherical' surface, and of a `first arrival mode', which reduces computation times where only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolating the new ray. Our WFC technique is illustrated using a realistic velocity model based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles and geometrical spreading factors, all of which are interpolated onto a regular grid. We compare geometrical spreading factors calculated by two methods: using the ray Jacobian, and taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values which can be spread over a large area, since only a few initial rays are traced in WFC. We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.

  16. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

    Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
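
    Optimal interpolation fuses the two data sources with the standard analysis update x_a = x_b + K(y - H x_b), where K = B H^T (H B H^T + R)^(-1) and the background-error covariance B spreads the point information spatially (here, along cloud structure). The numbers below are illustrative, not the paper's Tucson configuration.

```python
import numpy as np

# Background: satellite-derived irradiance along a small transect (W/m^2).
n = 5
background = np.array([600., 620., 640., 660., 680.])

# Background-error covariance: errors correlated over nearby cells
# (e.g. a shared cloud), which is what spreads point information.
x = np.arange(n)
B = 400.0 * np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)

H = np.zeros((2, n)); H[0, 1] = 1; H[1, 3] = 1   # sensors at cells 1 and 3
R = 25.0 * np.eye(2)                             # sensor error covariance
obs = np.array([570., 700.])                     # ground measurements

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # gain matrix
analysis = background + K @ (obs - H @ background)
print(analysis.round(1))
```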

  17. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate and by anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions: What is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational load is higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, where the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool for checking that a given timing pick-off method maintains constant timing resolution regardless of source location. Lastly, a performance comparison of several digital timing methods is also shown.
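
    When the signal is sampled above the Nyquist rate, interpolation can be done exactly (for band-limited signals) by zero-padding in the frequency domain. A minimal sketch, with a toy tone rather than LSO detector pulses:

```python
import numpy as np

def fft_interpolate(y, factor):
    """Band-limited interpolation by zero-padding the spectrum
    (valid when the original sampling satisfies the Nyquist criterion)."""
    n = len(y)
    Y = np.fft.rfft(y)
    Y_pad = np.zeros(factor * n // 2 + 1, dtype=complex)
    Y_pad[:len(Y)] = Y
    # irfft normalizes by the output length, hence the factor.
    return factor * np.fft.irfft(Y_pad, n=factor * n)

fs, f0 = 8.0, 1.0                       # sample a 1 Hz tone at 8 Hz
t = np.arange(16) / fs
y = np.sin(2 * np.pi * f0 * t)
y_up = fft_interpolate(y, 4)            # 4x denser samples
t_up = np.arange(len(y_up)) / (4 * fs)
print("max error:", np.abs(y_up - np.sin(2 * np.pi * f0 * t_up)).max())
```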

  18. Development of a low-budget, remote, solar powered, and self-operating rain gauge for spatial rainfall real time data monitoring in pristine and urban areas

    NASA Astrophysics Data System (ADS)

    Shafiei Shiva, J.; Chandler, D. G.; Nucera, K. J.; Valinski, N.

    2016-12-01

    Precipitation is one of the main components of the hydrological cycle and of hydrological simulations, and it is generally stated as an average value over the study area. However, due to the high spatial variability of precipitation in some situations, more precise local data are required. Precipitation data are often obtained by interpolating neighboring gauge records, the most affordable technique for watershed-scale studies. Moreover, novel spatial rain measurements such as Doppler radars and satellite image processing have been widely used in recent studies. However, owing to impediments in radar data processing and the effect of the local setting on the accuracy of interpolated data, local measurement of precipitation remains one of the most reliable approaches to obtaining rain data. In this regard, the development of a low-budget, remote, solar-powered, and self-operating rain gauge for spatial rainfall real-time data monitoring in pristine and urban areas is presented in this research. The proposed rain gauge consists of two main parts: (a) hydraulic instruments and (b) electrical devices. The hydraulic instruments collect the rainfall and store it in a PVC container connected to a high-sensitivity pressure transducer system. The electrical devices transmit the data via cellphone networks, making them available for further analysis less than one minute after processing. These real-time rainfall data can be employed in precipitation measurement and evaporation estimation. Owing to the solar panel installed for battery recharging and the siphon system designed for draining accumulated rain, the device is self-operating. At this time, more than ten rain gauges have been built and installed in the urban area of Syracuse, NY. The data are also useful for calibration and validation of data obtained from other rain gauging devices and estimation techniques. Remote data communication challenges in urban areas are demonstrated and solutions to these problems are addressed. Finally, the rainfall data obtained from the presented rain gauge are compared with other measuring systems.

  19. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant saving on computations can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying broadband and non-stationary sources produced by the sources.

  20. Comparison of techniques for approximating ocean bottom topography in a wave-refraction computer model

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1975-01-01

    A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
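
    A quadratic least-squares fit of the bottom topography, one of the approximation families compared in the study, takes only a single linear solve. The soundings below are synthetic stand-ins for real bathymetry.

```python
import numpy as np

# Fit z = a + bx + cy + dx^2 + exy + fy^2 to scattered soundings.
rng = np.random.default_rng(4)
x, y = rng.uniform(0, 10, 60), rng.uniform(0, 10, 60)
depth = 30 + 0.8 * x - 0.5 * y + 0.05 * x * y + 0.2 * rng.standard_normal(60)

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)   # least-squares solve

def depth_at(px, py):
    """Evaluate the fitted quadratic surface at a point."""
    return coef @ np.array([1.0, px, py, px**2, px * py, py**2])

print("fitted depth at (5, 5):", round(depth_at(5.0, 5.0), 2))
```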

  1. Resolving the structure of the Galactic foreground using Herschel measurements and the Kriging technique

    NASA Astrophysics Data System (ADS)

    Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.

    2018-05-01

    Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the embedded foreground and background point sources appearing in the observed fields; this is especially so in the infrared. We report our study of subtracting point sources from Herschel images with kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied at much higher resolution than previously possible, leading to better foreground subtraction.

  2. A Residual Kriging method for the reconstruction of 3D high-resolution meteorological fields from airborne and surface observations

    NASA Astrophysics Data System (ADS)

    Laiti, Lavinia; Zardi, Dino; de Franceschi, Massimiliano; Rampanelli, Gabriele

    2013-04-01

    Manned light aircraft and remotely piloted aircraft are very valuable and flexible measurement platforms for atmospheric research, as they can provide observations of the atmosphere above the ground surface at high temporal and spatial resolution. In the present study, the application of a geostatistical interpolation technique called Residual Kriging (RK) is proposed for mapping airborne measurements of scalar quantities onto regularly spaced 3D grids. In RK, the dominant (vertical) trend component underlying the original data is first extracted to filter out local anomalies; the residual field is then interpolated separately and finally added back to the trend. The interpolation weights are determined from an estimate of the characteristic covariance function of the residuals, via computation and modelling of their semivariogram. The RK implementation also allows inference of the characteristic spatial scales of variability of the target field and its isotropization, as well as an estimate of the interpolation error. The test-bed database consists of a series of flights of an instrumented motorglider exploring the atmosphere of two valleys near the city of Trento (in the southeastern Italian Alps), performed on fair-weather summer days. The RK method is used to reconstruct fully 3D high-resolution fields of potential temperature and mixing ratio for specific vertical slices of the valley atmosphere, also integrating ground-based measurements from the nearest surface weather stations. From the RK-interpolated meteorological fields, fine-scale features of the atmospheric boundary layer developing over the complex valley topography, in connection with the occurrence of thermally driven slope and valley winds, are detected. The performance of RK mapping is also tested against two other commonly adopted interpolation methods, the Inverse Distance Weighting and Delaunay triangulation methods, by comparing the results of a cross-validation procedure.
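
    Residual Kriging in miniature: remove the dominant vertical trend by linear regression, ordinary-krige the residuals under an assumed covariance model, then add the trend back. All values below (altitudes, temperatures, covariance parameters) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.uniform(0, 2000, 40)                       # sample altitudes (m)
theta = 290 - 0.006 * z + 0.3 * rng.standard_normal(40)  # potential temp. (K)

# 1) Extract the vertical trend.
b, a = np.polyfit(z, theta, 1)                     # slope, intercept
resid = theta - (a + b * z)

# 2) Ordinary kriging of the residuals (assumed exponential covariance).
def cov(d):
    return 0.09 * np.exp(-d / 500.0)               # sill 0.09, range 500 m

n = len(z)
K = np.zeros((n + 1, n + 1))
K[:n, :n] = cov(np.abs(z[:, None] - z[None, :])) + 1e-9 * np.eye(n)
K[:n, n] = K[n, :n] = 1.0                          # unbiasedness constraint

z_query = np.array([250.0, 1000.0, 1750.0])
rhs = np.zeros((n + 1, len(z_query)))
rhs[:n] = cov(np.abs(z[:, None] - z_query[None, :]))
rhs[n] = 1.0
w = np.linalg.solve(K, rhs)[:n]                    # kriging weights

# 3) Trend plus kriged residual.
print((a + b * z_query + w.T @ resid).round(2))
```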

  3. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process; however, using a block model with widely varying estimation/simulation variances leads to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be thought of as capturing technical uncertainties, but the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure in the optimization process is the interpolation variance, which accounts for data locations and grades. The problem is expressed as a minmax problem focused on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block with the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach: one space searches feasible drill hole configurations to minimize interpolation variance, while the other searches drill hole simulations to maximize it. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.

  4. Development of a Real-Time Hardware-in-the-Loop Power Systems Simulation Platform to Evaluate Commercial Microgrid Controllers

    DTIC Science & Technology

    2016-02-19

    power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models...custom control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions...for the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux

  5. TR-1203: Development of a Real-Time Hardware-in-the-Loop Power Systems Simulation Platform to Evaluate Commercial Microgrid Controllers

    DTIC Science & Technology

    2016-02-23

    power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models or...control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions to...the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux point

  6. Short-Term Memory; An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Reynolds, Donald; Rosenblatt, Richard D.

    This annotated bibliography on memory is divided into 12 areas: information theory; proactive and retroactive interference and interpolated activities; set, subject strategies, and coding techniques; paired associate studies; simultaneous listening and memory span studies; rate and mode of stimulus presentation; rate and order of recall, and…

  7. Super-resolution technique for CW lidar using Fourier transform reordering and Richardson-Lucy deconvolution.

    PubMed

    Campbell, Joel F; Lin, Bing; Nehrir, Amin R; Harrison, F Wallace; Obland, Michael D

    2014-12-15

    An interpolation method is described for range measurements of high precision altimetry with repeating intensity modulated continuous wave (IM-CW) lidar waveforms using binary phase shift keying (BPSK), where the range profile is determined by means of a cross-correlation between the digital form of the transmitted signal and the digitized return signal collected by the lidar receiver. This method uses reordering of the array elements in the frequency domain to convert a repeating synthetic pulse signal to a single highly interpolated pulse. The resolution of this pulse is then further improved using Richardson-Lucy deconvolution. We show that the sampling resolution and pulse width can be enhanced by about two orders of magnitude using the signal processing algorithms presented, thus breaking the fundamental resolution limit for BPSK modulation of a particular bandwidth and bit rate. We demonstrate the usefulness of this technique for determining cloud and tree canopy thicknesses far beyond this fundamental limit in a lidar not designed for this purpose.
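
    A minimal 1-D Richardson-Lucy iteration on a synthetic pulse, written directly in NumPy; the frequency-domain reordering step of the paper is not reproduced here, only the deconvolution stage that sharpens the correlation peak:

    ```python
    import numpy as np

    def richardson_lucy_1d(observed, psf, n_iter=50):
        """Iteratively sharpen a 1-D pulse blurred by a known point-spread function."""
        psf_mirror = psf[::-1]
        estimate = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, psf_mirror, mode="same")
        return estimate

    # Synthetic example: two closely spaced returns blurred by a wide correlation peak
    truth = np.zeros(512); truth[200] = 1.0; truth[230] = 0.6
    psf = np.exp(-0.5 * (np.arange(-64, 65) / 12.0) ** 2); psf /= psf.sum()
    observed = np.convolve(truth, psf, mode="same")
    sharp = richardson_lucy_1d(observed, psf, n_iter=200)  # recovers two resolvable peaks
    ```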

  8. An operational global-scale ocean thermal analysis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.

    1990-04-01

    The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.

  9. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more effective, especially when secondary variables are considered, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) methods have been used and compared to each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.

  10. Evolution of Western Mediterranean Sea Surface Temperature between 1985 and 2005: a complementary study in situ, satellite and modelling approaches

    NASA Astrophysics Data System (ADS)

    Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.

    2009-04-01

    In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are to underline the capability of each tool to capture characteristic phenomena, to compare and assess the quality of their outputs, and to infer an interannual trend from the results. Diva (Data Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous temporal data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. A heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolation with Empirical Orthogonal Functions, Alvera-Azc

  11. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

    The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three-dimensional unstructured grids composed of elements of different types. One of the difficulties in solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that also considers the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases, are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.

  12. MRI Superresolution Using Self-Similarity and Image Priors

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2010-01-01

    In typical clinical settings of Magnetic Resonance Imaging, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for posterior analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from the low-resolution ones, using information from coplanar high-resolution images acquired of the same subject. Furthermore, the reconstruction process is constrained to be physically plausible with respect to the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094

  13. Improved interpolation of meteorological forcings for hydrologic applications in a Swiss Alpine region

    NASA Astrophysics Data System (ADS)

    Tobin, Cara; Nicotina, Ludovico; Parlange, Marc B.; Berne, Alexis; Rinaldo, Andrea

    2011-04-01

    This paper presents a comparative study on the mapping of temperature and precipitation fields in complex Alpine terrain. Its relevance hinges on the major impact that inadequate interpolations of meteorological forcings bear on the accuracy of hydrologic predictions regardless of the specifics of the models, particularly during flood events. Three flood events measured in the Swiss Alps are analyzed in detail to determine the interpolation methods which best capture the distribution of intense, orographically-induced precipitation. The interpolation techniques comparatively examined include: Inverse Distance Weighting (IDW), Ordinary Kriging (OK), and Kriging with External Drift (KED). The geostatistical methods rely on a robust anisotropic variogram for the definition of the spatial rainfall structure. Results indicate that IDW tends to significantly underestimate rainfall volumes, whereas the OK and KED methods capture spatial patterns and rainfall volumes induced by storm advection. Using numerical weather forecasts and elevation data as covariates for precipitation, we provide evidence that KED outperforms the other methods. Most significantly, the use of elevation as auxiliary information in KED of temperatures yields minimal errors in estimated instantaneous rainfall volumes and provides instantaneous lapse rates which better capture snow/rainfall partitioning. Incorporation of the temperature and precipitation input fields into a hydrological model used for operational management was found to provide vastly improved outputs with respect to measured discharge volumes and flood peaks, with notable implications for flood modeling.

  14. New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda

    2014-05-01

    The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of the measurements required adaptations of the software tool (data format, parameter determinations, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in coastal areas. These currents can be constructed from the bathymetry or extracted from an HF radar located in the Balearic Sea.
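
    For context, the core of an Optimal Interpolation analysis is a single linear update; a toy 1-D NumPy version with a Gaussian covariance model is sketched below (DIVA reaches an equivalent result variationally, without forming these matrices explicitly):

    ```python
    import numpy as np

    def optimal_interpolation(xg, xo, yo, background, L=100.0, obs_err=0.1):
        """Analysis = background + K (obs - background at obs points)."""
        def gauss_cov(a, b):
            return np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)
        B_go = gauss_cov(xg, xo)                                  # grid/obs background covariance
        B_oo = gauss_cov(xo, xo) + obs_err**2 * np.eye(len(xo))   # obs covariance + noise
        K = B_go @ np.linalg.inv(B_oo)                            # gain matrix
        analysis = background + K @ (yo - np.interp(xo, xg, background))
        error_var = 1.0 - np.einsum("ij,ij->i", K, B_go)          # normalized expected error field
        return analysis, error_var

    xg = np.linspace(0, 1000, 201)                 # analysis grid (e.g., along-track km)
    xo = np.array([120.0, 400.0, 650.0, 900.0])    # observation locations
    yo = np.array([0.12, -0.05, 0.20, 0.03])       # SLA observations (m)
    sla, err = optimal_interpolation(xg, xo, yo, background=np.zeros_like(xg))
    ```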

  15. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function.

    PubMed

    Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie

    2013-09-01

    Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between coordinates estimated using building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile-range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -6.88, -0.56) and a 3.86-point decrease in FEV1% predicted (95% CI: -3.24, -0.14). The magnitude of the associations decreased when other geocoding approaches were used [e.g., for FVC% predicted, -2.81 (95% CI: -5.35, -0.26) using NavTEQ, or -2.08 (95% CI: -4.63, 0.47; p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.

  16. Geostatistical interpolation of hourly precipitation from rain gauges and radar for a large-scale extreme rainfall event

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2007-01-01

    The methods kriging with external drift (KED) and indicator kriging with external drift (IKED) are used for the spatial interpolation of hourly rainfall from rain gauges, using additional information from radar, daily precipitation of a denser network, and elevation. The techniques are illustrated using data from the storm period of the 10th to the 13th of August 2002 that led to the extreme flood event in the Elbe river basin in Germany. Cross-validation is applied to compare the interpolation performance of the KED and IKED methods using different additional information with the univariate reference methods nearest neighbour (NN) or Thiessen polygons, inverse square distance weighting (IDW), ordinary kriging (OK) and ordinary indicator kriging (IK). Special attention is given to the analysis of the impact of the semivariogram estimation on the interpolation performance. Hourly and average semivariograms are inferred from daily, hourly and radar data considering either isotropic or anisotropic behaviour, using automatic and manual fitting procedures. The multivariate methods KED and IKED clearly outperform the univariate ones, with the most important additional information being radar, followed by precipitation from the daily network and elevation, which plays only a secondary role here. The best performance is achieved when all additional information is used simultaneously with KED. The indicator-based kriging methods provide, in some cases, smaller root mean square errors than the methods which use the original data, but at the expense of a significant loss of variance. The impact of the semivariogram on interpolation performance is not very high. The best results are obtained using an automatic fitting procedure with isotropic variograms from either hourly or radar data.
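
    The semivariogram-estimation step emphasized above can be sketched compactly; an isotropic empirical semivariogram computed from hypothetical scattered gauge data:

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, n_bins=15, max_lag=None):
        """Isotropic semivariogram: gamma(h) = half the mean squared difference at lag h."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sqdiff = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)          # count each station pair once
        lags, gammas = d[iu], sqdiff[iu]
        max_lag = max_lag or lags.max() / 2
        edges = np.linspace(0, max_lag, n_bins + 1)
        which = np.digitize(lags, edges) - 1
        gamma = np.array([gammas[which == b].mean() if np.any(which == b) else np.nan
                          for b in range(n_bins)])
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, gamma

    # Hypothetical hourly-rainfall gauges
    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 100, (80, 2))               # station coordinates (km)
    rain = rng.gamma(2.0, 1.5, 80)                      # hourly rainfall (mm)
    h, gamma = empirical_semivariogram(coords, rain)    # ready for model fitting
    ```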

  17. Local-scale spatial modelling for interpolating climatic temperature variables to predict agricultural plant suitability

    NASA Astrophysics Data System (ADS)

    Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minasny, Budiman

    2016-05-01

    Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R²) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R² and reduced root-mean-square error (RMSE) by at least 5% and 0.47 °C for daily minimum temperature, 1% and 0.23 °C for daily maximum temperature and 3% and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.

  18. An Experimental and Analytical Investigation of Stirling Space Power Converter Heater Head

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Bartolotta, Paul; Tong, Mike; Allen, Gorden

    1995-01-01

    NASA has identified the Stirling power converter as a prime candidate for the next generation power system for space applications requiring 60000 hr of operation. To meet this long-term goal, several critical components of the power converter have been analyzed using advanced structural assessment methods. Perhaps the most critical component, because of its geometric complexity and operating environment, is the power converter's heater head. This report describes the life assessment of the heater head which includes the characterization of a viscoplastic material model, the thermal and structural analyses of the heater head, and the interpolation of fatigue and creep test results of a nickel-base superalloy, Udimet 720 LI (Low Inclusions), at several elevated temperatures for life prediction purposes.

  19. An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Lessard, Victor R.

    1990-01-01

    The objective is to develop a domain decomposition method via overlapping/embedding the component grids, to be used by upwind, multigrid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multigrid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multigridding.
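
    The trilinear transfer step is compact enough to show directly; a unit-cell version in Python (a solver such as MaGGiE would first locate the donor cell and compute the fractional coordinates):

    ```python
    import numpy as np

    def trilinear(c, fx, fy, fz):
        """Interpolate inside a unit cell from its 8 corner values c[i, j, k],
        with fractional coordinates fx, fy, fz in [0, 1]."""
        c00 = c[0, 0, 0] * (1 - fx) + c[1, 0, 0] * fx
        c10 = c[0, 1, 0] * (1 - fx) + c[1, 1, 0] * fx
        c01 = c[0, 0, 1] * (1 - fx) + c[1, 0, 1] * fx
        c11 = c[0, 1, 1] * (1 - fx) + c[1, 1, 1] * fx
        c0 = c00 * (1 - fy) + c10 * fy
        c1 = c01 * (1 - fy) + c11 * fy
        return c0 * (1 - fz) + c1 * fz

    corners = np.arange(8, dtype=float).reshape(2, 2, 2)  # a linear field on the corners
    print(trilinear(corners, 0.5, 0.5, 0.5))              # 3.5, the exact cell-centre value
    ```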

  20. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
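
    The first of the two approximation models is straightforward to reproduce; a least-squares quadratic response surface in two variables, fitted with NumPy (the kriging alternative would come from a geostatistics package such as pykrige or a Gaussian-process library):

    ```python
    import numpy as np

    def quadratic_design(X):
        """Design matrix [1, x1, x2, x1^2, x1*x2, x2^2] for a full quadratic in 2 variables."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, (50, 2))             # sampled design points
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # a response with multiple local extrema

    beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
    y_hat = quadratic_design(X) @ beta          # the quadratic model smooths over the extrema
    print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
    ```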

  1. Intensity-Curvature Measurement Approaches for the Diagnosis of Magnetic Resonance Imaging Brain Tumors.

    PubMed

    Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A

    2015-11-01

    This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches, with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.

  2. Intensity-Curvature Measurement Approaches for the Diagnosis of Magnetic Resonance Imaging Brain Tumors

    PubMed Central

    Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A.

    2015-01-01

    This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches, with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional. PMID:26644943

  3. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate the surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate top-of-the-atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look-up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce the computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
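
    A hedged sketch of the surrogate idea, assuming scikit-learn and a toy stand-in for the radiative transfer model: train a small network on sampled RTM inputs and outputs, then evaluate it in place of a dense look-up table:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def toy_rtm(X):
        """Hypothetical stand-in for a TOA-radiance calculation (not VLIDORT)."""
        sza, vza, albedo = X[:, 0], X[:, 1], X[:, 2]
        return np.cos(np.radians(sza)) * (0.1 + 0.8 * albedo) / (1 + 0.2 * np.cos(np.radians(vza)))

    rng = np.random.default_rng(3)
    X_train = np.column_stack([rng.uniform(0, 80, 2000),   # solar zenith angle (deg)
                               rng.uniform(0, 70, 2000),   # viewing zenith angle (deg)
                               rng.uniform(0, 1, 2000)])   # surface albedo
    y_train = toy_rtm(X_train)

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)                              # replaces the dense LUT
    print("max abs error:", np.abs(net.predict(X_train) - y_train).max())
    ```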

  4. Ground and satellite based assessment of meteorological droughts: The Coello river basin case study

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, A. F.; Olaya-Marín, E. J.; Barrios, M. I.

    2017-10-01

    The spatial distribution of droughts is a key factor for designing water management policies at the basin scale in arid and semi-arid regions. Ground hydro-meteorological data in neo-tropical areas are scarce; therefore, the merging of ground and satellite datasets is a promising approach for improving our understanding of water distribution. This paper compares three monthly rainfall interpolation methods for drought evaluation. The ordinary kriging technique based on ground data, and cokriging with elevation as an auxiliary variable, were compared against cokriging using the Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA). Twenty rain gauge stations and the 3B42V7 version of the TMPA research dataset were considered. Comparisons were made over the Coello river basin (Colombia) at 3″ spatial resolution covering a period of eight years (1998-2005). The best spatial rainfall estimation was found for cokriging using ground data and elevation. The spatial support of the TMPA dataset is very coarse for a merged interpolation with ground data; this spatial-scale discrepancy highlights the need to consider scaling rules in the interpolation process.

  5. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.

  6. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.

  7. Assessment of spatial distribution of fallout radionuclides through geostatistics concept.

    PubMed

    Mabit, L; Bernard, C

    2007-01-01

    After introducing the geostatistics concept and its utility in environmental science, and especially in Fallout Radionuclide (FRN) spatialisation, a case study of cesium-137 (¹³⁷Cs) redistribution at the field scale using geostatistics is presented. On a Canadian agricultural field, geostatistics coupled with a Geographic Information System (GIS) was used to test three different interpolation techniques [Ordinary Kriging (OK), Inverse Distance Weighting to the power one (IDW1) and two (IDW2)] to create a ¹³⁷Cs map and to establish a radioisotope budget. Following the optimization of variographic parameters, an experimental semivariogram was developed to determine the spatial dependence of ¹³⁷Cs. It was adjusted to a spherical isotropic model with a range of 30 m and a very small nugget effect. This ¹³⁷Cs semivariogram showed a good autocorrelation (R² = 0.91) and was well structured ('nugget-to-sill' ratio of 4%). It also revealed that the sampling strategy was adequate to reveal the spatial correlation of ¹³⁷Cs. The spatial redistribution of ¹³⁷Cs was estimated by Ordinary Kriging and IDW to produce contour maps. A radioisotope budget was established for the 2.16 ha agricultural field under investigation. It was estimated that around 2 × 10⁷ Bq of ¹³⁷Cs were missing (around 30% of the total initial fallout) and were exported by physical processes (runoff and erosion) from the area under investigation. The cross-validation analysis showed that, in the case of spatially structured data, OK is a better interpolation method than IDW1 or IDW2 for the assessment of potential radioactive contamination and/or pollution.

  8. Solar resource assessment in complex orography: a comparison of available datasets for the Trentino region

    NASA Astrophysics Data System (ADS)

    Laiti, Lavinia; Giovannini, Lorenzo; Zardi, Dino

    2015-04-01

    The accurate assessment of the solar radiation available at the Earth's surface is essential for a wide range of energy-related applications, such as the design of solar power plants, water heating systems and energy-efficient buildings, as well as in the fields of climatology, hydrology, ecology and agriculture. The characterization of solar radiation is particularly challenging in complex-orography areas, where topographic shadowing and altitude effects, together with local weather phenomena, greatly increase the spatial and temporal variability of this variable. At present, approaches ranging from the interpolation of surface measurements, to orographic downscaling of satellite data, to numerical model simulations are adopted for mapping solar radiation. In this contribution a high-resolution (200 m) solar atlas for the Trentino region (Italy) is presented, which was recently developed on the basis of hourly observations of global radiation collected from the local radiometric stations during the period 2004-2012. Monthly and annual climatological irradiation maps were obtained by the combined use of a GIS-based clear-sky model (the r.sun module of GRASS GIS) and geostatistical interpolation techniques (kriging). Moreover, satellite radiation data derived by the MeteoSwiss HelioMont algorithm (2 km resolution) were used for missing-data reconstruction and for the final mapping, thus integrating ground-based and remote-sensing information. The results are compared with existing solar resource datasets, such as the PVGIS dataset, produced by the Joint Research Center Institute for Energy and Transport, and the HelioMont dataset, in order to evaluate the accuracy of the different datasets available for the region of interest.

  9. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    Failure caused by seepage is the most common failure mode in dike engineering. Seepage in dikes, which are longitudinally extended structures, is characterized by randomness, strong concealment and a small initial magnitude. By means of a distributed fiber temperature sensor system (DTS) and an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltrated surface of the full dike section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied in an engineering case, which shows that a real-time seepage monitoring technique for the full dike section is obtained.
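
    The barycentric Lagrange form underlying BLICM is available directly in SciPy; a small usage sketch with hypothetical saturation-line samples along a dike cross-section:

    ```python
    import numpy as np
    from scipy.interpolate import BarycentricInterpolator

    # Hypothetical saturation-line samples along a dike cross-section (x in m, head in m)
    x_nodes = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    head = np.array([8.2, 7.6, 6.9, 6.1, 5.0])

    phreatic = BarycentricInterpolator(x_nodes, head)  # barycentric Lagrange interpolation
    x_fine = np.linspace(0, 20, 101)
    print(phreatic(x_fine)[:5])                        # interpolated phreatic surface
    ```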

  10. Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST

    NASA Astrophysics Data System (ADS)

    Kerbel, G.; Xiong, Z.

    2006-10-01

    Early implementations of the Fokker-Planck collision operator and moment computations in TEMPEST used low order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate, we developed an alternative higher order interpolation scheme for the Rosenbluth potentials and a high order finite volume method in TEMPEST (,) coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and then the fluxes for the conservative differencing are computed directly in the TEMPEST (,) coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.

  11. An efficient predictor-corrector-based dynamic mesh method for multi-block structured grid with extremely large deformation and its applications

    NASA Astrophysics Data System (ADS)

    Guo, Tongqing; Chen, Hao; Lu, Zhiliang

    2018-05-01

    Aiming at extremely large deformation, a novel predictor-corrector-based dynamic mesh method for multi-block structured grid is proposed. In this work, the dynamic mesh generation is completed in three steps. At first, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deforming technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with the traditional methods, the results demonstrate that the present method shows stronger deformation ability and higher dynamic mesh quality.

  12. Analysis of control system responses for aircraft stability and efficient numerical techniques for real-time simulations

    NASA Astrophysics Data System (ADS)

    Stroe, Gabriela; Andrei, Irina-Carmen; Frunzulica, Florin

    2017-01-01

    The objectives of this paper are the study and the implementation of both aerodynamic and propulsion models, as linear interpolations using look-up tables in a database. The aerodynamic and propulsion dependencies on state and control variables have been described by analytic polynomial models. Some simplifying hypotheses were made in the development of the nonlinear aircraft simulations. The choice of a particular technique depends on the desired accuracy of the solution and the computational effort to be expended. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. The engine power dynamic response was modeled with an additional state equation, as a first-order lag in the actual power level response to the commanded power level, which was computed as a function of throttle position. The number of control inputs and engine power states varied depending on the number of control surfaces and aircraft engines. The set of coupled, nonlinear, first-order ordinary differential equations that comprise the simulation model can be represented by a vector differential equation. A linear time-invariant (LTI) system representing the aircraft dynamics for small perturbations about a reference trim condition is given by the state and output equations. The gradients are obtained numerically by perturbing each state and control input independently and recording the changes in the trimmed state and output equations. This is done using the numerical technique of central finite differences, applied to perturbations of the state and control variables. For a reference trim condition of straight and level flight, linearization results in two decoupled sets of linear, constant-coefficient differential equations for longitudinal and lateral/directional motion. The linearization is valid for small perturbations about the reference trim condition. Experimental aerodynamic and thrust data are used to model the applied aerodynamic and propulsion forces and moments for arbitrary states and controls. There is no closed-form solution to such problems, so the equations must be solved using numerical integration. Techniques for solving this initial value problem for ordinary differential equations are employed to obtain approximate solutions at discrete points along the aircraft state trajectory.
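
    The central finite-difference linearization step can be shown generically; given any nonlinear state function f(x, u), the LTI matrices A and B follow from independent perturbations about the trim point (toy dynamics below, not an actual aircraft model):

    ```python
    import numpy as np

    def linearize(f, x0, u0, eps=1e-6):
        """A = df/dx, B = df/du at the trim point (x0, u0) via central finite differences."""
        n, m = len(x0), len(u0)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    # Toy pendulum-like dynamics standing in for the trimmed airframe model
    def f(x, u):
        return np.array([x[1], -0.5 * x[1] - 2.0 * np.sin(x[0]) + u[0]])

    A, B = linearize(f, x0=np.zeros(2), u0=np.zeros(1))
    ```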

  13. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    PubMed

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
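
    The quadratic-fitting variant has a convenient closed form: fitting a parabola through the brightest pixel and its two neighbours gives the sub-pixel offset directly (a 1-D sketch; the 2-D case applies it along each axis):

    ```python
    import numpy as np

    def subpixel_peak(profile):
        """Sub-pixel peak location via a 3-point parabola around the brightest pixel."""
        i = int(np.argmax(profile))                     # assumes the peak is not at an edge
        ym, y0, yp = profile[i - 1], profile[i], profile[i + 1]
        offset = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)   # vertex of the fitted parabola
        return i + offset

    # A Gaussian-like fiducial profile whose true centre lies between pixels
    x = np.arange(16, dtype=float)
    profile = np.exp(-0.5 * ((x - 7.3) / 1.5) ** 2)
    print(subpixel_peak(profile))                       # close to 7.3
    ```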

  14. A 3-D chimera grid embedding technique

    NASA Technical Reports Server (NTRS)

    Benek, J. A.; Buning, P. G.; Steger, J. L.

    1985-01-01

    A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.

  15. The GIS and Remote Sensing Technologies for Classifying Intertidal Submerged Aquatic Vegetation And Interpolating Estuarine Topobathymetry

    EPA Science Inventory

    I will present answers to the following three questions regarding “GIS techniques to assess the bathymetric distribution of existing SAV habitats” and “Nearshore landscape characteristics that permit or prohibit SAV migration with SLR”. 1. What is the tech...

  16. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.

  17. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  18. Ordinary kriging vs inverse distance weighting: spatial interpolation of the sessile community of Madagascar reef, Gulf of Mexico.

    PubMed

    Zarco-Perello, Salvador; Simões, Nuno

    2017-01-01

    Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances, prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use.
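
    A sketch of the leave-one-out cross-validation comparison, assuming pykrige for ordinary kriging and a hand-rolled IDW, on hypothetical abundance samples:

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging  # assumes pykrige is installed

    def idw(xy_known, v_known, xy_query, power=2.0):
        d = np.linalg.norm(xy_known - xy_query, axis=1)
        if np.any(d == 0):
            return v_known[np.argmin(d)]
        w = 1.0 / d**power
        return np.sum(w * v_known) / np.sum(w)

    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 100, (60, 2))                       # sample-point coordinates (m)
    v = np.sin(xy[:, 0] / 15) + 0.1 * rng.normal(size=60)   # e.g., octocoral cover

    err_idw, err_ok = [], []
    for i in range(len(v)):                                 # leave-one-out cross-validation
        mask = np.arange(len(v)) != i
        err_idw.append(idw(xy[mask], v[mask], xy[i]) - v[i])
        ok = OrdinaryKriging(xy[mask, 0], xy[mask, 1], v[mask], variogram_model="spherical")
        pred, _ = ok.execute("points", xy[i:i + 1, 0], xy[i:i + 1, 1])
        err_ok.append(pred[0] - v[i])

    print("IDW RMSE:", np.sqrt(np.mean(np.square(err_idw))))
    print("OK  RMSE:", np.sqrt(np.mean(np.square(err_ok))))
    ```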

  19. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.

  20. Ordinary kriging vs inverse distance weighting: spatial interpolation of the sessile community of Madagascar reef, Gulf of Mexico

    PubMed Central

    Simões, Nuno

    2017-01-01

    Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances, prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use. PMID:29204321

  1. Dynamic data-driven integrated flare model based on self-organized criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2013-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state. We describe them with a dynamic integrated flare model whose initial conditions and driving mechanism are derived from observations. Aims: We investigate whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy, and event duration follow the expected scaling laws, we first applied the previously reported static cellular automaton model to a time series of seven solar vector magnetograms of the NOAA active region 8210 recorded by the Imaging Vector Magnetograph on May 1, 1998 between 18:59 UT and 23:16 UT until the self-organized critical state was reached. We then evolved the magnetic field between these processed snapshots through spline interpolation, mimicking a natural driver in our dynamic model. We identified magnetic discontinuities that exceeded a threshold in the Laplacian of the magnetic field after each interpolation step. These discontinuities were relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent interpolation and relaxation steps covered all transitions until the end of the processed magnetograms' sequence. We additionally advanced each magnetic configuration that had reached the self-organized critical state (SOC configuration) by the static model until 50 more flares were triggered, applied the dynamic model again to the new sequence, and repeated the same process sufficiently often to generate adequate statistics. Physical requirements, such as the divergence-free condition for the magnetic field, were approximately imposed. Results: We obtain robust power laws in the distribution functions of the modeled flaring events with scaling indices that agree well with observations. Peak and total flare energy obey single power laws with indices -1.65 ± 0.11 and -1.47 ± 0.13, while the flare duration is best fitted with a double power law (-2.15 ± 0.15 and -3.60 ± 0.09 for the flatter and steeper parts, respectively). Conclusions: We conclude that well-known statistical properties of flares are reproduced after active regions reach the state of self-organized criticality. A significant enhancement of our refined cellular automaton model is that it initiates and further drives the simulation from observed evolving vector magnetograms, thus facilitating energy calculation in physical units, while a separation between MHD and kinetic timescales is possible by assigning distinct MHD timestamps to each interpolation step.
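
    The driving step — evolving the field between observed snapshots by spline interpolation in time — can be illustrated compactly. The sketch below uses SciPy's CubicSpline along the time axis of a stack of snapshots; the observation times, grid size, and random stand-in values are hypothetical, and none of the cellular-automaton machinery is included.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical: seven magnetogram snapshots (tiny 3x3 stand-ins for Bz)
    # at their observation times in hours; the driver needs the field between.
    t_obs = np.array([0.0, 0.7, 1.5, 2.2, 2.9, 3.6, 4.3])
    snaps = np.random.default_rng(0).normal(size=(7, 3, 3))

    spline = CubicSpline(t_obs, snaps, axis=0)   # one cubic spline per pixel

    t_fine = np.linspace(0.0, 4.3, 50)           # driver timestamps between snaps
    b_interp = spline(t_fine)                    # shape (50, 3, 3)
    ```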

  2. High resolution frequency analysis techniques with application to the redshift experiment

    NASA Technical Reports Server (NTRS)

    Decher, R.; Teuber, D.

    1975-01-01

    High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment, a resolution of 0.00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
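
    Of the building blocks listed, Lagrange interpolation is easy to show in full. The sketch below evaluates the Lagrange polynomial through a handful of samples at an intermediate time; the sample values (a toy stand-in for a slowly varying ~1 Hz signal) and the function name are illustrative only.

    ```python
    def lagrange_eval(xs, ys, x):
        """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
        total = 0.0
        n = len(xs)
        for i in range(n):
            li = 1.0                           # i-th Lagrange basis polynomial
            for j in range(n):
                if j != i:
                    li *= (x - xs[j]) / (xs[i] - xs[j])
            total += ys[i] * li
        return total

    # Estimate a ~1 Hz signal between samples taken every 0.25 s
    xs = [0.0, 0.25, 0.5, 0.75]
    ys = [0.0, 1.0, 0.0, -1.0]                 # sin(2*pi*t) at those times
    print(lagrange_eval(xs, ys, 0.125))        # value between the first two samples
    ```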

  3. A deterministic width function model

    NASA Astrophysics Data System (ADS)

    Puente, C. E.; Sivakumar, B.

    Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.

  4. Seasat. Volume 4: Attitude determination

    NASA Technical Reports Server (NTRS)

    Treder, A. J.

    1980-01-01

    The Seasat project was a feasibility demonstration of the use of orbital remote sensing for global ocean observation. The satellite was launched in June 1978 and was operated successfully until October 1978. A massive electrical failure occurred in the power system, terminating the mission prematurely. The actual implementation of the Seasat Attitude Determination system and the contents of the attitude data files generated by that system are documented. The deviations from plan caused by the anomalous Sun interference with horizon sensors, inflight calibration of Sun sensor head 2 alignment and horizon sensor biases, estimation of yaw interpolation parameters, Sun and horizon sensor error sources, and yaw interpolation accuracy are included. Examples are given of flight attitude data from all modes of the Orbital Attitude Control System, of the ground processing effects on attitude data, and of cold cloud effects on pitch and roll data.

  5. Comparison of geostatistical interpolation and remote sensing techniques for estimating long-term exposure to ambient PM2.5 concentrations across the continental United States.

    PubMed

    Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael

    2012-12-01

    A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations within about 100 km of a monitoring station, whereas the remote sensing estimate was more accurate for locations that were > 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.

  6. Pycortex: an interactive surface visualizer for fMRI

    PubMed Central

    Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.

    2015-01-01

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate and most common visualization techniques rely on unnecessary interpolation which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666

  7. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: Local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard deviations in the differences between measured iodine mass in left anterior descending arteries using DSA and LA, MF, LI, or CDD were calculated. The standard deviations in the DSA-LA and DSA-MF differences (both ≈21 mg) were approximately a factor of 3 greater than that of the DSA-LI and DSA-CDD differences (both ≈7 mg). Local averaging and morphological filtering were considered inadequate for use in quantitative densitometry. Linear interpolation and curvature-driven diffusion image inpainting were found to be effective techniques for use with densitometry in quantifying iodine mass in vitro and in vivo. They can be used with unsubtracted images to estimate background anatomical signals and obtain accurate densitometry results. The high level of accuracy and precision in quantification associated with using LI and CDD suggests the potential of these techniques in applications where background mask images are difficult to obtain, such as lumen volume and blood flow quantification using coronary arteriography.

  8. An Experimental Weight Function Method for Stress Intensity Factor Calibration.

    DTIC Science & Technology

    1980-04-01

    in accuracy to the ones obtained by Macha (Reference 10) for the laser interferometry technique. The values of KI from the interpolating polynomial...Measurement. Air Force Material Laboratories, AFML-TR-74-75, July 1974. 10. D. E. Macha , W. N. Sharpe Jr., and A. F. Grandt Jr., A Laser Interferometry

  9. Visualizing 3-D microscopic specimens

    NASA Astrophysics Data System (ADS)

    Forsgren, Per-Ola; Majlof, Lars L.

    1992-06-01

    The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared to other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, maximum intensity and other techniques, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three-dimensional imaging devices give recordings where one dimension has different resolution and sampling than the other two, which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance. We present results of some ways to simplify the calculations.

  10. How to detect and reduce movement artifacts in near-infrared imaging using moving standard deviation and spline interpolation.

    PubMed

    Scholkmann, F; Spichtig, S; Muehlemann, T; Wolf, M

    2010-05-01

    Near-infrared imaging (NIRI) is a neuroimaging technique which enables us to non-invasively measure hemodynamic changes in the human brain. Since the technique is very sensitive, the movement of a subject can cause movement artifacts (MAs), which strongly affect the signal quality and results. No general method is yet available to reduce these MAs effectively. The aim was to develop a new MA reduction method. A method based on moving standard deviation and spline interpolation was developed. It enables the semi-automatic detection and reduction of MAs in the data. It was validated using simulated and real NIRI signals. The results show that a significant reduction of MAs and an increase in signal quality are achieved. The effectiveness and usability of the method is demonstrated by the improved detection of evoked hemodynamic responses. The present method can not only be used in the postprocessing of NIRI signals but also for other kinds of data containing artifacts, for example ECG or EEG signals.
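
    The detect-and-interpolate idea can be sketched briefly: flag samples whose local (moving) standard deviation is abnormally high, then re-estimate them by spline interpolation through the clean neighbors. The Python fragment below is a simplified illustration; the window length, the threshold rule, and the function name are assumptions, not the authors' published parameters.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def reduce_motion_artifacts(sig, win=10, thresh=3.0):
        """Flag high-variance samples via a moving std, bridge them by spline.

        Samples whose local standard deviation exceeds `thresh` times the
        median local std are treated as movement artifacts and re-estimated
        from the surrounding artifact-free samples.
        """
        n = len(sig)
        mstd = np.array([sig[max(0, i - win):i + win].std() for i in range(n)])
        bad = mstd > thresh * np.median(mstd)
        spline = CubicSpline(np.flatnonzero(~bad), sig[~bad])
        out = sig.copy()
        out[bad] = spline(np.flatnonzero(bad))
        return out
    ```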

  11. Computational aeroelasticity using a pressure-based solver

    NASA Astrophysics Data System (ADS)

    Kamakoti, Ramji

    A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall-function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. Wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experimental and previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.

  12. Combinations of Earth Orientation Measurements: SPACE94, COMB94, and POLE94

    NASA Technical Reports Server (NTRS)

    Gross, Richard S.

    1996-01-01

    A Kalman filter has been used to combine independent measurements of the Earth's orientation taken by the space-geodetic observing techniques of lunar laser ranging, satellite laser ranging, very long baseline interferometry, and the Global Positioning System. Prior to their combination, the data series were adjusted to have the same bias and rate, the stated uncertainties of the measurements were adjusted, and data points considered to be outliers were deleted. The resulting combination, SPACE94, consists of smoothed, interpolated polar motion and UT1-UTC values spanning October 6, 1976, to January 27, 1995, at 1-day intervals. The Kalman filter was then used to combine the space-geodetic series comprising SPACE94 with two different, independent series of Earth orientation measurements taken by the technique of optical astrometry. Prior to their combination with SPACE94, the bias, rate and annual term of the optical astrometric series were corrected, the stated uncertainties of the measurements were adjusted, and data points considered to be outliers were deleted. The adjusted optical astrometric series were then combined with SPACE94 in two steps: (1) the Bureau International de l'Heure (BIH) optical astrometric series was combined with SPACE94 to form COMB94, a combined series of smoothed, interpolated polar motion and UT1-UTC values spanning January 20, 1962, to January 27, 1995, at 5-day intervals, and (2) the International Latitude Service (ILS) optical astrometric series was combined with COMB94 to form POLE94, a combined series of smoothed, interpolated polar motion values spanning January 20, 1900, to January 21, 1995, at 30.4375-day intervals.

  13. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
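
    The linear system in question is the kriging system itself: a symmetric matrix of pairwise covariances whose solution gives the interpolation weights. A minimal dense ordinary-kriging sketch in Python/NumPy follows; the exponential covariance, its sill and range values, and the function name are illustrative assumptions, and the point of the work above is precisely to avoid this O(n³) dense solve for large n.

    ```python
    import numpy as np

    def ordinary_kriging(xy, z, xy0, sill=1.0, rng=10.0):
        """Predict z at xy0 by ordinary kriging with an exponential covariance."""
        n = len(xy)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = sill * np.exp(-d / rng)    # symmetric covariance block
        A[n, n] = 0.0                          # Lagrange row/column: weights sum to 1
        d0 = np.linalg.norm(xy - xy0, axis=1)
        b = np.append(sill * np.exp(-d0 / rng), 1.0)
        w = np.linalg.solve(A, b)              # weights + Lagrange multiplier
        return w[:n] @ z

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    z = np.array([1.0, 2.0, 3.0])
    print(ordinary_kriging(xy, z, np.array([0.5, 0.5])))
    ```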

  14. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent N × N resolution points would actually require a hologram made up of N × N sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation can be reduced by using an interpolation technique for reconstructed image points. We have made a mosaic hologram which consists of K × K subholograms with N × N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK × NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK × NK sample points is synthesized by K × K subholograms which are successively calculated from the data of N × N sample points and also successively plotted.

  15. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers are capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  16. 3D reconstruction techniques made easy: know-how and pictures.

    PubMed

    Luccichenti, Giacomo; Cademartiri, Filippo; Pezzella, Francesca Romana; Runza, Giuseppe; Belgrano, Manuel; Midiri, Massimo; Sabatini, Umberto; Bastianello, Stefano; Krestin, Gabriel P

    2005-10-01

    Three-dimensional reconstructions represent a visual-based tool for illustrating the basis of three-dimensional post-processing such as interpolation, ray-casting, segmentation, percentage classification, gradient calculation, shading and illumination. Knowledge of the optimal scanning and reconstruction parameters facilitates the use of three-dimensional reconstruction techniques in clinical practice. The aim of this article is to explain the principles of multidimensional image processing in a pictorial way, together with the advantages and limitations of the different possibilities of 3D visualisation.

  17. Fundamental techniques for resolution enhancement of average subsampled images

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Chiu, Chui-Wen

    2012-07-01

    Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated but still exists, in its LR version, through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre and post process, respectively, with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple in implementation and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to averaged, subsampled one-dimensional signals.

  18. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, typical degradations observed in endoscopic images are barrel-type distortions. In this work we investigate the impact of methods used to correct such distortions in images on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various different distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to be able to make solid statements about the benefit of distortion correction, we use several different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly due to the fact that a potential benefit of distortion correction depends highly on the feature extraction method used for the classification. PMID:23981585

  19. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
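
    The quadratic (parabolic) estimator at issue fits a parabola to the correlation peak and its two neighbors and reads off the vertex. A minimal 1-D Python sketch follows; the three-point formula is the standard one, and the toy correlation values are invented for illustration — the pull-toward-pixel-center bias the paper analyzes arises when the true peak shape deviates from a parabola.

    ```python
    import numpy as np

    def quadratic_subpixel(corr, i):
        """Sub-pixel peak offset from a parabola fit to corr[i-1], corr[i], corr[i+1].

        Returns a fractional offset in (-0.5, 0.5) to add to the integer peak
        index i.
        """
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        return 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

    corr = np.array([0.2, 0.8, 1.0, 0.7, 0.1])     # toy correlation slice
    i = int(np.argmax(corr))
    print(i + quadratic_subpixel(corr, i))         # 2 + (-0.1) = 1.9
    ```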

  20. Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo

    NASA Astrophysics Data System (ADS)

    Kosempa, M.; Chambers, D. P.

    2016-02-01

    Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry will impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, and then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids to compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?

  1. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

    As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well suited to interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.

  2. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    PubMed

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) prefer cubic over linear interpolation (slight effect); and (3) interpolation densities of 2, 4, and 8 times differ only nominally (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
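
    The first recommendation — filter, then interpolate — is easy to demonstrate with SciPy's ndimage routines, where zoom order 1 gives bilinear and order 3 gives cubic-spline resampling. The Gaussian kernel width, the 4x density, and the toy pressure map below are assumptions for illustration, not the study's protocol.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    pmap = np.random.default_rng(1).random((16, 16)) * 200.0   # toy pressure map

    # Recommended order of operations: low-pass filter first, then interpolate
    smoothed = gaussian_filter(pmap, sigma=1.0)
    up_lin = zoom(smoothed, 4, order=1)    # bilinear, 4x sampling density
    up_cub = zoom(smoothed, 4, order=3)    # cubic spline, 4x sampling density

    print(up_lin.shape, up_cub.shape)      # both (64, 64)
    ```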

  3. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
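
    The proposed head interpolation is plain bilinear weighting of the four surrounding large-scale cell-center values. A self-contained sketch follows; the function name, corner labels, and sample heads are hypothetical.

    ```python
    def bilinear_head(h00, h10, h01, h11, fx, fy):
        """Head at a perimeter point from the four enclosing cell-center heads.

        (h00, h10, h01, h11): heads at the corners of the enclosing rectangle
        of large-scale cell centers; (fx, fy): fractional position within it,
        each in [0, 1].
        """
        return (h00 * (1 - fx) * (1 - fy) + h10 * fx * (1 - fy)
                + h01 * (1 - fx) * fy + h11 * fx * fy)

    # Point a quarter of the way along x and halfway along y between centers
    print(bilinear_head(10.0, 12.0, 11.0, 14.0, 0.25, 0.5))   # 11.125
    ```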

  4. Rigid-Cluster Models of Conformational Transitions in Macromolecular Machines and Assemblies

    PubMed Central

    Kim, Moon K.; Jernigan, Robert L.; Chirikjian, Gregory S.

    2005-01-01

    We present a rigid-body-based technique (called rigid-cluster elastic network interpolation) to generate feasible transition pathways between two distinct conformations of a macromolecular assembly. Many biological molecules and assemblies consist of domains which act more or less as rigid bodies during large conformational changes. These collective motions are thought to be strongly related with the functions of a system. This fact encourages us to simply model a macromolecule or assembly as a set of rigid bodies which are interconnected with distance constraints. In previous articles, we developed coarse-grained elastic network interpolation (ENI) in which, for example, only Cα atoms are selected as representatives in each residue of a protein. We interpolate distance differences of two conformations in ENI by using a simple quadratic cost function, and the feasible conformations are generated without steric conflicts. Rigid-cluster interpolation is an extension of the ENI method with rigid-clusters replacing point masses. Now the intermediate conformations in an anharmonic pathway can be determined by the translational and rotational displacements of large clusters in such a way that distance constraints are observed. We present the derivation of the rigid-cluster model and apply it to a variety of macromolecular assemblies. Rigid-cluster ENI is then modified for a hybrid model represented by a mixture of rigid clusters and point masses. Simulation results show that both rigid-cluster and hybrid ENI methods generate sterically feasible pathways of large systems in a very short time. For example, the HK97 virus capsid is an icosahedral symmetric assembly composed of 60 identical asymmetric units. Its original Hessian matrix size for a Cα coarse-grained model is greater than (300,000)². However, it reduces to (84)² when we apply the rigid-cluster model with icosahedral symmetry constraints. The computational cost of the interpolation no longer scales heavily with the size of structures; instead, it depends strongly on the minimal number of rigid clusters into which the system can be decomposed. PMID:15833998

  5. Satellite radiothermovision of atmospheric mesoscale processes: case study of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Ermakov, D. M.; Sharkov, E. A.; Chernushich, A. P.

    2015-04-01

    Satellite radiothermovision is a set of processing techniques applicable to multisource data of radiothermal monitoring of the ocean-atmosphere system, which allows creating a dynamic description of mesoscale and synoptic atmospheric processes and estimating physically meaningful integral characteristics of the observed processes (like the advective flow of latent heat through a given border). The approach is based on spatiotemporal interpolation of the satellite measurements, which allows reconstructing the radiothermal fields (as well as the fields of geophysical parameters) of the ocean-atmosphere system at global scale with spatial resolution of about 0.125° and temporal resolution of 1.5 hours. The accuracy of spatiotemporal interpolation was estimated by direct comparison of interpolated data with the data of independent asynchronous measurements and was shown to correspond to the best achievable as reported in the literature (for total precipitable water fields the accuracy is about 0.8 mm). The advantages of the implemented interpolation scheme are: closure under input radiothermal data, homogeneity in time scale (all data are interpolated through the same time intervals), and automatic estimation of both the intermediate states of the scalar field of the studied geophysical parameter and the vector field of the effective velocity of advection (horizontal movements). Using this pair of fields, one can calculate the flow of a given geophysical quantity through any given border. For example, in the case of the total precipitable water field, this flow (under proper calibration) has the meaning of latent heat advective flux. This opportunity was used to evaluate the latent heat flux through a set of circular contours enclosing a tropical cyclone and drifting with it during its evolution. A remarkable interrelation was observed between the calculated magnitude and sign of the advective latent heat flux and the intensity of a tropical cyclone. This interrelation is demonstrated in several examples of hurricanes and tropical cyclones of August 2000, and typhoons of November 2013, including super typhoon Haiyan.

  6. Light sterile neutrinos and inflationary freedom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gariazzo, S.; Giunti, C.; Laveder, M., E-mail: gariazzo@to.infn.it, E-mail: giunti@to.infn.it, E-mail: laveder@pd.infn.it

    2015-04-01

    We perform a cosmological analysis in which we allow the primordial power spectrum of scalar perturbations to assume a shape that is different from the usual power-law predicted by the simplest models of cosmological inflation. We parameterize the free primordial power spectrum with a "piecewise cubic Hermite interpolating polynomial" (PCHIP). We consider a 3+1 neutrino mixing model with a sterile neutrino having a mass at the eV scale, which can explain the anomalies observed in short-baseline neutrino oscillation experiments. We find that the freedom of the primordial power spectrum makes it possible to reconcile the cosmological data with a fully thermalized sterile neutrino in the early Universe. Moreover, the cosmological analysis gives us some information on the shape of the primordial power spectrum, which presents a feature around the wavenumber k = 0.002 Mpc⁻¹.

  7. Assessment of metal artifact reduction methods in pelvic CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki

    2016-04-15

    Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.

  8. A Project-Based Laboratory for Learning Embedded System Design with Industry Support

    ERIC Educational Resources Information Center

    Lee, Chyi-Shyong; Su, Juing-Huei; Lin, Kuo-En; Chang, Jia-Hao; Lin, Gu-Hong

    2010-01-01

    A project-based laboratory for learning embedded system design with support from industry is presented in this paper. The aim of this laboratory is to motivate students to learn the building blocks of embedded systems and practical control algorithms by constructing a line-following robot using the quadratic interpolation technique to predict the…

  9. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving image quality in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared in 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. We combine the wavelet transform, traditional matching interpolation methods, and Sobel edge detection in our algorithm, exploiting the characteristics of the wavelet transform and the Sobel operator, which deal with the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high frequencies remain undistorted. Wavelet reconstruction then yields the target interpolated image. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
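
    A stripped-down sketch of the wavelet-domain idea — decompose two adjacent slices, interpolate in the sub-band domain, reconstruct the in-between slice — is shown below using PyWavelets. Plain averaging of the sub-bands stands in for the paper's Sobel-guided matching step, so this illustrates only the decomposition/reconstruction skeleton; the wavelet choice and function name are assumptions.

    ```python
    import pywt

    def wavelet_mid_slice(slice_a, slice_b, wavelet="db1"):
        """Estimate the slice halfway between two adjacent CT slices.

        Each slice gets a 2-D DWT; corresponding sub-bands are averaged and
        the result is reconstructed. The published method instead applies a
        Sobel-guided matching interpolation in the low-frequency sub-image.
        """
        ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(slice_a, wavelet)
        ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(slice_b, wavelet)
        ca = (ca_a + ca_b) / 2.0               # low-frequency sub-image
        details = tuple((x + y) / 2.0 for x, y in
                        [(ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)])
        return pywt.idwt2((ca, details), wavelet)
    ```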

  10. Assessment and Optimization of the Accuracy of an Aircraft-Based Technique Used to Quantify Greenhouse Gas Emission Rates from Point Sources

    NASA Astrophysics Data System (ADS)

    Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.

    2016-12-01

    Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the reported hourly emission measurements made by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before CEMS data were released online to ensure unbiased analysis. Additionally, we assess the uncertainties introduced to the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.

  11. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431

  12. Determination of stratospheric temperature and height gradients from Nimbus 3 radiation data

    NASA Technical Reports Server (NTRS)

    Nicholas, G. W.; Hovland, D. N.; Belmont, A. D.

    1971-01-01

    To improve the specification of stratospheric horizontal temperature and geopotential height fields from satellite radiation data, needed for high-flying aircraft, a technique was derived to estimate data between satellite tracks using interpolated IRIS 15-micron data from Nimbus III. The interpolation is based on the observed gradients of the MRIR 15-micron radiances between subsatellite tracks. The technique was verified with radiosonde data taken within 6 hours of the satellite data. The sample varied from 1126 pairs at low levels to 383 pairs at 10 mb using northern hemisphere data for June 15 to July 20, 1969. The data were separated into five latitude bands. The RMS temperature differences were generally from 2 to 5 C for all levels above 300 mb. From 500 to 300 mb, RMS differences vary from 4 to 9 C, except at high latitudes, which show values near 3 C. The RMS differences between radiosonde heights and those calculated hydrostatically from the surface were from 30 to 280 meters, increasing from the surface to 10 mb. Integration starting at 100 mb reduced the RMS difference in the stratosphere to 20 to 120 meters from 70 to 10 mb. From a comparison with actual operational maps at 50 and 10 mb, it appears the techniques developed produce analyses in general agreement with those from radiosonde data. In addition, they are able to indicate details over areas of sparse data not shown by conventional techniques.

  13. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country, co-category, co-citation, and keyword co-occurrence networks. It is found that spatial interpolation research has passed through three stages: slow, steady, and rapid development. Eleven clustering groups show cross effects; their main areas of convergence are spatial interpolation theory, the practical application and case study of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of this research. Spatial interpolation research has formed a theoretical basis and a research-system framework; it is strongly interdisciplinary and is widely used in various fields.

  14. Testing methods to produce landscape-scale presettlement vegetation maps from the U.S. public land survey records

    USGS Publications Warehouse

    Manies, K.L.; Mladenoff, D.J.

    2000-01-01

    The U.S. Public Land Survey (PLS) notebooks are one of the best records of the pre-European settlement landscape and are widely used to recreate presettlement vegetation maps. The purpose of this study was to evaluate the relative ability of several interpolation techniques to map this vegetation, as sampled by the PLS surveyors, at the landscape level. Field data from Sylvania Wilderness Area, MI (U.S.A.), sampled at the same scale as the PLS data, were used for this test. Sylvania comprises a forested landscape similar to that present during presettlement times. Data were analyzed using two Arc/Info interpolation processes and indicator kriging. The resulting maps were compared to a 'correct' map of Sylvania, which was classified from aerial photographs. We found that while the interpolation methods used accurately estimated the relative forest composition of the landscape and the order of dominance of different vegetation types, they were unable to accurately estimate the actual area occupied by each vegetation type. Nor were any of the methods we tested able to recreate the landscape patterns found in the natural landscape. The most likely cause for these inabilities is the scale at which the field data (and hence the PLS data) were recorded. Therefore, these interpolation methods should not be used with the PLS data to recreate pre-European settlement vegetation at small scales (e.g., less than several townships or areas < 10⁴ ha). Recommendations are given for ways to increase the accuracy of these vegetation maps.

  15. Super-resolution reconstruction for 4D computed tomography of the lung via the projections onto convex sets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei

    2014-11-01

    Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior–inferior direction. This disadvantage results in an interslice thickness that is much greater than in-plane voxel resolutions. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input “frames” to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different “frames.” Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. Their method can generate clearer lung images and enhance image structure compared with the cubic spline interpolation and back projection (BP) methods. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% versus BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.

  16. Topsoil pollution forecasting using artificial neural networks on the example of the abnormally distributed heavy metal at Russian subarctic

    NASA Astrophysics Data System (ADS)

    Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.

    2017-06-01

    Forecasting soil pollution is a considerable field of study in light of the general concern with environmental protection issues. Due to the variability of content and the spatial heterogeneity of pollutant distributions in urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide adequate interpolation accuracy. Moreover, predicting the distribution of an element whose concentration is highly variable across the study site is particularly difficult. The work presents two neural network models forecasting the spatial content of the abnormally distributed soil pollutant chromium (Cr) at a site in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques have been built, implemented and tested using ArcGIS and MATLAB. To verify the models' performances, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km² area and then split into an independent training data set (105 points) and a validation data set (45 points). The training data set was used for the interpolation by ordinary kriging, while the validation data set was used to test accuracies. The network structures were chosen during a computer simulation based on the minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that MLP can achieve better accuracy than both kriging and even GRNN for interpolating such surfaces.
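
    A GRNN, in its basic form, is Gaussian-kernel regression: every training sample contributes to a prediction with a weight that decays with distance, controlled by a single smoothing parameter. Below is a minimal Python/NumPy sketch; the smoothing value, the random toy data, and the function name are assumptions, not the paper's MATLAB configuration.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """GRNN prediction: Gaussian-weighted average of training targets
        (Nadaraya-Watson form), with one pattern unit per training sample."""
        preds = []
        for q in X_query:
            d2 = np.sum((X_train - q) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma**2))     # pattern-layer activations
            preds.append(np.sum(w * y_train) / np.sum(w))
        return np.array(preds)

    # Toy stand-in: concentrations at 105 training points, 3 query points
    rng = np.random.default_rng(2)
    X_tr, y_tr = rng.random((105, 2)), rng.random(105) * 100.0
    print(grnn_predict(X_tr, y_tr, rng.random((3, 2))))
    ```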

  17. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although the interpolation method based on Gaussian radial basis functions (GRBF) has high precision, the long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the operating efficiency of image GRBF interpolation based on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value for image interpolation applications.
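
    Independent of the CUDA specifics, GRBF interpolation itself reduces to fitting kernel weights on the known samples and evaluating Gaussian kernels at the query points. A CPU NumPy reference sketch follows (1-D for brevity); the shape parameter and function name are assumptions, and a real implementation would add the blockwise data-space strategy described above.

    ```python
    import numpy as np

    def grbf_interpolate(x_known, f_known, x_query, eps=1.0):
        """Gaussian RBF interpolation: solve for kernel weights on the known
        samples, then evaluate the weighted kernels at the query points."""
        d = np.abs(x_known[:, None] - x_known[None, :])
        K = np.exp(-(eps * d) ** 2)            # Gaussian kernel matrix
        w = np.linalg.solve(K, f_known)        # interpolation weights
        dq = np.abs(x_query[:, None] - x_known[None, :])
        return np.exp(-(eps * dq) ** 2) @ w

    x = np.linspace(0.0, 4.0, 9)
    print(grbf_interpolate(x, np.sin(x), np.array([0.5, 2.5])))  # ~sin values
    ```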

  18. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation

    PubMed Central

    Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381

  19. Interpolating of climate data using R

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja

    2017-04-01

    Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology, and meteorology, to estimate unknown values from known observations by statistical means. So far, the majority of climatologists have used computer languages such as FORTRAN or C++, but an increasing number of climate scientists use R for data processing and visualization. Most of them, however, still work with arrays and vector-based data, which often leads to complex R code. For the presented study, I decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat, and similar packages, which provide a much more convenient way of handling the data. A central goal of my approach is to create an easy-to-use, powerful, and fast R script that implements the entire geodata processing and visualization as a single, fully automated R-based procedure, avoiding the need for other software packages such as ArcGIS or QGIS. Thus, large amounts of data with recurrent processing sequences can be handled. The aim of the presented study, located in western Central Asia, is to interpolate wind data based on the Era-Interim European reanalysis, available as raster data with a resolution of 0.75° × 0.75°, to a finer grid. Several interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, a generalized additive model, and the machine learning algorithms support vector machine and neural networks. Except for the first two methods, all are used with influencing factors, e.g., geopotential and topography.
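
    The study itself is carried out in R with raster and gstat; as a language-neutral illustration of the simplest method named above, the following Python sketch regrids coarse values to a finer grid with inverse distance weighting. The grid spacings and the toy wind field are assumptions.

      import numpy as np

      def idw_regrid(xy_known, z_known, xy_query, power=2.0):
          """Inverse distance weighting from coarse grid nodes to query
          points (a minimal sketch of one method named in the abstract)."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          d = np.maximum(d, 1e-12)              # avoid division by zero
          w = 1.0 / d ** power
          return (w @ z_known) / w.sum(axis=1)

      # Coarse 0.75-degree grid (here just 4x4 cells) to a finer 0.25-degree grid.
      gx, gy = np.meshgrid(np.arange(0, 3.1, 0.75), np.arange(0, 3.1, 0.75))
      known = np.column_stack([gx.ravel(), gy.ravel()])
      wind = np.hypot(known[:, 0], known[:, 1])          # toy wind speeds
      fx, fy = np.meshgrid(np.arange(0, 3.1, 0.25), np.arange(0, 3.1, 0.25))
      fine = idw_regrid(known, wind, np.column_stack([fx.ravel(), fy.ravel()]))
      print(fine.reshape(fx.shape).shape)                # (13, 13)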

  20. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation.

    PubMed

    Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by the DW-MR data itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depend on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations, thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) the non-diffusion-weighted scalar volume (b0), and (iii) the diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume- and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis.

  1. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

    Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection, based on the CUDA technology developed by NVIDIA. Instead of building a complex model, we focus on optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain computation speed. Besides using texture fetching, which accelerates interpolation, we fix the number of samples in the projection computation to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.

  2. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
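
    The sparse-to-dense MVF step lends itself to a compact sketch. Below, a thin-plate spline interpolant is fitted to toy 3D displacement vectors at 50 control points and evaluated on a voxel grid with SciPy; the control-point count, the synthetic displacements, and the grid are assumptions, and the paper evaluates TPS alongside three other schemes.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Sparse 3D motion vectors at surface control points (toy data).
      rng = np.random.default_rng(0)
      ctrl = rng.uniform(0, 100, size=(50, 3))                 # control points
      mvf_sparse = 0.05 * ctrl + rng.normal(0, 0.1, (50, 3))   # displacements

      # Thin-plate spline interpolation; RBFInterpolator accepts
      # vector-valued data directly, so all three components fit at once.
      tps = RBFInterpolator(ctrl, mvf_sparse, kernel='thin_plate_spline')

      # Evaluate the dense MVF on a coarse voxel grid.
      gx, gy, gz = np.mgrid[0:100:10j, 0:100:10j, 0:100:10j]
      dense = tps(np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()]))
      print(dense.shape)   # (1000, 3): one 3D motion vector per voxel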

  3. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
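
    Accuracy comparisons of this kind usually rest on leave-one-out cross-validation. The sketch below compares IDW (power 2) against a thin-plate-spline RBF on synthetic samples by LOOCV RMSE; the sample data are toy assumptions, and ordinary kriging is omitted to keep the sketch dependency-light.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def idw(train_xy, train_z, query_xy, power=2.0):
          d = np.maximum(np.linalg.norm(query_xy - train_xy, axis=1), 1e-12)
          w = 1.0 / d ** power
          return (w * train_z).sum() / w.sum()

      def loocv_rmse(xy, z, predict):
          """Leave-one-out RMSE, the usual yardstick for interpolators."""
          errs = []
          for i in range(len(z)):
              mask = np.arange(len(z)) != i
              errs.append(predict(xy[mask], z[mask], xy[i]) - z[i])
          return float(np.sqrt(np.mean(np.square(errs))))

      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 10, (80, 2))                   # toy sample locations
      z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=80)   # toy concentrations

      rmse_idw = loocv_rmse(xy, z, lambda X, Z, q: idw(X, Z, q, power=2))
      rmse_tps = loocv_rmse(
          xy, z,
          lambda X, Z, q: float(
              RBFInterpolator(X, Z, kernel='thin_plate_spline')(q[None])[0]))
      print(f"IDW RMSE: {rmse_idw:.3f}  TPS-RBF RMSE: {rmse_tps:.3f}")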

  4. Tracking the dispersion of Scaphoideus titanus Ball (Hemiptera: Cicadellidae) from wild to cultivated grapevine: use of a novel mark-capture technique.

    PubMed

    Lessio, F; Tota, F; Alma, A

    2014-08-01

    The dispersion of Scaphoideus titanus Ball adults from wild to cultivated grapevines was studied using a novel mark-capture technique. The crowns of wild grapevines located at distances from vineyards ranging from 5 to 330 m were sprayed with a water solution of either cow milk (marker: casein) or chicken egg whites (marker: albumin), and insects captured in yellow sticky traps placed on the canopy of grapes were analyzed via an indirect ELISA for marker identification. Data were subjected to exponential regression as a function of distance from the wild grapevine, and to spatial interpolation (Inverse Distance Weighted and Kernel interpolation with barriers) using ArcGIS Desktop 10.1 software. The influence of rainfall and of time elapsed after marking on marker effectiveness, and the differing dispersion of males and females, were studied with regression analyses. Of a total of 5417 insects analyzed, 43% were positive for egg, whereas 18% of the 536 tested were marked with milk. No influence of rainfall or elapsed time was observed for the egg marker, whereas the milk marker was affected by time. Males and females showed no difference in dispersal. Marked adults decreased exponentially with distance from the wild grapevine, and up to 80% of them were captured within 30 m. However, there was evidence of long-range dispersal up to 330 m. The interpolation maps showed a clear clustering of marked S. titanus close to the treated wild grapevines, and the pathways to the vineyards did not always follow straight lines but mainly ecological corridors. S. titanus adults are therefore capable of dispersing from wild to cultivated grapevine, and this may affect pest management strategies.
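
    The exponential regression of captures against distance reduces to a one-line model fit. The sketch below fits N(d) = N0 exp(-kd) with SciPy's curve_fit on invented trap counts; the numbers are purely illustrative, not the paper's data.

      import numpy as np
      from scipy.optimize import curve_fit

      # Exponential decline of marked captures with distance from the wild
      # grapevine (toy numbers; the paper fits real trap counts).
      def decay(d, n0, k):
          return n0 * np.exp(-k * d)

      dist = np.array([5, 15, 30, 60, 120, 200, 330], float)   # metres
      captures = np.array([140, 90, 45, 20, 8, 3, 1], float)

      (n0, k), _ = curve_fit(decay, dist, captures, p0=(150, 0.05))
      half_dist = np.log(2) / k
      print(f"N0={n0:.1f}, k={k:.4f} 1/m, captures halve every {half_dist:.1f} m")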

  5. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were incorporated to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
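
    The BAV selection step, which precedes the Cokriging itself, amounts to ranking candidate covariates by their correlation with the target nutrient. A minimal sketch with invented soil variables:

      import numpy as np

      def select_bavs(target, candidates, names, k=2):
          """Rank candidate auxiliary variables by |Pearson r| with the
          target nutrient and keep the top k as BAVs (a sketch of the
          selection step only, not of the Cokriging system itself)."""
          r = [abs(np.corrcoef(target, c)[0, 1]) for c in candidates]
          order = np.argsort(r)[::-1]
          return [(names[i], round(r[i], 3)) for i in order[:k]]

      rng = np.random.default_rng(2)
      n = 670                                  # sample count from the abstract
      elevation = rng.normal(100, 20, n)
      slope = rng.normal(5, 2, n)
      zinc = rng.normal(50, 10, n)
      organic_matter = 0.02 * elevation + 0.3 * zinc + rng.normal(0, 2, n)

      print(select_bavs(organic_matter,
                        [elevation, slope, zinc],
                        ["elevation", "slope", "Zn"]))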

  6. Monotonicity preserving splines using rational cubic Timmer interpolation

    NASA Astrophysics Data System (ADS)

    Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md

    2017-08-01

    In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity, or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper, a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. Three parameters are introduced to control the shape of the interpolant. The shape parameters in the description of the rational cubic interpolant are subjected to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
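
    The Timmer-form rational cubic is not available in standard libraries, but the motivation, that an unconstrained cubic overshoots monotone data while a shape-preserving interpolant does not, can be shown with SciPy's PCHIP as a stand-in (PCHIP is a different monotone scheme, not the rational cubic Timmer interpolant of the paper):

      import numpy as np
      from scipy.interpolate import PchipInterpolator, CubicSpline

      # Monotone data with a sharp rise: an unconstrained cubic spline
      # overshoots, while a monotonicity-preserving interpolant does not.
      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

      xs = np.linspace(0, 4, 401)
      spline = CubicSpline(x, y)(xs)
      pchip = PchipInterpolator(x, y)(xs)

      print("cubic spline monotone?", bool(np.all(np.diff(spline) >= -1e-12)))
      print("PCHIP monotone?       ", bool(np.all(np.diff(pchip) >= -1e-12)))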

  7. Interpolatability distinguishes LOCC from separable von Neumann measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, Andrew M.; Leung, Debbie; Mančinska, Laura

    2013-11-15

    Local operations with classical communication (LOCC) and separable operations are two classes of quantum operations that play key roles in the study of quantum entanglement. Separable operations are strictly more powerful than LOCC, but no simple explanation of this phenomenon is known. We show that, in the case of von Neumann measurements, the ability to interpolate measurements is an operational principle that sets apart LOCC and separable operations.

  8. Use of ordinary kriging and Gaussian conditional simulation to interpolate airborne fire radiative energy density estimates

    Treesearch

    C. Klauberg; A. T. Hudak; B. C. Bright; L. Boschetti; M. B. Dickinson; R. L. Kremens; C. A. Silva

    2018-01-01

    Fire radiative energy density (FRED, J m⁻²) integrated from fire radiative power density (FRPD, W m⁻²) observations of landscape-level fires can present an undersampling problem when collected from fixed-wing aircraft. In the present study, the aircraft made multiple passes over the fire at ~3 min intervals, thus failing to observe most of the FRPD emitted as the flame...

  9. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is effective in terms of both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments were conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy was compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
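
    Step (ii), projecting shifted LR pixels onto the HR grid, can be sketched with scattered-data interpolation. Below, four toy LR images with known subpixel shifts are scattered onto a 2x grid and resampled with SciPy's griddata; in the actual algorithm the shifts come from the registration step rather than being known in advance.

      import numpy as np
      from scipy.interpolate import griddata

      def project_to_hr(lr_images, shifts, factor=2):
          """Place each LR image's pixel centers on the HR grid, offset by
          its subpixel shift, then interpolate the scattered samples."""
          h, w = lr_images[0].shape
          pts, vals = [], []
          for img, (dy, dx) in zip(lr_images, shifts):
              yy, xx = np.mgrid[0:h, 0:w].astype(float)
              pts.append(np.column_stack([(yy + dy).ravel() * factor,
                                          (xx + dx).ravel() * factor]))
              vals.append(img.ravel())
          hy, hx = np.mgrid[0:h * factor, 0:w * factor]
          return griddata(np.vstack(pts), np.concatenate(vals),
                          (hy, hx), method='linear')

      # Four LR images with four independent subpixel shifts, as the
      # abstract notes is needed for a factor-of-two reconstruction.
      rng = np.random.default_rng(3)
      lr = [rng.random((16, 16)) for _ in range(4)]
      shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
      print(project_to_hr(lr, shifts).shape)   # (32, 32)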

  10. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. a.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related data assimilation techniques based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating such additional information are potentially superior to techniques that have no access to it, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
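
    The OI analysis step is compact in matrix form: the analysis x_a equals the background x_b plus the gain K = B Hᵀ (H B Hᵀ + R)⁻¹ applied to the innovation y - H x_b. A toy numpy sketch on a five-point grid with two observations (all covariances are assumed for illustration):

      import numpy as np

      # Optimal interpolation analysis: x_a = x_b + K (y - H x_b),
      # with gain K = B H^T (H B H^T + R)^(-1). Toy 1D grid, two obs.
      n = 5
      x_b = np.zeros(n)                          # background (first guess)
      L = 1.5                                    # correlation length scale
      i = np.arange(n)
      B = np.exp(-((i[:, None] - i[None, :]) / L) ** 2)  # background covariance
      H = np.zeros((2, n)); H[0, 1] = 1.0; H[1, 3] = 1.0 # observe points 1 and 3
      R = 0.1 * np.eye(2)                        # observation error covariance
      y = np.array([1.0, -0.5])                  # observations

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
      x_a = x_b + K @ (y - H @ x_b)
      print(np.round(x_a, 3))  # obs information spreads into the data voids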

  11. Wave refraction diagrams for the Baltimore Canyon region of the mid-Atlantic continental shelf computed by using three bottom topography approximation techniques

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1976-01-01

    The Langley Research Center and Virginia Institute of Marine Science wave refraction computer model was applied to the Baltimore Canyon region of the mid-Atlantic continental shelf. Wave refraction diagrams for a wide range of normally expected wave periods and directions were computed by using three bottom topography approximation techniques: quadratic least squares, cubic least squares, and constrained bicubic interpolation. Mathematical or physical interpretation of certain features appearing in the computed diagrams is discussed.
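
    The first of the three approximation techniques, quadratic least squares, amounts to fitting a six-term surface to depth soundings. A minimal sketch with synthetic bathymetry:

      import numpy as np

      def fit_quadratic_surface(x, y, z):
          """Least-squares quadratic surface z ~ a + bx + cy + dx^2 + exy + fy^2,
          the first of the three bottom-topography approximations named above."""
          A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
          coef, *_ = np.linalg.lstsq(A, z, rcond=None)
          return coef

      rng = np.random.default_rng(4)
      x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
      depth = 30 + 2 * x - 0.5 * y + 0.1 * x**2 + rng.normal(0, 0.5, 200)
      print(np.round(fit_quadratic_surface(x, y, depth), 2))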

  12. On the design of decoupling controllers for advanced rotorcraft in the hover case

    NASA Technical Reports Server (NTRS)

    Fan, M. K. H.; Tits, A.; Barlow, J.; Tsing, N. K.; Tischler, M.; Takahashi, M.

    1991-01-01

    A methodology for design of helicopter control systems is proposed that can account for various types of concurrent specifications: stability, decoupling between longitudinal and lateral motions, handling qualities, and physical limitations of the swashplate motions. This is achieved by synergistic use of analytical techniques (Q-parameterization of all stabilizing controllers, transfer function interpolation) and advanced numerical optimization techniques. The methodology is used to design a controller for the UH-60 helicopter in hover. Good results are achieved for decoupling and handling quality specifications.

  13. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
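
    LIP's two basic facilities, forward bilinear interpolation on a rectangular mesh and inverse interpolation in the second variable, can be mimicked with standard tools. The sketch below uses SciPy rather than LIP's C API (which this report documents); the tabulated function and target value are assumptions.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator
      from scipy.optimize import brentq

      # Rectangular (x, y) mesh with tabulated f(x, y), as in LIP's bilinear mode.
      x = np.linspace(0.0, 4.0, 9)
      y = np.linspace(0.0, 2.0, 5)
      X, Y = np.meshgrid(x, y, indexing='ij')
      F = X * np.exp(Y)                                  # toy tabulated data

      interp = RegularGridInterpolator((x, y), F, method='linear')

      # Forward interpolation at an arbitrary point.
      print(float(interp([[1.3, 0.7]])))

      # Inverse interpolation in the second variable: given x0 and target f0,
      # solve f(x0, y) = f0 for y (LIP offers this facility; here via brentq).
      x0, f0 = 1.3, 2.0
      y_inv = brentq(lambda yy: float(interp([[x0, yy]])) - f0, y[0], y[-1])
      print(y_inv)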

  14. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

    This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  15. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

    Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution needed to accurately capture small-scale features that affect heat and moisture fluxes. Here, NASA satellite data are used to produce a high spatial and temporal resolution SST analysis with an OI technique.

  16. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(x_i, f_i): i = 1, 2, ..., n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth, with minimum oscillation between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism that overcomes this limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real-image results show that the mechanism has promising performance, even with very few example images.

  17. Data-driven in computational plasticity

    NASA Astrophysics Data System (ADS)

    Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.

    2018-05-01

    Computational mechanics is taking on enormous importance in industry nowadays. On one hand, numerical simulations allow industry to perform fewer experiments, reducing costs. On the other hand, the physical processes to be simulated are becoming more complex, requiring new constitutive relationships to capture their behavior. Therefore, when a new material is to be characterized, an open question remains: which constitutive equation should be calibrated. In the present work, model order reduction techniques are exploited to identify the plastic behavior of a material, opening an alternative route to traditional calibration methods. Indeed, the main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Once the experimental results and the parameterization of the plastic yield function are provided, finding the optimal yield function can be seen as either a traditional optimization problem or an interpolation problem. It is important to highlight that the dimensionality of the problem equals the number of dimensions in the parameterization of the yield function; thus, the use of sparse interpolation techniques seems almost compulsory.

  18. Reducible dictionaries for single image super-resolution based on patch matching and mean shifting

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza

    2017-03-01

    A single-image super-resolution (SR) method is proposed. The method uses a dictionary generated from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, the HR images and the corresponding LR ones are divided into HR and LR patches, respectively, which are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and the available LR patches in the LR dictionary is calculated. The LR dictionary patch at minimum distance from the input LR patch is taken, and its counterpart from the HR dictionary is passed through an illumination enhancement process. This technique significantly reduces the noticeable change of illumination between neighboring patches in the super-resolved image. The enhanced HR patch represents the HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over conventional and state-of-the-art methods.

  19. Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data

    NASA Technical Reports Server (NTRS)

    Bose, Tamal

    2000-01-01

    A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.

  20. CUDA GPU based full-Stokes finite difference modelling of glaciers

    NASA Astrophysics Data System (ADS)

    Brædstrup, C. F.; Egholm, D. L.

    2012-04-01

    Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption, and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equations in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers converge much faster. To aid convergence, the solver uses a multigrid approach in which values are interpolated and extrapolated between different grid resolutions to minimize the short-wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run time is further reduced by using GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported speedups of 2-11 times compared with multicore CPU implementations of similar problems. The goal of these initial investigations into the possible use of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage outlets of large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
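
    The red-black ordering is what makes Gauss-Seidel parallel: all nodes of one color depend only on nodes of the other color, so each color can update simultaneously, exactly as GPU threads would. A numpy sketch on a Poisson model problem (a stand-in for the full-Stokes operator, which additionally couples velocity components and pressure):

      import numpy as np

      def red_black_gauss_seidel(u, f, h, sweeps=200):
          """Red-black Gauss-Seidel for the 2D Poisson equation. All nodes
          of one color update together, which is what maps the scheme
          naturally onto GPU threads (one thread per same-color node)."""
          for _ in range(sweeps):
              for color in (0, 1):
                  i, j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                                     np.arange(1, u.shape[1] - 1), indexing='ij')
                  mask = ((i + j) % 2) == color
                  ii, jj = i[mask], j[mask]
                  u[ii, jj] = 0.25 * (u[ii + 1, jj] + u[ii - 1, jj] +
                                      u[ii, jj + 1] + u[ii, jj - 1] -
                                      h * h * f[ii, jj])
          return u

      n = 33
      u = np.zeros((n, n))                  # boundary values fixed at zero
      f = np.ones((n, n))                   # constant source term
      u = red_black_gauss_seidel(u, f, h=1.0 / (n - 1))
      print(round(float(u[n // 2, n // 2]), 5))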

  1. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, it can be simplified, and the effect of interpolation errors can be diminished. This provides the possibility of both reducing the re-grid processing time and improving the image quality.

  2. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    PubMed Central

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-01-01

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886

  3. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.

    PubMed

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-05-16

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.

  4. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there are very few efforts of modeling structures that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population, Twitter geo-located data, and locations of schools as ancillary data to redistribute the source data more accurately in space, and finally gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data in smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
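
    The density-weighted areal interpolation itself reduces to redistributing each source zone's count over its target cells in proportion to the ancillary density. A minimal sketch with invented tract counts and tweet densities:

      import numpy as np

      def density_weighted_interpolation(source_counts, membership, ancillary):
          """Disaggregate each source zone's population to its target cells
          in proportion to an ancillary density (e.g., geolocated tweets).
          membership[i] gives the source zone of target cell i."""
          out = np.zeros_like(ancillary, dtype=float)
          for z, total in enumerate(source_counts):
              cells = np.where(membership == z)[0]
              w = ancillary[cells].astype(float)
              w = w / w.sum() if w.sum() > 0 else np.full(len(cells), 1 / len(cells))
              out[cells] = total * w
          return out

      # Two census tracts (1000 and 400 people) split over six grid cells.
      source_counts = np.array([1000, 400])
      membership = np.array([0, 0, 0, 1, 1, 1])      # cell -> tract
      tweets = np.array([50, 30, 20, 0, 5, 15])      # ancillary activity per cell
      print(density_weighted_interpolation(source_counts, membership, tweets))
      # -> [500. 300. 200.   0. 100. 300.]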

  5. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.

    2003-01-01

    The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
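
    A sketch of the key-generation step: interleaving cell-index bits gives a Z-order (Morton) key, and a single sort by this key produces the locality-preserving ordering that the single-pass algorithms exploit. (The paper's SFC may differ in detail; Morton is the simplest member of the family.)

      def morton_key(i, j, bits=16):
          """Interleave the bits of cell indices (i, j) to get a Z-order
          (Morton) space-filling-curve key; sorting cells by this key
          yields a locality-preserving one-dimensional ordering."""
          key = 0
          for b in range(bits):
              key |= ((i >> b) & 1) << (2 * b + 1)
              key |= ((j >> b) & 1) << (2 * b)
          return key

      # Order the cells of a 4x4 Cartesian mesh along the Z-curve.
      cells = [(i, j) for i in range(4) for j in range(4)]
      cells.sort(key=lambda c: morton_key(*c))
      print(cells[:6])  # neighbors on the curve are usually neighbors in space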

  6. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. (© 2005 APA, all rights reserved).

  7. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    NASA Astrophysics Data System (ADS)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions, may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. The global distribution of seafloor TOC at 1 × 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
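
    The estimation machinery is standard KNN regression with cross-validation. The sketch below, with invented predictors and TOC values, shows the shape of the workflow (scikit-learn's KNeighborsRegressor standing in for the paper's implementation, which adds predictor selection and the inexperience measure):

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.model_selection import cross_val_score

      # Toy stand-in for the workflow: predict seafloor TOC from widely
      # known predictors at the sparse locations where TOC was measured,
      # validating with ten folds as in the abstract.
      rng = np.random.default_rng(5)
      n = 500
      bathy = rng.uniform(-6000, 0, n)              # water depth (m)
      dist_coast = rng.uniform(0, 2000, n)          # distance to coast (km)
      toc = 2.5 * np.exp(-dist_coast / 500) + rng.normal(0, 0.1, n)

      X = np.column_stack([bathy, dist_coast])
      knn = KNeighborsRegressor(n_neighbors=5, weights='distance')
      scores = cross_val_score(knn, X, toc, cv=10, scoring='r2')
      print(f"10-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")

      # Fit on all data, then predict TOC where no cores exist.
      knn.fit(X, toc)
      print(knn.predict([[-4500.0, 800.0]]))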

  8. Surface electric fields for North America during historical geomagnetic storms

    USGS Publications Warehouse

    Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.

    2013-01-01

    To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.

  9. Personal computer (PC) based image processing applied to fluid mechanics research

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.

  10. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
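
    The final interpolation step, a convolution with a Gaussian window, is easy to sketch. Below, scattered toy velocity samples are averaged onto uniform grid points with Gaussian weights; the adaptive window of the abstract is reduced to a fixed sigma for brevity.

      import numpy as np

      def gaussian_window_interp(pts, vals, grid_pts, sigma):
          """Interpolate scattered velocity samples onto grid points with a
          Gaussian-weighted average (the simple convolution described in
          the abstract; the adaptive width is simplified to fixed sigma)."""
          d2 = ((grid_pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-d2 / (2 * sigma ** 2))
          return (w @ vals) / w.sum(axis=1)[:, None]

      rng = np.random.default_rng(6)
      pts = rng.uniform(0, 1, (300, 2))                      # streak midpoints
      vel = np.column_stack([np.sin(2 * np.pi * pts[:, 1]),  # toy u component
                             np.cos(2 * np.pi * pts[:, 0])]) # toy v component
      gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      uv = gaussian_window_interp(pts, vel, grid, sigma=0.08)
      print(uv.shape)   # (121, 2): u and v at each uniform grid point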

  11. An Interpolation Method for Obtaining Thermodynamic Properties Near Saturated Liquid and Saturated Vapor Lines

    NASA Technical Reports Server (NTRS)

    Nguyen, Huy H.; Martin, Michael A.

    2004-01-01

    The two most common approaches used to formulate thermodynamic properties of pure substances are fundamental (or characteristic) equations of state (Helmholtz and Gibbs functions) and the piecemeal approach described in Adebiyi and Russell (1992). This paper neither presents a different method of formulating thermodynamic properties of pure substances nor validates the aforementioned approaches. Rather, its purpose is to present a method for generating property tables from existing property packages and a method to facilitate the accurate interpretation of fluid thermodynamic property data from those tables. There are two parts to this paper. The first part shows how efficient and usable property tables were generated, with the minimum number of data points, using an aerospace industry standard property package. The second part describes an innovative interpolation technique that has been developed to properly obtain thermodynamic properties near the saturated liquid and saturated vapor lines.

  12. Spatiotemporal video deinterlacing using control grid interpolation

    NASA Astrophysics Data System (ADS)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive-format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has made higher quality but more computationally intense deinterlacing algorithms practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between, and weight, spatial and temporal deinterlacing methods based on control grid interpolation. The proposed approaches perform better than the prior state of the art in terms of peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the effect of soft and hard decision thresholds on the visual performance.

  13. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  14. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  15. Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.

    2017-05-01

    The 2D non-separable linear canonical transform (2D-NS-LCT) can model a wide range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in affine coordinates rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.

  16. Ball-morph: definition, implementation, and comparative evaluation.

    PubMed

    Whited, Brian; Rossignac, Jaroslaw Jarek

    2011-06-01

    We define b-compatibility for planar curves and propose three ball-morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves with the same angle, which is a right angle for the circular and parabolic morphs. We provide simple constructions for these ball-morphs and compare them to each other and to other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph consistently has the shortest travel-distance and the circular ball-morph has the least amount of distortion.

  17. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included maximum dV/dt and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
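
    A minimal sketch of sin(x)/x (Whittaker-Shannon) interpolation for upsampling a uniformly sampled electrogram-like signal; the variable names and test waveform are illustrative only.

        import numpy as np

        def sinc_interpolate(samples, fs, t_new):
            """Band-limited reconstruction at arbitrary times t_new."""
            n = np.arange(len(samples))
            kernel = np.sinc(fs * t_new[:, None] - n[None, :])
            return kernel @ samples

        fs = 500.0                                    # original sampling rate, Hz
        t = np.arange(0.0, 0.2, 1.0 / fs)
        x = np.exp(-((t - 0.1) ** 2) / 2e-5)          # sharp deflection
        t_fine = np.arange(0.0, 0.2, 1.0 / (4 * fs))  # 4x upsampling grid
        x_fine = sinc_interpolate(x, fs, t_fine)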

  18. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
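
    A compact illustration of this viewpoint (our example, not from the article): the interpolation conditions form a linear system whose matrix depends on the chosen basis, and with the monomial basis that matrix is the Vandermonde matrix.

        import numpy as np

        x = np.array([0.0, 1.0, 2.0, 4.0])        # interpolation nodes
        y = np.array([1.0, 3.0, 2.0, 5.0])        # prescribed values
        V = np.vander(x, increasing=True)         # columns 1, x, x^2, x^3
        c = np.linalg.solve(V, y)                 # coefficients in monomial basis
        assert np.allclose(V @ c, y)              # interpolant matches the data

    Choosing the Lagrange or Newton basis changes V to a better-conditioned matrix but describes the same polynomial.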

  19. A Climatology of Global Precipitation.

    NASA Astrophysics Data System (ADS)

    Legates, David Russell

    A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.

  20. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
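
    The core operation of n-simplex interpolation is barycentric weighting within the simplex that contains the query point. A minimal sketch (the simplex lookup is omitted; array shapes are assumptions):

        import numpy as np

        def simplex_interpolate(vertices, values, p):
            """Linear interpolation inside one n-simplex.

            vertices: (n+1, n) simplex vertex coordinates (e.g., RGB nodes).
            values:   (n+1, m) outputs stored at the vertices (e.g., CIELAB).
            p:        (n,) query point assumed to lie inside the simplex.
            """
            v0 = vertices[0]
            T = (vertices[1:] - v0).T                 # n x n edge matrix
            lam_rest = np.linalg.solve(T, p - v0)     # barycentric coords 1..n
            lam = np.concatenate(([1.0 - lam_rest.sum()], lam_rest))
            return lam @ values                       # weighted vertex outputs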

  1. The moving-least-squares-particle hydrodynamics method (MLSPH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
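
    A 1-D sketch of the moving-least-squares construction at the heart of the method: at each evaluation point, a low-degree polynomial is fitted by least squares with weights that decay with distance, and the fit's value at that point is taken. The Gaussian window and parameters are illustrative, not those of the paper.

        import numpy as np

        def mls_approximate(x_nodes, f_nodes, x_eval, h=0.5, degree=1):
            """Moving-least-squares approximation on scattered 1-D data."""
            x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
            out = np.empty(len(x_eval))
            for i, xe in enumerate(x_eval):
                w = np.exp(-((x_nodes - xe) / h) ** 2)   # locality weights
                A = np.vander(x_nodes - xe, degree + 1, increasing=True)
                AtW = A.T * w                            # weighted normal eqs.
                c = np.linalg.solve(AtW @ A, AtW @ f_nodes)
                out[i] = c[0]                            # centered basis: value at xe
            return out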

  2. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  3. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  4. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
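
    A simplified sketch of the gradient-guided principle (not the authors' exact algorithm): fill each unknown pixel by averaging along whichever axis shows the smaller intensity change, so that interpolation does not cross edges. A full DoFP pipeline would apply this per polarization channel, possibly over several passes as neighbors become available.

        import numpy as np

        def gradient_guided_fill(img, known):
            """One pass over unknown pixels with measured axis neighbors."""
            out = img.astype(float).copy()
            H, W = img.shape
            for y in range(1, H - 1):
                for x in range(1, W - 1):
                    if known[y, x]:
                        continue
                    h_ok = known[y, x - 1] and known[y, x + 1]
                    v_ok = known[y - 1, x] and known[y + 1, x]
                    if not (h_ok or v_ok):
                        continue                      # leave for a later pass
                    gh = abs(out[y, x - 1] - out[y, x + 1]) if h_ok else np.inf
                    gv = abs(out[y - 1, x] - out[y + 1, x]) if v_ok else np.inf
                    if gh <= gv:                      # smoother horizontally
                        out[y, x] = 0.5 * (out[y, x - 1] + out[y, x + 1])
                    else:
                        out[y, x] = 0.5 * (out[y - 1, x] + out[y + 1, x])
            return out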

  5. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
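
    The structure tensor ingredient can be sketched compactly: smooth the outer products of the image gradient and read off the dominant local orientation, along which a directional interpolator would place its samples. This is a generic sketch, not the paper's sinogram-specific estimator.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def local_orientation(img, sigma=2.0):
            """Per-pixel orientation angle from a smoothed structure tensor."""
            gy, gx = np.gradient(img.astype(float))
            Jxx = gaussian_filter(gx * gx, sigma)
            Jxy = gaussian_filter(gx * gy, sigma)
            Jyy = gaussian_filter(gy * gy, sigma)
            # double-angle formula for the dominant eigen-direction
            return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)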

  6. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_ij(u, v) = Σ_mn h_mn H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson-extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling the MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
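
    To make Topic 1 concrete, here is a minimal 1-D Gaussian RBF interpolation sketch in which the shape parameter ε may vary per node, the mechanism used above to suppress Runge-type edge effects. The parameter values and kernel choice are illustrative.

        import numpy as np

        def rbf_interpolate(x_nodes, f_nodes, x_eval, eps):
            """Gaussian RBF interpolant; eps is scalar or one value per node."""
            eps = np.broadcast_to(np.asarray(eps, dtype=float), x_nodes.shape)
            phi = lambda r, e: np.exp(-(e * r) ** 2)
            A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]), eps[None, :])
            w = np.linalg.solve(A, f_nodes)           # interpolation weights
            B = phi(np.abs(x_eval[:, None] - x_nodes[None, :]), eps[None, :])
            return B @ w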

  7. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  8. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563

  9. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
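
    For reference, trilinear interpolation itself is only a few lines: the value at a point is a weighted average of the 8 surrounding voxels, with weights given by the fractional offsets. A minimal sketch assuming the query lies strictly inside the volume:

        import numpy as np

        def trilinear(volume, p):
            """Interpolate a regular 3-D volume at point p (voxel coords)."""
            i, j, k = (int(np.floor(c)) for c in p)
            fx, fy, fz = p[0] - i, p[1] - j, p[2] - k
            c = volume[i:i + 2, j:j + 2, k:k + 2].astype(float)  # 2x2x2 cube
            c = c[0] * (1 - fx) + c[1] * fx     # collapse x
            c = c[0] * (1 - fy) + c[1] * fy     # collapse y
            return c[0] * (1 - fz) + c[1] * fz  # collapse z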

  10. Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.

    PubMed

    Zhang, Hua; Sonke, Jan-Jakob

    2013-01-01

    Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate more CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to linear interpolation (LI) and the BDI method. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and image blur induced by interpolation was kept below that of the other interpolation methods.

  11. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal and empirical Bayesian Kriging, and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of >10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography and river levels).

  12. Stress wave velocity patterns in the longitudinal-radial plane of trees for defect diagnosis

    Treesearch

    Guanghui Li; Xiang Weng; Xiaocheng Du; Xiping Wang; Hailin Feng

    2016-01-01

    Acoustic tomography for urban tree inspection typically uses stress wave data to reconstruct tomographic images for the trunk cross section using interpolation algorithm. This traditional technique does not take into account the stress wave velocity patterns along tree height. In this study, we proposed an analytical model for the wave velocity in the longitudinal–...

  13. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted, global polynomial, local polynomial, tension spline, ordinary Kriging, simple Kriging and universal Kriging interpolation) were used to interpolate groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that simple Kriging gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the areas at its top and bottom. Because of changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth have caused over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
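
    The cross-validation step used to rank such methods is generic and easy to sketch: predict each observation well from all the others and accumulate the error. Below, inverse distance weighting stands in for any of the candidate interpolators; the helper names are ours.

        import numpy as np

        def idw(points, values, query, power=2.0):
            """Inverse-distance-weighted prediction at one query location."""
            d = np.linalg.norm(points - query, axis=1)
            w = 1.0 / np.maximum(d, 1e-12) ** power
            return float(w @ values / w.sum())

        def loo_rmse(points, values, predict):
            """Leave-one-out RMSE of an interpolation method."""
            errs = []
            for i in range(len(points)):
                keep = np.arange(len(points)) != i
                errs.append(predict(points[keep], values[keep], points[i])
                            - values[i])
            return float(np.sqrt(np.mean(np.square(errs))))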

  14. Estimating the atmospheric boundary layer height over sloped, forested terrain from surface spectral analysis during BEARPEX

    NASA Astrophysics Data System (ADS)

    Choi, W.; Faloona, I. C.; McKay, M.; Goldstein, A. H.; Baker, B.

    2010-11-01

    In this study, the atmospheric boundary layer (ABL) height (zi) over complex, forested terrain is estimated based on the power spectra and the integral length scale of horizontal winds obtained from a three-axis sonic anemometer during the BEARPEX (Biosphere Effects on Aerosol and Photochemistry) Experiment. The zi values estimated with this technique showed very good agreement with observations obtained from tethered balloon sonde (2007) and rawinsonde (2009) measurements under unstable conditions (z/L < 0) at the coniferous forest in the California Sierra Nevada. The behavior of the nocturnal boundary layer height (h) and power spectra of lateral winds and temperature under stable conditions (z/L > 0) is also presented. The nocturnal boundary layer height is found to be fairly well predicted by a recent interpolation formula proposed by Zilitinkevich et al. (2007), although it was observed to only vary from 60-80 m during the experiment. Finally, significant directional wind shear was observed during both day and night with winds backing from the prevailing west-southwesterlies in the ABL (anabatic cross-valley circulation) to consistent southerlies in a layer ~1 km thick just above the ABL before veering to the prevailing westerlies further aloft. We show that this is consistent with the forcing of a thermal wind driven by the regional temperature gradient directed due east in the lower troposphere.

  15. Feasibility and validation of virtual autopsy for dental identification using the Interpol dental codes.

    PubMed

    Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy

    2013-05-01

    Virtual autopsy is a medical imaging technique, using full-body computed tomography (CT), allowing for a noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. The study aimed to verify whether PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT Scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal and sagittal slices. InSpace® (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution enabled the collection of more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed (3D) morphological features of dental and maxillofacial structures are important identifiers. The latter can become particularly relevant in the future, not only because of their inherent spatial features, but also because of increasing preventive dental treatment and the decreasing application of dental restorations. In conclusion, PM full-body CT examinations need to be implemented in the PM dental charting protocols and the Interpol dental codes should be adapted accordingly.

  16. Hermite-Birkhoff interpolation in the nth roots of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.

    1980-06-01

    Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelson, R.; Smith, K. L.; Mushotzky, R.

    We present time series analyses of the full Kepler data set of Zw 229–15. This Kepler light curve—with a baseline greater than 3 yr, composed of virtually continuous, evenly sampled 30 minute measurements—is unprecedented in its quality and precision. We utilize two methods of power spectral analysis to investigate the optical variability and search for evidence of a bend frequency associated with a characteristic optical variability timescale. Each method yields similar results. The first interpolates across data gaps to use the standard Fourier periodogram. The second, using the CARMA-based time-domain modeling technique of Kelly et al., does not need evenly sampled data. Both methods find excess power at high frequencies that may be due to Kepler instrumental effects. More importantly, both also show strong bends (Δα ∼ 2) at timescales of ∼5 days, a feature similar to those seen in the X-ray power spectral densities of active galactic nuclei (AGNs) but never before in the optical. This observed ∼5 day timescale may be associated with one of several physical processes potentially responsible for the variability. A plausible association could be made with light-crossing, dynamical or thermal timescales depending on the assumed value of the accretion disk size and on unobserved disk parameters such as α and H/R. This timescale is not consistent with the viscous timescale, which would be years in a ∼10^7 M_☉ AGN such as Zw 229–15. However, there must be a second bend on long (≳ 1 yr) timescales and that feature could be associated with the viscous timescale.

  18. Formulation of the rotational transformation of wave fields and their application to digital holography.

    PubMed

    Matsushima, Kyoji

    2008-07-01

    Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.

  19. Uncertainty estimates of altimetric Global Mean Sea Level timeseries

    NASA Astrophysics Data System (ADS)

    Scharffenberg, Martin; Hemming, Michael; Stammer, Detlef

    2016-04-01

    We present an attempt to provide uncertainty measures for global mean sea level (GMSL) time series. For this purpose, sea surface height (SSH) fields simulated by the high-resolution STORM/NCEP model for the period 1993-2010 were subsampled along altimeter tracks and processed similarly to the techniques used by five working groups to estimate GMSL. Results suggest that the spatial and temporal resolution have a substantial impact on GMSL estimates. Major impacts can result especially from the interpolation technique or the treatment of SSH outliers, and can easily lead to artificial temporal variability in the resulting time series.

  20. Automated mapping of the ocean floor using the theory of intrinsic random functions of order k

    USGS Publications Warehouse

    David, M.; Crozel, D.; Robb, James M.

    1986-01-01

    High-quality contour maps can be computer-drawn from single-track echo-sounding data by combining Universal Kriging and the theory of intrinsic random functions of order k (IRFK). These methods interpolate values among the closely spaced points that lie along relatively widely spaced lines. The technique provides a variance which can be contoured as a quantitative measure of map precision. The technique can be used to evaluate alternative survey trackline configurations and data collection intervals, and can be applied to other types of oceanographic data.

  1. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest-neighbor searching techniques to implement the fast matrix-vector products.
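
    The key point is that Krylov solvers touch the kriging matrix only through matrix-vector products, so the product can be replaced by a fast approximate one. A minimal sketch using simple kriging with a Gaussian covariance and a plain dense matvec as the placeholder (the paper substitutes fast multipole or nearest-neighbor accelerated products):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def simple_kriging_cg(X, y, x0, length=1.0):
            """Kriged estimate at x0 via conjugate gradients."""
            def cov(A, B):
                d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
                return np.exp(-(d / length) ** 2)

            C = cov(X, X) + 1e-10 * np.eye(len(X))    # SPD covariance matrix
            op = LinearOperator(C.shape, matvec=lambda v: C @ v)
            c0 = cov(X, x0[None, :])[:, 0]
            w, info = cg(op, c0)                      # iterative solve C w = c0
            assert info == 0
            return float(w @ y)                       # kriged estimate at x0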

  2. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m² section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49-92%, whereas those based on 5% sampling attained accuracies of 57-95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.

  3. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed. It can save 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320 @ 78 fps video sequences.

  4. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media

    DTIC Science & Technology

    2015-09-24

    TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren (TRAC Project Code 060114).

  5. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a magnetic resonance imaging technology that has developed rapidly in recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but the method does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed the diffusion tensors, with the direction of the tensors being represented by a quaternion. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  6. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  7. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  8. Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data.

    PubMed

    de Cheveigné, Alain; Arzounian, Dorothée

    2018-05-15

    Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data.

  9. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
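
    The just-in-time idea is easy to sketch: no global model is kept; instead, a local model is fitted around each query using similarity weights. For brevity the sketch below uses locally weighted linear regression in place of a full LW-PLS decomposition; X_hist and y_hist are assumed to hold past RRI-derived features and the target RRI.

        import numpy as np

        def jit_predict(X_hist, y_hist, x_query, h=1.0):
            """Locally weighted linear regression at one query point."""
            d = np.linalg.norm(X_hist - x_query, axis=1)
            w = np.exp(-(d / h) ** 2)                       # similarity weights
            A = np.hstack([np.ones((len(X_hist), 1)), X_hist])
            AtW = A.T * w                                   # weighted normal eqs.
            beta = np.linalg.solve(AtW @ A + 1e-8 * np.eye(A.shape[1]),
                                   AtW @ y_hist)
            return float(beta @ np.concatenate(([1.0], x_query)))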

  10. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the entire 360° view around the optical axis onto an annular plane by way of flat-cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between an object and its image, the panoramic imaging system plays an important role in the applications of robot vision, surveillance and virtual reality. An annular image needs to be unwrapped to a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it is too slow for practical use. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images, considering the characteristics of the panoramic image. The result indicates that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic interpolation.
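
    For orientation, evaluating a Bezier surface reduces to repeated de Casteljau passes; the paper's contribution is arranging such evaluations efficiently for the panoramic unwrapping, which is not reproduced here.

        import numpy as np

        def de_casteljau(pts, t):
            """Evaluate a Bezier curve with control points pts at parameter t."""
            pts = np.asarray(pts, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        def bezier_surface(P, u, v):
            """Tensor-product Bezier patch: rows first, then the results."""
            rows = np.array([de_casteljau(row, u) for row in P])
            return de_casteljau(rows, v)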

  11. On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians

    NASA Astrophysics Data System (ADS)

    Valverde, Clodoaldo; Baseia, Basílio

    2018-01-01

    We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.

  12. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.

  13. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
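
    A minimal sketch of the Newton form with divided differences; as the abstract notes, numerical stability in practice depends on the ordering of the points and on the interval scaling, which this sketch does not enforce.

        import numpy as np

        def divided_differences(x, f):
            """Newton-form coefficients via in-place divided differences."""
            x = np.asarray(x, dtype=float)
            c = np.array(f, dtype=float)
            for k in range(1, len(x)):
                c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
            return c

        def newton_eval(x, c, t):
            """Horner-style evaluation of the Newton-form polynomial at t."""
            p = c[-1]
            for k in range(len(c) - 2, -1, -1):
                p = p * (t - x[k]) + c[k]
            return p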

  14. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework.

  15. Asynchronous signal-dependent non-uniform sampler

    NASA Astrophysics Data System (ADS)

    Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin

    2014-05-01

    Analog sparse signals resulting from biomedical and sensing network applications are typically non-stationary with frequency-varying spectra. By ignoring that the maximum frequency of their spectra is changing, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal-dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator, which consumes low power and can be processed using analog procedures. Using Prolate Spheroidal Wave Functions (PSWFs), interpolation of the original signal is performed, thus giving an asynchronous analog-to-digital and digital-to-analog conversion. Stable solutions are obtained by using modulated PSWFs. The advantage of the adapted asynchronous sampler is that the range of frequencies of the sparse signal is taken into account, avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed and the sampler can be implemented as a bank of filters for unknown ranges of frequencies. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.

  16. An analytical technique for predicting the characteristics of a flexible wing equipped with an active flutter-suppression system and comparison with wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Abel, I.

    1979-01-01

    An analytical technique for predicting the performance of an active flutter-suppression system is presented. This technique is based on the use of an interpolating function to approximate the unsteady aerodynamics. The resulting equations are formulated in terms of linear, ordinary differential equations with constant coefficients. This technique is then applied to an aeroelastic model wing equipped with an active flutter-suppression system. Comparisons between wind-tunnel data and analysis are presented for the wing both with and without active flutter suppression. Results indicate that the wing flutter characteristics without flutter suppression can be predicted very well but that a more adequate model of wind-tunnel turbulence is required when the active flutter-suppression system is used.

  17. Introduction of a hybrid treatment delivery system used for quality assurance in multi-catheter interstitial brachytherapy

    NASA Astrophysics Data System (ADS)

    Kallis, Karoline; Kreppner, Stephan; Lotter, Michael; Fietkau, Rainer; Strnad, Vratislav; Bert, Christoph

    2018-05-01

    Multi-catheter interstitial brachytherapy (iBT) is a treatment option for breast cancer patients after breast-conserving surgery. Typically, only a few additional quality interventions after the first irradiation have been introduced to ensure the planned treatment delivery. The purpose of this study is therefore to show the possibilities of an electromagnetic tracking (EMT) system integrated into the afterloader for quality assurance (QA) in high-dose-rate (HDR) iBT of patients with breast cancer. The hybrid afterloader system equipped with an electromagnetic sensor was used for all phantom and patient measurements. Phantom measurements were conducted to estimate the quality of different evaluation schemes. After a coherent point drift registration of the EMT traces to the reconstructed catheters based on computed tomograms, the dwell positions (DPs) were defined. Different fitting and interpolation methods were analyzed for the reconstruction of DPs. All estimated DPs were compared to the DPs defined in treatment planning. To date, the implant geometry of 20 patients treated with HDR brachytherapy has been acquired and analyzed. Regarding the reconstruction techniques, both fitting and interpolation were able to detect manually introduced shifts and swaps. Nonetheless, interpolation showed superior results (RMSE = 1.27 mm), whereas fitting appeared more robust to distortion and motion. The EMT system proved to be beneficial for QA in brachytherapy, and clinical feasibility was demonstrated.

  18. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but it is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (e.g., HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early twentieth century through to the present. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using kriging. This study will present results that quantify the variability and trends and compare them to current trends of other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).
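
    As a rough illustration of the gridding step, the following Python sketch implements textbook ordinary kriging with a Gaussian covariance model; the model choice, its parameters, and all variable names are assumptions for illustration, not details taken from the UAH study. Note that the dense solve is O(n³) in the number of observations, which is why operational implementations work with local neighbourhoods.

        import numpy as np

        def ordinary_kriging(xy_obs, z_obs, xy_tgt, sill=1.0, rng=500.0, nugget=0.05):
            # Gaussian covariance model (sill, range, nugget are illustrative).
            def cov(d):
                return (sill - nugget) * np.exp(-((d / rng) ** 2))

            n = len(z_obs)
            d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            # Ordinary-kriging system: covariances bordered by a Lagrange row
            # that forces the weights to sum to one (unbiasedness).
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = cov(d_obs)
            A[n, n] = 0.0
            d_tgt = np.linalg.norm(xy_tgt[:, None, :] - xy_obs[None, :, :], axis=-1)
            b = np.ones((n + 1, len(xy_tgt)))
            b[:n, :] = cov(d_tgt).T
            w = np.linalg.solve(A, b)      # per-target weights + multiplier
            return z_obs @ w[:n, :]        # kriged estimate at each target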

  19. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Murman, S. M.; Berger, M. J.

    2003-01-01

    This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control-surface deflection or finite-difference-based gradient design methods.
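
    The reorder-then-cut idea behind SFC partitioning is easy to sketch. The snippet below is an illustration, not the authors' code: it orders toy 2-D cell indices along a Morton (Z-order) curve, one common space-filling curve, and cuts the resulting 1-D ordering into contiguous chunks, each of which stays spatially compact.

        import numpy as np

        def morton_key(i, j, bits=16):
            # Interleave the bits of cell indices (i, j) into one Z-order key.
            key = 0
            for b in range(bits):
                key |= ((i >> b) & 1) << (2 * b + 1) | ((j >> b) & 1) << (2 * b)
            return key

        def partition_cells(cells, n_parts):
            # Sort cells along the space-filling curve, then cut the 1-D
            # ordering into contiguous, nearly equal chunks.
            order = sorted(range(len(cells)), key=lambda k: morton_key(*cells[k]))
            return np.array_split(np.asarray(order), n_parts)

        cells = [(i, j) for i in range(8) for j in range(8)]   # toy 8x8 mesh
        parts = partition_cells(cells, 4)   # each part is spatially compact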

  20. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    NASA Astrophysics Data System (ADS)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important for understanding the internal structure of sedimentary basins, and subsidence analysis provides significant insights into basin evolution. We designed a new software package to process and visualize the stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five interpolation methods (linear, natural, cubic spline, kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps: (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, the software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediments in the central Vienna Basin.

  1. A gap-filling model for eddy covariance latent heat flux: Estimating evapotranspiration of a subtropical seasonal evergreen broad-leaved forest as an example

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ying; Chu, Chia-Ren; Li, Ming-Hsu

    2012-10-01

    In this paper we present a semi-parametric multivariate gap-filling model for tower-based measurements of latent heat flux (LE). Two statistical techniques, principal component analysis (PCA) and a nonlinear interpolation approach, were integrated into this LE gap-filling model. The PCA was first used to resolve the multicollinearity among various environmental variables, including radiation, soil moisture deficit, leaf area index, wind speed, etc. Two nonlinear interpolation methods, multiple regressions (MRS) and K-nearest neighbors (KNN), were examined with randomly selected flux gaps for both clear-sky and nighttime/cloudy data for incorporation into this LE gap-filling model. Experimental results indicated that the KNN interpolation approach provides consistent LE estimates, while MRS overestimates during nighttime/cloudy conditions. Rather than using empirical regression parameters, the KNN approach resolves the nonlinear relationship between the gap-filled LE flux and the principal components with adaptive K values under different atmospheric states. The developed LE gap-filling model (PCA with KNN) achieves an RMSE of 2.4 W m⁻² (~0.09 mm day⁻¹) at a weekly time scale when 40% artificial flux gaps are added to the original dataset. Annual evapotranspiration at this study site was estimated at 736 mm (1803 MJ) and 728 mm (1785 MJ) for 2008 and 2009, respectively.
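
    A minimal sketch of the PCA-plus-KNN idea follows, assuming a driver matrix X_env (time steps by environmental variables) and a flux series le with a boolean gap mask; all names and the fixed k are illustrative stand-ins for the paper's adaptive scheme.

        import numpy as np

        def pca_knn_gapfill(X_env, le, gap_mask, k=5, n_pc=3):
            # Standardize the environmental drivers and project them onto the
            # leading principal components (via SVD) to remove multicollinearity.
            Xs = (X_env - X_env.mean(0)) / X_env.std(0)
            _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
            pcs = Xs @ Vt[:n_pc].T
            filled, good = le.copy(), ~gap_mask
            for t in np.where(gap_mask)[0]:
                # Average the LE of the k nearest neighbours in PC space.
                d = np.linalg.norm(pcs[good] - pcs[t], axis=1)
                nn = np.argsort(d)[:k]
                filled[t] = le[good][nn].mean()
            return filled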

  2. Meshless Method with Operator Splitting Technique for Transient Nonlinear Bioheat Transfer in Two-Dimensional Skin Tissues

    PubMed Central

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-01

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180

  4. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, T; Koo, T

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity-modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) with four different interpolation methods: bilinear, cubic spline, bicubic, and nearest-neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For fluence maps of the same resolution, cubic spline and bicubic interpolation are almost equally the best interpolation methods, while nearest-neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, the γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65%, and 82.23±0.48% for bilinear, cubic spline, bicubic, and nearest-neighbor interpolation, respectively. For 7 mm distance fluence maps, the γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97%, and 71.93±4.92% for bilinear, cubic spline, bicubic, and nearest-neighbor interpolation, respectively. Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool and that the measured fluence maps be interpolated using cubic spline or bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
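
    The resolution experiment can be mimicked in outline with SciPy's spline-based zoom. The toy sketch below assumes a smooth synthetic stand-in for the 1 mm reference map and reports the mean absolute difference rather than a full γ (3%/3 mm) analysis.

        import numpy as np
        from scipy import ndimage

        # Smooth synthetic stand-in for a 1 mm reference fluence map.
        yy, xx = np.mgrid[0:60, 0:60] / 60.0
        ref = np.sin(4 * np.pi * xx) * np.cos(3 * np.pi * yy) + xx
        coarse = ref[::5, ::5]                 # simulate a 5 mm detector grid

        for name, order in [("nearest", 0), ("bilinear", 1), ("cubic spline", 3)]:
            up = ndimage.zoom(coarse, 5, order=order)    # back to the 1 mm grid
            print(f"{name:12s} mean |difference| = {np.abs(up - ref).mean():.4f}")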

  5. Introduction to Geostatistics

    NASA Astrophysics Data System (ADS)

    Kitanidis, P. K.

    1997-05-01

    Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.

  6. High-Maneuverability Airframe: Initial Investigation of Configuration’s Aft End for Increased Stability, Range, and Maneuverability

    DTIC Science & Technology

    2013-09-01

    including the interaction effects between the fins and canards. 2. Solution Technique 2.1 Computational Aerodynamics The double-precision solver of a...and overset grids (unified-grid). • Total variation diminishing discretization based on a new multidimensional interpolation framework. • Riemann ... solvers to provide proper signal propagation physics including versions for preconditioned forms of the governing equations. • Consistent and

  7. Analysis of heavy metal sources in soil using kriging interpolation on principal components.

    PubMed

    Ha, Hoehun; Olson, James R; Bian, Ling; Rogerson, Peter A

    2014-05-06

    Anniston, Alabama, has a long history of operation of foundries and other heavy industry. We assessed the extent of heavy metal contamination in soils by determining the concentrations of 11 heavy metals (Pb, As, Cd, Cr, Co, Cu, Mn, Hg, Ni, V, and Zn) in 2046 soil samples collected from 595 industrial and residential sites. Principal component analysis (PCA) was adopted to characterize the distribution of heavy metals in soil in this region. In addition, a geostatistical technique (kriging) was used to create regional distribution maps for the interpolation of nonpoint sources of heavy metal contamination using geographic information system (GIS) techniques. Significant differences were found between sampling zones in the concentrations of heavy metals, with the exception of Ni. Three main components explaining the heavy metal variability in soils were identified. The results suggest that Pb, Cd, Cu, and Zn were associated with anthropogenic activities, such as the operations of some foundries and major railroads, which released these heavy metals, whereas the presence of Co, Mn, and V was controlled by natural sources, such as soil texture, pedogenesis, and soil hydrology. In general terms, the soil levels of heavy metals analyzed in this study were higher than those reported in previous studies of other industrial and residential communities.

  8. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, creating decision bands with variable widths on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  9. EOS MLS Level 1B Data Processing Software. Version 3

    NASA Technical Reports Server (NTRS)

    Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina

    2011-01-01

    This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostic outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) are calibrated by the software, and the filter-channel space reference measurements are interpolated onto the times of each limb measurement, with the interpolates being differenced from the measurements. Filter-channel calibration target measurements are interpolated onto the times of each limb measurement and are used to compute radiometric gain. The total signal power is determined and analyzed by each digital autocorrelator spectrometer (DACS) during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and separately estimates the spectrally smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.

  10. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions according to the skill scores defined by Perkins et al. (2013) for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
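
    KDDM itself is kernel-density based; the sketch below shows the simpler empirical quantile-mapping idea underlying this family of bias corrections (a stand-in for illustration, not the KDDM implementation used in the study).

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_new):
            # Empirical quantile mapping: map each new model value to the
            # observed value at the same quantile of the historical model
            # distribution, so the corrected output matches the observed CDF.
            q = np.searchsorted(np.sort(model_hist), model_new) / len(model_hist)
            return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))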

  11. Quadrature, Interpolation and Observability

    NASA Technical Reports Server (NTRS)

    Hodges, Lucille McDaniel

    1997-01-01

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff system. This thesis contains the definition of a Tchebycheff system, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis. But the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.
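
    For the classical polynomial case the idea is readily demonstrated with NumPy's Gauss-Legendre nodes and weights (illustrative only; the thesis concerns more general Tchebycheff systems).

        import numpy as np

        # Five Gauss-Legendre nodes integrate polynomials up to degree 9
        # exactly on [-1, 1]; here they capture a smooth non-polynomial
        # integrand with only five function evaluations.
        nodes, weights = np.polynomial.legendre.leggauss(5)
        approx = np.sum(weights * np.exp(nodes))   # ~ integral of e^x on [-1, 1]
        exact = np.exp(1.0) - np.exp(-1.0)
        print(approx, exact)                       # agree to about 1e-8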

  12. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.

  13. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
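
    The most-neighbors-first ordering can be sketched in Python as below, assuming a value array and a boolean indexed mask. The sketch omits the band-contrast gating and threshold iterations of the published program, and uses the maximum over indexed neighbours as a simple stand-in fill value.

        import numpy as np
        from scipy import ndimage

        def neighbor_fill(values, indexed):
            # Fill non-indexed pixels, best-constrained first: require 8
            # indexed neighbours, then relax the requirement one neighbour at
            # a time, so grain interiors fill before grains grow together.
            kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
            out, ok = values.astype(float).copy(), indexed.copy()
            for need in range(8, 0, -1):
                while True:
                    counts = ndimage.convolve(ok.astype(int), kernel, mode="constant")
                    todo = (~ok) & (counts >= need)
                    if not todo.any():
                        break
                    # Largest indexed neighbour as a simple stand-in fill value.
                    nbr = ndimage.maximum_filter(np.where(ok, out, -np.inf), size=3)
                    out[todo] = nbr[todo]
                    ok |= todo
            return out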

  14. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with the reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior which, although inherent in the data, may not be otherwise apparent. It is also the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. For practical purposes, however, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely, but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  15. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
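
    The headline comparison is easy to reproduce in outline: the sketch below resamples a sampled tone at off-grid times with linear interpolation and with a cubic B-spline interpolant. SciPy's make_interp_spline solves the interpolation system for the spline coefficients, which plays the role of the pre-filter mentioned above; the signal parameters are arbitrary.

        import numpy as np
        from scipy.interpolate import make_interp_spline

        fs, f0 = 48_000.0, 3_000.0                 # arbitrary rate and tone
        t = np.arange(256) / fs
        x = np.sin(2 * np.pi * f0 * t)
        t_new = t[:-1] + 0.37 / fs                 # off-grid query times

        lin = np.interp(t_new, t, x)
        spl = make_interp_spline(t, x, k=3)(t_new) # cubic B-spline interpolant
        truth = np.sin(2 * np.pi * f0 * t_new)
        for name, y in [("linear", lin), ("cubic B-spline", spl)]:
            # The spline's RMS error is orders of magnitude below linear's.
            print(name, np.sqrt(np.mean((y - truth) ** 2)))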

  16. Quantum realization of the nearest neighbor value interpolation method for INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping

    2018-07-01

    This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest-neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images of INEQR is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm in the form of circuits of the NNV interpolation for INEQR is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation experiments involving different classical (i.e., conventional or non-quantum) images and scaling ratios are performed using MATLAB 2014b on a classical computer, demonstrating that the proposed interpolation method achieves higher performance in terms of resolution compared to nearest-neighbor and bilinear interpolation.

  17. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
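
    A sketch of polar-to-Cartesian resampling with bilinear weights computed in polar coordinates, assuming the sensor image is stored as a (radius x angle) array; all names and conventions are illustrative, not taken from the paper.

        import numpy as np

        def polar_to_cartesian(polar_img, out_size):
            n_r, n_th = polar_img.shape
            half = out_size / 2.0
            y, x = np.mgrid[0:out_size, 0:out_size] - half
            # Fractional polar coordinates of every Cartesian output pixel.
            r = np.sqrt(x * x + y * y) / half * (n_r - 1)
            th = (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi) * n_th
            r0 = np.clip(r.astype(int), 0, n_r - 2)
            t0 = th.astype(int) % n_th
            t1 = (t0 + 1) % n_th                  # the angle axis wraps around
            dr, dt = r - r0, th - t0
            out = (polar_img[r0, t0] * (1 - dr) * (1 - dt)
                   + polar_img[r0, t1] * (1 - dr) * dt
                   + polar_img[r0 + 1, t0] * dr * (1 - dt)
                   + polar_img[r0 + 1, t1] * dr * dt)
            out[r > n_r - 1] = 0.0                # outside the sensor radius
            return out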

  18. Generation of a precise DEM by interactive synthesis of multi-temporal elevation datasets: a case study of Schirmacher Oasis, East Antarctica

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    Digital elevation models (DEMs) are indispensable for analyses such as topographic feature extraction, ice-sheet melting, slope stability analysis, and landscape analysis. Such analyses require a highly accurate DEM. Available DEMs of the Antarctic region compiled using radar altimetry and the Antarctic digital database indicate elevation variations of up to hundreds of meters, which necessitates the generation of an improved local DEM. An improved DEM of the Schirmacher Oasis, East Antarctica, has been generated by synergistically fusing satellite-derived laser altimetry data from the Geoscience Laser Altimeter System (GLAS), Radarsat Antarctic Mapping Project (RAMP) elevation data and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global elevation data (GDEM). This is a distinctive attempt to generate a DEM of a part of Antarctica by fusing multiple elevation datasets, which is essential for modeling ice elevation change and addressing ice mass balance. We analyzed a suite of interpolation techniques for constructing a DEM from the GLAS, RAMP and ASTER DEM-based point elevation datasets, in order to determine the level of confidence with which the interpolation techniques can generate a better interpolated continuous surface, and eventually improve the elevation accuracy of the DEM built from the synergistically fused RAMP, GLAS and ASTER point elevation datasets. The DEM presented in this work has a vertical accuracy (≈ 23 m) better than the RAMP DEM (≈ 57 m) and the ASTER DEM (≈ 64 m) individually. The RAMP and ASTER DEM elevations were corrected using differential GPS elevations as ground reference data, and the accuracy obtained after fusing the multitemporal datasets is improved over that of existing DEMs constructed using RAMP or ASTER alone. This is our second attempt at fusing multitemporal, multisensor and multisource elevation data to generate a DEM of Antarctica. Our approach exploits the strengths of each elevation data source to produce an accurate elevation model.

  19. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne laser scanning systems with light detection and ranging (LiDAR) technology are one of the fastest and most accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud, or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated for generating LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed for classifying the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.
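
    The class-based gap filling can be sketched with SciPy's Euclidean distance transform, which returns, for every empty cell, the indices of the nearest classified cell; a minimal stand-in for the paper's workflow, with the nodata convention assumed.

        import numpy as np
        from scipy import ndimage

        def fill_gaps_with_nearest_class(class_raster, nodata=0):
            # For every empty cell, adopt the class of the nearest classified
            # cell (indices supplied by the Euclidean distance transform).
            empty = class_raster == nodata
            _, (iy, ix) = ndimage.distance_transform_edt(empty, return_indices=True)
            return class_raster[iy, ix]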

  20. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  1. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal-energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  2. WE-E-18A-07: MAGIC: Multi-Acquisition Gain Image Correction for Mobile X-Ray Systems with Intrinsic Localization Crosshairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y; Sharp, G

    2014-06-15

    Purpose: Gain calibration for X-ray imaging systems with movable flat-panel detectors (FPDs) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and the crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and the heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain-map for each FPD position. Large-kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original crosshair regions previously filled with the interpolated values. A final, seamless gain-map was created from the processed images by either the sequential filling (SF) or the selective averaging (SA) technique developed in this study. Quantitative evaluation was performed based on the detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was found to be more effective in eliminating crosshair artifacts than the conventional single-image method. The mean DQEIF over the range of frequencies from 0.5 to 3.5 mm⁻¹ was 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region, for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system with a moving FPD and an intrinsic crosshair. The technique showed advantages over a conventional single-image-based technique by successfully reducing residual crosshair artifacts, and delivered higher image quality with respect to DQE.

  3. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  4. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  5. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, no quantum version of it existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down, for NEQR are given by using the multi-controlled-NOT operation, a special add-one operation, the reverse parallel adder, parallel subtractor, multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that the scaled-up image using bilinear interpolation is clearer and less distorted than with nearest-neighbor interpolation.
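
    For reference, the classical operation the quantum circuits reproduce is just a weighted average of the four surrounding pixels; a minimal NumPy version for a grayscale array follows (names are illustrative).

        import numpy as np

        def bilinear_scale(img, new_h, new_w):
            h, w = img.shape
            ys = np.linspace(0, h - 1, new_h)
            xs = np.linspace(0, w - 1, new_w)
            y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
            x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
            dy = (ys - y0)[:, None]                 # fractional offsets
            dx = (xs - x0)[None, :]
            a = img[y0][:, x0];     b = img[y0][:, x0 + 1]
            c = img[y0 + 1][:, x0]; d = img[y0 + 1][:, x0 + 1]
            # Weighted average of the four neighbours of each output pixel.
            return (a * (1 - dy) * (1 - dx) + b * (1 - dy) * dx
                    + c * dy * (1 - dx) + d * dy * dx)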

  6. Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Niu, Xiamu

    2016-01-01

    This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on flexible representation of quantum images (FRQI) and novel enhanced quantum representation (NEQR). Firstly, the feasibility of the classical image nearest-neighbor interpolation for quantum images of FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of the nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, quantum circuit of the nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in their low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of a halving operation. Simulation results show the effectiveness of the proposed circuits.

  7. Spatial and temporal distributions of (134)Cs and (137)Cs derived from the TEPCO Fukushima Daiichi Nuclear Power Plant accident in the North Pacific Ocean by using optimal interpolation analysis.

    PubMed

    Inomata, Y; Aoyama, M; Tsubono, T; Tsumune, D; Hirose, K

    2016-01-01

    Optimal interpolation (OI) analysis was used to investigate the oceanic distributions of (134)Cs and (137)Cs released from the Tokyo Electric Power Company Fukushima Daiichi Nuclear Power Plant (FNPP1) accident. From the end of March to early April 2011, extremely high activities were observed in the coastal surface seawater near the FNPP1. The high activities spread to a region near 165°E in the western North Pacific Ocean, with a latitudinal center of 40°N. Atmospheric deposition also caused high activities in the region between 180° and 130°W in the North Pacific Ocean. The inventory of FNPP1-released (134)Cs in the North Pacific Ocean was estimated to be 15.3 ± 2.6 PBq. About half of this activity (8.4 ± 2.6 PBq) was found in the coastal region near the FNPP1. After 6 April 2011, when major direct releases ceased, the FNPP1-released (134)Cs in the coastal region decreased exponentially with an apparent half-time of about 4.2 ± 0.5 days and declined to about 2 ± 0.4 PBq by the middle of May 2011. Taking into account that the (134)Cs/(137)Cs activity ratio was about 1 just after release and was extremely uniform during the first month after the accident, the amount of (137)Cs released by the FNPP1 accident increased the North Pacific inventory of (137)Cs due to bomb testing during the 1950s and early 1960s by 20%.

  8. Gapped fermionic spectrum from a domain wall in seven dimensions

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Subir; Rai, Nishal

    2018-05-01

    We obtain a domain wall solution in maximally gauged seven-dimensional supergravity, which interpolates between two AdS spaces and spontaneously breaks a U(1) symmetry. We analyse the frequency dependence of the conductivity and find power-law behaviour at low frequency. We consider certain fermions of supergravity in the background of this domain wall and compute the holographic spectral function of the operators in the dual six-dimensional theory. We find that fermionic operators involving bosons with a non-zero expectation value lead to a gapped spectrum.

  9. A data base and analysis program for shuttle main engine dynamic pressure measurements. Appendix F: Data base plots for SSME tests 750-120 through 750-200

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base and data base management system developed to characterize the Space Shuttle Main Engine (SSME) dynamic pressure environment is presented. The data base represents dynamic pressure measurements obtained during single engine hot firing tests of the SSME. Software is provided to permit statistical evaluation of selected measurements under specified operating conditions. An interpolation scheme is also included to estimate spectral trends with SSME power level.
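
    A hypothetical sketch of such a scheme: spectra measured at a few tested power levels are linearly interpolated, level by level, to estimate the spectrum at an untested condition. The power levels and spectra below are stand-ins, not data from the report.

        import numpy as np

        power = np.array([65.0, 90.0, 100.0, 104.0, 109.0])  # tested levels, assumed
        spectra = np.random.rand(len(power), 128)            # stand-in PSD rows
        target = 96.0                                        # untested condition
        # Fractional position of the target between the bracketing levels.
        pos = np.interp(target, power, np.arange(len(power), dtype=float))
        lo = min(int(pos), len(power) - 2)
        frac = pos - lo
        psd_est = (1 - frac) * spectra[lo] + frac * spectra[lo + 1]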

  10. Improving a spatial rainfall product using a data-mining approach and its effect on the hydrological response of a meso-scale catchment.

    NASA Astrophysics Data System (ADS)

    Oriani, F.; Stisen, S.; Demirel, C.

    2017-12-01

    The spatial representation of rainfall is of primary importance for correctly studying the uncertainty of basin recharge and its propagation to surface and underground circulation. We consider here the daily gridded rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network (from approximately 500 stations in the period 1996-2006 to 250 in the period 2007-2014), the gridded rainfall product, based on the interpolation of these data, is much less reliable. The research is focused on the Skjern catchment (1,050 km², western Jutland), where we have access to the complete rain-gauge database from the Danish Hydrological Observatory and can compute the distributed hydrological response at the 1-km scale. To give a better estimation of the gridded rainfall input, we start from ground measurements by simulating the missing data with a stochastic data-mining approach, then recompute the grid interpolation. To maximize the predictive power of the technique, combinations of station time series that are the most informative about each other are selected on the basis of their correlation and available historical data. The missing data inside these time series are then simulated together using the direct sampling technique (DS) [1, 2]. DS simulates a datum by sampling the historical record of the same stations where a similar data pattern occurs, preserving their complex statistical relations. The simulated data are reinjected into the whole dataset and used as conditioning data to progressively fill the gaps in other stations. The results show that the proposed methodology, tested on the period 1995-2012, can increase the realism of the gridded rainfall product by regenerating the missing ground measurements. The hydrological response is analyzed considering the observations at 5 hydrological stations. The presented methodology can be used in many regions to regenerate missing data using the information contained in the historical record and to propagate the prediction uncertainty to the hydrological response. [1] G. Mariethoz et al. (2010), Water Resour. Res., 10.1029/2008WR007621. [2] F. Oriani et al. (2014), Hydrol. Earth Syst. Sc., 10.5194/hessd-11-3213-2014.

  11. Construction of a 3-arcsecond digital elevation model for the Gulf of Maine

    USGS Publications Warehouse

    Twomey, Erin R.; Signell, Richard P.

    2013-01-01

    A system-wide description of the seafloor topography is a basic requirement for most coastal oceanographic studies. The necessary detail of the topography obviously varies with application, but for many uses, a nominal resolution of roughly 100 m is sufficient. Creating a digital bathymetric grid with this level of resolution can be a complex procedure due to a multiplicity of data sources, data coverages, datums and interpolation procedures. This report documents the procedures used to construct a 3-arcsecond (approximately 90-meter grid cell size) digital elevation model for the Gulf of Maine (71°30' to 63° W, 39°30' to 46° N). We obtained elevation and bathymetric data from a variety of American and Canadian sources, converted all data to the North American Datum of 1983 for horizontal coordinates and the North American Vertical Datum of 1988 for vertical coordinates, used a combination of automatic and manual techniques for quality control, and interpolated gaps using a surface-fitting routine.

  12. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  13. New Mass-Conserving Bedrock Topography for Pine Island Glacier Impacts Simulated Decadal Rates of Mass Loss

    NASA Astrophysics Data System (ADS)

    Nias, I. J.; Cornford, S. L.; Payne, A. J.

    2018-04-01

    High-resolution ice flow modeling requires bedrock elevation and ice thickness data, consistent with one another and with modeled physics. Previous studies have shown that gridded ice thickness products that rely on standard interpolation techniques (such as Bedmap2) can be inconsistent with the conservation of mass, given observed velocity, surface elevation change, and surface mass balance, for example, near the grounding line of Pine Island Glacier, West Antarctica. Using the BISICLES ice flow model, we compare results of simulations using both Bedmap2 bedrock and thickness data, and a new interpolation method that respects mass conservation. We find that simulations using the new geometry result in a higher sea level contribution than those using Bedmap2 and reveal decadal-scale trends in the ice stream dynamics. We test the impact of several sliding laws and find that accurately representing the bedrock and initial ice thickness is at least as important as the choice of sliding law.

  14. Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo

    2016-02-01

    Stochastic Optical Fluctuation Imaging (SOFI) is a superresolution fluorescence microscopy technique that enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge is that, when the SOFI algorithm is applied directly to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much finer than this pixel size. In the past, sophisticated cross-correlation schemes have been used to tackle this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We demonstrate the method on simulated and experimental data.
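    The core Fourier-interpolation step can be sketched in a few lines: zero-padding the centered spectrum of an image is exact interpolation for band-limited data and enlarges the pixel grid by an integer factor (a minimal sketch under the stated band-limited assumption; the fSOFi paper applies this idea within the SOFI cumulant pipeline, which is not reproduced here).

        import numpy as np

        def fourier_interpolate(img, factor=4):
            """Upsample a 2-D array by zero-padding its centered Fourier spectrum.
            Exact for band-limited data; `factor` must be a positive integer."""
            F = np.fft.fftshift(np.fft.fft2(img))
            h, w = img.shape
            H, W = h * factor, w * factor
            P = np.zeros((H, W), dtype=complex)
            y0, x0 = (H - h) // 2, (W - w) // 2
            P[y0:y0 + h, x0:x0 + w] = F        # embed spectrum in a larger zero array
            # rescale so pixel values are preserved (ifft2 divides by H*W)
            return np.fft.ifft2(np.fft.ifftshift(P)).real * factor ** 2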

  15. Modeling the Spatial and Temporal Variation of Monthly and Seasonal Precipitation on the Nevada Test Site and Vicinity, 1960-2006

    USGS Publications Warehouse

    Blainey, Joan B.; Webb, Robert H.; Magirl, Christopher S.

    2007-01-01

    The Nevada Test Site (NTS), located in the climatic transition zone between the Mojave and Great Basin Deserts, has a network of precipitation gages that is unusually dense for this region. This network measures monthly and seasonal variation in a landscape with diverse topography. Precipitation data from 125 climate stations on or near the NTS were used to spatially interpolate precipitation for each month during the period of 1960 through 2006 at high spatial resolution (30 m). The data were collected at climate stations using manual and/or automated techniques. The spatial interpolation method, applied to monthly accumulations of precipitation, is based on a distance-weighted multivariate regression between the amount of precipitation and the station location and elevation. This report summarizes the temporal and spatial characteristics of the available precipitation records for the period 1960 to 2006, examines the temporal and spatial variability of precipitation during the period of record, and discusses some extremes in seasonal precipitation on the NTS.
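    The interpolation scheme described here can be sketched as a weighted least-squares fit at each grid cell (a simplified sketch; the report's actual weighting scheme and regression form are not reproduced, and the kernel and range parameter below are assumptions).

        import numpy as np

        def predict_precip(stations_xyz, precip, cell_xyz, r=20000.0):
            """Distance-weighted regression of precipitation on location and
            elevation, evaluated at one grid cell. `stations_xyz` is (n, 3) with
            easting, northing, elevation in meters; `cell_xyz` is (3,)."""
            X = np.column_stack([np.ones(len(precip)), stations_xyz])   # [1, x, y, z]
            d = np.linalg.norm(stations_xyz[:, :2] - cell_xyz[:2], axis=1)
            w = 1.0 / (1.0 + (d / r) ** 2)          # assumed distance-decay kernel
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * precip, rcond=None)
            return np.array([1.0, *cell_xyz]) @ beta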

  16. Ranges of North American breeding birds: visualizing long-term population changes in North American breeding birds

    USGS Publications Warehouse

    Price, Jeff

    1995-01-01

    These maps show changes in the distribution and abundance patterns of some North American birds for the last 20 years. For each species there are four maps, each representing the average distribution and abundance pattern over the five-year periods 1970-1974, 1975-1979, 1980-1984, and 1985-1989. The maps are based on data collected by the USFWS/CWS Breeding Bird Survey (BBS). Only BBS routes that were run at least once during each of the five-year periods were used (about 1300 routes). The maps were created in the software package Surfer using a kriging technique to interpolate mean relative abundances for areas where no routes were run. On each map, a portion of northeast Canada was blanked out because there were not enough routes to allow for adequate interpolation. All of the maps in this presentation use the same color scale (shown below). The minimum value mapped was 0.5 birds per route, which represents the edge of the species range.

  17. An Interpolation and Compaction Technique for Gridded Data.

    DTIC Science & Technology

    1983-06-27

    Final report prepared for the Air Force Office of Scientific Research, Bolling Air Force Base, DC 20332, under Grant AFOSR-82-0166.

  18. Graphics and Flow Visualization of Computer Generated Flow Fields

    NASA Technical Reports Server (NTRS)

    Kathong, M.; Tiwari, S. N.

    1987-01-01

    Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two- and three-dimensional flow field solutions are addressed. The transformations and the use of PLOT3D, an interactive graphics program for CFD flow field solutions that runs on the color graphics IRIS workstation, are described. An overview of the IRIS workstation is also given.

  19. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter, three isoelectric baseline points per heartbeat are detected; these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations, producing a piecewise-linear curve that is computationally efficient to evaluate while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478

  20. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected; these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations, producing a piecewise-linear curve that is computationally efficient to evaluate while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
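    The essential step, piecewise-linear interpolation through detected isoelectric points followed by subtraction, can be sketched as follows (the Letter's refinement of segmenting each interval into different linear pieces is omitted, and `iso_idx` is assumed to come from the authors' earlier detection stage).

        import numpy as np

        def remove_baseline_drift(ecg, iso_idx):
            """Estimate baseline wander by linear interpolation through the
            isoelectric samples `iso_idx` and subtract it from the signal."""
            t = np.arange(len(ecg))
            baseline = np.interp(t, iso_idx, ecg[iso_idx])   # piecewise-linear anchors
            return ecg - baseline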

  1. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
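    A closely related construction is easy to reproduce with standard tools: interpolating the cumulative integral with a monotone Hermite (PCHIP) curve and differencing it at the new bin edges conserves the integral and cannot produce negative values for non-negative data. This is a sketch of the same idea, with the paper's single tunable shape parameter replaced by PCHIP's fixed monotone slopes.

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        def conservative_rebin(edges_in, counts, edges_out):
            """Re-bin histogrammed data onto new edges while conserving the
            integral: fit a monotone Hermite curve to the cumulative sum and
            difference it at the new edges."""
            cum = np.concatenate([[0.0], np.cumsum(counts)])
            F = PchipInterpolator(edges_in, cum)   # monotone => no over/undershoot
            return np.diff(F(edges_out))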

  2. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  3. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  4. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
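    The pipeline, minus the codec itself, can be sketched with standard tools (the decimation factor, interpolation order, and unsharp-mask parameters below are illustrative assumptions, not the patent's values).

        import numpy as np
        from scipy import ndimage

        def sender_side(img):
            # Decimate 2x in each dimension before the (omitted) JPEG/wavelet step.
            return img[::2, ::2]

        def receiver_side(small, alpha=0.6, sigma=1.0):
            up = ndimage.zoom(small, 2, order=3)       # interpolate back to full size
            blur = ndimage.gaussian_filter(up, sigma)
            return up + alpha * (up - blur)            # unsharp mask sharpens edges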

  5. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements performed at the Padua points in phase space, the optimal sampling points for polynomial interpolation in 2D. Our technique not only reduces the number of experimental measurements but, remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  6. Impact of Geocoding Methods on Associations between Long-term Exposure to Urban Air Pollution and Lung Function

    PubMed Central

    Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy

    2013-01-01

    Background: Errors in address geocodes may affect estimates of the effects of air pollution on health. Objective: We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. Methods: We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant’s address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Results: Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: –6.88, –0.56) and a 3.86-point decrease in FEV1% predicted (95% CI: –3.24, –0.14). The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted –2.81 (95% CI: –5.35, –0.26) using NavTEQ, or –2.08 (95% CI: –4.63, 0.47; p = 0.11) using Google Maps]. Conclusions: Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model. Citation: Jacquemin B, Lepeule J, Boudier A, Arnould C, Benmerad M, Chappaz C, Ferran J, Kauffmann F, Morelli X, Pin I, Pison C, Rios I, Temam S, Künzli N, Slama R, Siroux V. 2013. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function. Environ Health Perspect 121:1054–1060; http://dx.doi.org/10.1289/ehp.1206016 PMID:23823697

  7. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
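    The survey's evaluation protocol, scale down, scale back up with the same method, and compare with the original, is easy to reproduce (a sketch; scipy's spline orders 0, 1 and 3 stand in for nearest-neighbour, bilinear and bicubic, and the 8-bit PSNR range is an assumption).

        import numpy as np
        from scipy import ndimage

        def roundtrip_psnr(img, factor=0.25, order=3):
            """Downscale then upscale `img` with the same spline order and return
            PSNR (dB) against the original, assuming 8-bit intensities."""
            small = ndimage.zoom(img, factor, order=order)
            back = ndimage.zoom(small, 1.0 / factor, order=order)
            h = min(img.shape[0], back.shape[0])   # guard rounding mismatches
            w = min(img.shape[1], back.shape[1])
            mse = np.mean((img[:h, :w].astype(float) - back[:h, :w]) ** 2)
            return 10.0 * np.log10(255.0 ** 2 / mse)

        # e.g. for order in (0, 1, 3): print(order, roundtrip_psnr(img, order=order))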

  8. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.

  9. Analysis of the numerical differentiation formulas of functions with large gradients

    NASA Astrophysics Data System (ADS)

    Tikhovskaya, S. V.

    2017-10-01

    The solution of a singularly perturbed problem corresponds to a function with large gradients, so the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or construct, on a uniform mesh, interpolation formulas that are exact on the boundary-layer components. In this paper, numerical differentiation formulas for functions with large gradients, based on the interpolation formulas on the uniform mesh proposed by A.I. Zadorin, are investigated. Formulas for the first and second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.
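    For reference, the classical uniform-mesh formulas at issue are the central differences (a standard statement of the problem, not taken from the paper):

        \[
        f'(x_i) \approx \frac{f_{i+1} - f_{i-1}}{2h}, \qquad
        f''(x_i) \approx \frac{f_{i+1} - 2 f_i + f_{i-1}}{h^2},
        \]

    with truncation errors \(\frac{h^2}{6} f'''(\xi)\) and \(\frac{h^2}{12} f^{(4)}(\xi)\). For a boundary-layer component such as \(u(x) = e^{-x/\varepsilon}\), the third and fourth derivatives grow like \(\varepsilon^{-3}\) and \(\varepsilon^{-4}\), so these error bounds are not uniform in the perturbation parameter \(\varepsilon\); this is precisely the failure that the fitted formulas are designed to avoid.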

  10. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.

  11. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access the interpolatable information needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking-triangle search for the containing triangle, and finally the NNI interpolation.

  12. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

    In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, and humidity, vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary in characterizing and predicting hydrologic response to these inputs. This is always true, but it is most critical during large storms, when input to the stream system from rain and snowmelt creates the potential for flooding. Beyond such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement on the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing so to support an accurate determination of precipitation phase for assessing catchment hydrologic response to the storm.
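    The detrending idea can be sketched as follows (a minimal simple-kriging sketch; the study's actual variogram model, parameters, and ordinary-kriging formulation are not reproduced, and the exponential covariance below is an assumption).

        import numpy as np

        def detrended_kriging(xy, elev, vals, xy_q, elev_q,
                              corr_len=30000.0, sill=1.0, nugget=0.1):
            """Elevationally detrended kriging at one query point: remove a linear
            value-elevation trend, krige the residuals with an assumed exponential
            covariance, and add the trend back at the query elevation."""
            A = np.column_stack([np.ones_like(elev), elev])
            beta, *_ = np.linalg.lstsq(A, vals, rcond=None)   # linear elevation trend
            resid = vals - A @ beta
            D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            C = sill * np.exp(-D / corr_len) + nugget * np.eye(len(vals))
            c0 = sill * np.exp(-np.linalg.norm(xy - xy_q, axis=1) / corr_len)
            w = np.linalg.solve(C, c0)                        # simple-kriging weights
            return beta[0] + beta[1] * elev_q + w @ resid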

  13. Measurement and modeling of particulate matter concentrations: Applying spatial analysis and regression techniques to assess air quality.

    PubMed

    Sajjadi, Seyed Ali; Zolfaghari, Ghasem; Adab, Hamed; Allahabadi, Ahmad; Delsouz, Mehri

    2017-01-01

    This paper presents the levels of PM2.5 and PM10 at different stations in the city of Sabzevar, Iran, and evaluates spatial interpolation methods for determining PM2.5 and PM10 concentrations across the city. Particulate matter was measured with a Haz-Dust EPAM at 48 stations. Four interpolation models, Radial Basis Functions (RBF), Inverse Distance Weighting (IDW), Ordinary Kriging (OK), and Universal Kriging (UK), were then used to investigate the status of air pollution in the city. Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) were employed to compare the four models. The results showed that PM2.5 concentrations at the stations were between 10 and 500 μg/m³, and PM10 concentrations for all 48 stations ranged from 20 to 1500 μg/m³. The concentrations obtained over the nine-month period exceeded the standard limits. There were differences in the values of MAPE, RMSE, MBE, and MAE across the methods; the MAPE of the IDW method was lower than that of the other methods (41.05 for PM2.5 and 25.89 for PM10). The best interpolation method for particulate matter (PM2.5 and PM10) therefore appears to be IDW. •PM10 and PM2.5 concentrations were measured in 2016 during the warm season, the period of highest particulate-matter risk.•PM2.5 and PM10 were measured with an environmental dust monitor, the Haz-Dust EPAM 5000.•Interpolation converts data from observation points to continuous fields, allowing the spatial patterns sampled by these measurements to be compared with the spatial patterns of other spatial entities.
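    IDW itself, the method that performed best here, fits in a few lines (a sketch; the power parameter 2 is the common default, not necessarily the study's setting).

        import numpy as np

        def idw(xy_obs, vals, xy_query, power=2.0, eps=1e-12):
            """Inverse Distance Weighting: each query point receives the average
            of the observations weighted by 1 / distance**power."""
            d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=-1)
            w = 1.0 / (d ** power + eps)           # eps guards collocated points
            return (w @ vals) / w.sum(axis=1)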

  14. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  15. Validation and Improvement of SRTM Performance over Rugged Terrain

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A.

    2004-01-01

    We have previously reported work related to basic technique development in phase unwrapping and generation of digital elevation models (DEMs). In the final year of this work we applied our techniques to the improvement of DEMs produced by SRTM. In particular, we developed a rigorous mathematical algorithm and means to fill in missing data over rough terrain from other data sets. We illustrate this method by using a higher resolution, but globally less accurate, DEM produced by the TOPSAR airborne instrument over the Galapagos Islands to augment the SRTM data set in this area. We combine this data set with SRTM so that each set fills in holes left by the other imaging system. The infilling is done by first interpolating each data set using a prediction error filter that reproduces, within the interpolated region, the same statistical characterization exhibited by the entire data set. After this procedure is applied to each data set, the two are combined on a point-by-point basis with weights that reflect the accuracy of each data point in its original image. In areas that are better covered by SRTM, TOPSAR data are weighted down but still retain TOPSAR statistics; the reverse is true for regions better covered by TOPSAR. The resulting DEM passes statistical tests and appears plausible to the eye, but as this DEM is the best available for the region we cannot fully verify its accuracy. Spot checks with GPS points show that locally the technique results in a more comprehensive and accurate map than either data set alone.

  16. Discovery of a ~5 Day Characteristic Timescale in the Kepler Power Spectrum of Zw 229-15

    NASA Technical Reports Server (NTRS)

    Edelson, R.; Vaughan, S.; Malkan, M.; Kelly, B. C.; Smith, K. L.; Boyd, P. T.; Mushotzky, R.

    2014-01-01

    We present time series analyses of the full Kepler dataset of Zw 229-15. This Kepler light curve, with a baseline greater than three years and composed of virtually continuous, evenly sampled 30-minute measurements, is unprecedented in its quality and precision. We utilize two methods of power spectral analysis to investigate the optical variability and search for evidence of a bend frequency associated with a characteristic optical variability timescale. Each method yields similar results. The first interpolates across data gaps to use the standard Fourier periodogram. The second, using the CARMA-based time-domain modeling technique of Kelly et al., does not need evenly sampled data. Both methods find excess power at high frequencies that may be due to Kepler instrumental effects. More importantly, both also show strong bends (Δα ≈ 2) at timescales of approximately 5 days, a feature similar to those seen in the X-ray PSDs of AGN but never before in the optical. This observed ~5 day timescale may be associated with one of several physical processes potentially responsible for the variability. A plausible association could be made with light-crossing, dynamical or thermal timescales, depending on the assumed value of the accretion disk size and on unobserved disk parameters such as alpha and H/R. This timescale is not consistent with the viscous timescale, which would be years in a ~10^7 solar mass AGN such as Zw 229-15. However, there must be a second bend on long (≳1 year) timescales, and that feature could be associated with the viscous timescale.

  17. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.

  18. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
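    The transfinite-interpolation step at the heart of such algebraic grid generators is compact enough to sketch: a bilinearly blended Coons patch interpolates four boundary curves exactly (a minimal sketch without GRID2D/3D's stretching functions or continuity controls).

        import numpy as np

        def coons_patch(bottom, top, left, right):
            """Transfinite interpolation (bilinearly blended Coons patch): build an
            (m, n, 2) grid from four boundary curves. `bottom`/`top` are (n, 2)
            arrays, `left`/`right` are (m, 2); corners must match where curves meet."""
            m, n = left.shape[0], bottom.shape[0]
            u = np.linspace(0.0, 1.0, n)[None, :, None]   # parameter along bottom/top
            v = np.linspace(0.0, 1.0, m)[:, None, None]   # parameter along left/right
            ruled_uv = (1 - v) * bottom[None, :, :] + v * top[None, :, :]
            ruled_vu = (1 - u) * left[:, None, :] + u * right[:, None, :]
            corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                       + (1 - u) * v * top[0] + u * v * top[-1])
            # TFI: sum of the two ruled surfaces minus the bilinear corner term
            return ruled_uv + ruled_vu - corners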

  19. Interpolation of Water Quality Along Stream Networks from Synoptic Data

    NASA Astrophysics Data System (ADS)

    Lyon, S. W.; Seibert, J.; Lembo, A. J.; Walter, M. T.; Gburek, W. J.; Thongs, D.; Schneiderman, E.; Steenhuis, T. S.

    2005-12-01

    Effective catchment management requires water quality monitoring that identifies major pollutant sources and transport and transformation processes. While traditional monitoring schemes involve regular sampling at fixed locations in the stream, there is interest in synoptic or 'snapshot' sampling to quantify water quality throughout a catchment. This type of sampling enables insights into biogeochemical behavior throughout a stream network at low flow conditions, and since baseflow concentrations are temporally persistent, they are indicative of the health of the ecosystem. A major problem with snapshot sampling is the lack of analytical techniques to represent the spatially distributed data in a manner that is 1) easily understood, 2) representative of the stream network, and 3) capable of being used to develop land management scenarios. This study presents a kriging application that uses the landscape composition of the contributing area along a stream network to define a new distance metric, which allows locations that are more 'similar' to stay spatially close together while less similar locations 'move' further apart. We analyze a snapshot sampling campaign consisting of 125 manually collected grab samples taken during a summer recession flow period in the Townbrook Research Watershed, which is located in the Catskill region of New York State and represents the mixed forest-agriculture land uses of the region. Our initial analysis indicated that stream nutrient (nitrogen and phosphorus) and chemical (major cation and anion) concentrations are controlled by the composition of the landscape characteristics (land-use classes and soil types) surrounding the stream. Based on these relationships, an intuitively defined distance metric is developed by combining the traditional distance between observations with the relative difference in composition of the contributing area. This metric is used to interpolate between the sampling locations with traditional geostatistical techniques (semivariograms and ordinary kriging). The resulting interpolations provide continuous stream nutrient and chemical concentrations with lower kriging RMSE (i.e., the interpolation fits the actual data better) than interpolation performed without path restriction to the stream channel (the current default for most geostatistical packages) or with an in-channel, Euclidean distance metric (i.e., 'as the fish swims' distance). In addition to being quantifiably better, the new metric also produces maps of stream concentrations that match expected continuous stream concentrations based on expert knowledge of the watershed. This analysis and its resulting stream concentration maps provide a representation of spatially distributed synoptic data that can be used to quantify water quality for more effective catchment management focused on pollutant sources and transport and transformation processes.

  20. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly: for n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. First, determining the proper neighborhood size is tricky; it is usually handled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous: Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders, so at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering; the underlying difficulty is that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach; in particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method and showed that, if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
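    A toy version of the tapered ordinary-kriging solve can be written with SciPy (a sketch: the exponential covariance and truncated-polynomial taper are assumptions, the matrix is assembled densely for clarity rather than sparsely, and MINRES, a related Krylov method for symmetric indefinite systems, stands in for SYMMLQ, which SciPy does not provide).

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import minres

        def ok_taper_predict(coords, vals, query, corr_len=1.0, taper_len=2.0):
            """Ordinary kriging prediction with covariance tapering (toy sketch)."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            K = np.exp(-d / corr_len) * np.clip(1 - d / taper_len, 0, None) ** 2
            n = len(vals)
            A = np.zeros((n + 1, n + 1))            # symmetric, indefinite OK system
            A[:n, :n] = K
            A[:n, n] = A[n, :n] = 1.0               # unbiasedness constraint row/column
            d0 = np.linalg.norm(coords - query, axis=1)
            b = np.append(np.exp(-d0 / corr_len) * np.clip(1 - d0 / taper_len, 0, None) ** 2, 1.0)
            weights, info = minres(csr_matrix(A), b)   # info == 0 on convergence
            return weights[:n] @ vals               # kriging weights times data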

  1. Application of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.

    1990-01-01

    A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.

  2. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  3. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    NASA Astrophysics Data System (ADS)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation, so the occurrence of outliers requires special attention: it is important to choose a suitable treatment so that the quality of the analyzed data is high. This paper discusses an alternative method that treats outliers via linear interpolation. Treating an outlier as a missing value in the dataset allows the interpolation method to fill it in, enabling a comparison of forecast accuracy on the data series before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linearly interpolated time series yielded better forecasts than the original time series under both Box-Jenkins and neural network approaches.
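    The outlier-as-missing-value idea can be sketched with pandas (the z-score flagging rule and threshold below are assumptions for illustration; the paper's own outlier detection is not reproduced).

        import numpy as np
        import pandas as pd

        def treat_outliers(series, z_thresh=3.0):
            """Replace flagged outliers with NaN and fill them by linear
            interpolation in time."""
            z = (series - series.mean()) / series.std()
            cleaned = series.mask(z.abs() > z_thresh)   # outliers become NaN
            return cleaned.interpolate(method="linear")

        # arrivals = pd.Series(..., index=pd.date_range("1998-01", periods=216, freq="MS"))
        # arrivals_clean = treat_outliers(arrivals)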

  4. Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.

    PubMed

    Prochazka, Ivan; Panek, Petr

    2009-07-01

    A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present experiments and an analysis of the nonlinear effects in such a time interpolator. The analysis shows that nonlinear distortion in the time interpolator circuits causes a deterministic measurement error, which can be understood as the time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series, and thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.

  5. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  6. Integration of Geophysical Data into Structural Geological Modelling through Bayesian Networks

    NASA Astrophysics Data System (ADS)

    de la Varga, Miguel; Wellmann, Florian; Murdie, Ruth

    2016-04-01

    Structural geological models are widely used to represent the spatial distribution of relevant geological features. Several techniques exist to construct these models on the basis of different assumptions and different types of geological observations (e.g. Jessell et al., 2014). However, two problems are prevalent when constructing models: (i) observations and assumptions, and therefore also the constructed model, are subject to uncertainties, and (ii) additional information, such as geophysical data, is often available but cannot be considered directly in the geological modelling step. In our work, we propose the integration of all available data into a Bayesian network, including the generation of the implicit geological model by means of interpolation functions (Mallet, 1992; Lajaunie et al., 1997; Mallet, 2004; Carr et al., 2001; Hillier et al., 2014). As a result, we are able to increase the certainty of the resultant models as well as potentially learn features of our regional geology through data mining and information theory techniques. MCMC methods are used in order to optimize computational time and assure the validity of the results. Here, we apply the aforementioned concepts in a 3-D model of the Sandstone Greenstone Belt in the Archean Yilgarn Craton in Western Australia. The example given defines the uncertainty in the thickness of the greenstone as constrained by the Bouguer anomaly, and the internal structure of the greenstone as constrained by the magnetic signature of a banded iron formation. The incorporation of the additional data, and especially the gravity data, provides an important reduction of the possible outcomes and therefore of the overall uncertainty. References: Carr, J. C., R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans, 2001, Reconstruction and representation of 3D objects with radial basis functions: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, 67-76. Jessell, M., L. Aillères, E. de Kemp, M. Lindsay, F. Wellmann, M. Hillier, ... and R. Martin, 2014, Next generation three-dimensional geologic modeling and inversion. Lajaunie, C., G. Courrioux, and L. Manuel, 1997, Foliation fields and 3D cartography in geology: principles of a method based on potential interpolation: Mathematical Geology, 29, 571-584. Mallet, J.-L., 1992, Discrete smooth interpolation in geometric modelling: Computer-Aided Design, 24, 178-191. Mallet, J.-L., 2004, Space-time mathematical framework for sedimentary geology: Mathematical Geology, 36, 1-32.

  7. A novel iterative modified bicubic interpolation method enables high-contrast and high-resolution image generation for F-18 FDG-PET.

    PubMed

    Okizaki, Atsutaka; Nakayama, Michihiro; Nakajima, Kaori; Takahashi, Koji

    2017-12-01

    Positron emission tomography (PET) has become a useful and important technique in oncology. However, the spatial resolution of PET is not high; therefore, small abnormalities can sometimes be overlooked with PET. To address this problem, we devised a novel algorithm, the iterative modified bicubic interpolation method (IMBIM), which generates high-resolution, high-contrast images. The purpose of this study was to investigate the utility of IMBIM for clinical FDG positron emission tomography/X-ray computed tomography (PET/CT) imaging. We evaluated PET images from 1435 patients with malignant tumors and compared the contrast (uptake ratio of abnormal lesions to background) in high-resolution images produced with the standard bicubic interpolation method (SBIM) and with IMBIM. In addition to the contrast analysis, 340 of the 1435 patients were selected for visual evaluation by nuclear medicine physicians to investigate lesion detectability; abnormal uptakes on the images were categorized as either absolutely abnormal or equivocal findings. The average contrast with IMBIM was significantly higher than that with SBIM (P < .001), and the improvements were prominent with large matrix sizes and small lesions. SBIM images showed abnormalities in 198 of 340 lesions (58.2%), while IMBIM indicated abnormalities in 312 (91.8%), a statistically significant improvement in lesion detectability (P < .001). In conclusion, IMBIM generates high-resolution images with improved contrast and, therefore, may facilitate more accurate diagnoses in clinical practice. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  8. Comparison of global sst analyses for atmospheric data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phoebus, P.A.; Cummings, J.A.

    1995-03-17

    Traditionally, atmospheric models were executed using a climatological estimate of the sea surface temperature (SST) to define the marine boundary layer. More recently, particularly since the deployment of remote sensing instruments and the advent of multichannel SST observations, atmospheric models have been improved by using more timely estimates of the actual state of the ocean. Typically, some type of objective analysis is performed using the data from satellites along with ship, buoy, and bathythermograph observations, and perhaps even climatology, to produce a weekly or daily analysis of global SST. Some of the earlier efforts to produce real-time global temperature analyses have been described by Clancy and Pollak (1983) and Reynolds (1988). However, just as new techniques have been developed for atmospheric data assimilation, improvements have been made to ocean data assimilation systems as well. In 1988, the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC) implemented a global three-dimensional ocean temperature analysis that was based on the optimum interpolation methodology (Clancy et al., 1990). This system, the Optimum Thermal Interpolation System (OTIS 1.0), was initially distributed on a 2.5° resolution grid, and was later modified to generate fields on a 1.25° grid (OTIS 1.1; Clancy et al., 1992). Other optimum interpolation-based analyses (OTIS 3.0) were developed by FNMOC to perform high-resolution three-dimensional ocean thermal analyses in areas with strong frontal gradients and clearly defined water mass characteristics.
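    The optimum interpolation update underlying OTIS-style analyses can be sketched in a few lines: the analysis equals the background plus weighted innovations, with weights derived from the background- and observation-error covariances. The 1-D grid, covariance model and error values below are illustrative only.

    ```python
    # Minimal sketch of an optimum interpolation (OI) update:
    # analysis = background + K * (obs - background at obs points),
    # with a Gaussian background-error covariance. Numbers are illustrative.
    import numpy as np

    grid = np.linspace(0.0, 10.0, 51)        # analysis grid (e.g. degrees)
    background = 15.0 + 0.2 * grid           # first-guess SST field
    obs_loc = np.array([2.0, 5.0, 8.0])      # observation locations
    obs = np.array([16.1, 15.6, 17.2])       # observed SST
    L, sig_b, sig_o = 1.5, 0.8, 0.4          # corr. length, error std devs

    def gauss_cov(a, b, sig, L):
        return sig**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)

    B_go = gauss_cov(grid, obs_loc, sig_b, L)     # grid-obs covariance
    B_oo = gauss_cov(obs_loc, obs_loc, sig_b, L)  # obs-obs covariance
    R = sig_o**2 * np.eye(len(obs))               # obs-error covariance

    innovation = obs - np.interp(obs_loc, grid, background)
    weights = np.linalg.solve(B_oo + R, innovation)  # (B_oo + R)^-1 d
    analysis = background + B_go @ weights           # OI analysis field
    ```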

  9. Estimating changes in urban land and urban population using refined areal interpolation techniques

    NASA Astrophysics Data System (ADS)

    Zoraghein, Hamidreza; Leyk, Stefan

    2018-05-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. The U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones) within the tract boundaries of the 2010 census (target zones), to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL), as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
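    As a concrete illustration of the simplest of the methods named above, the following sketch applies plain Areal Weighting: each source zone's population is allocated to target zones in proportion to intersected area (TDW would instead weight by an ancillary density surface). The zone geometry is reduced to a hypothetical precomputed overlap-area matrix.

    ```python
    # Minimal sketch of Areal Weighting (AW): population of each source zone
    # is allocated to target zones in proportion to overlapping area. Zones
    # and counts are hypothetical; a real study would use census tracts.
    import numpy as np

    # overlap[i, j] = intersection area of source zone i with target zone j
    overlap = np.array([[4.0, 1.0, 0.0],
                        [0.5, 2.5, 2.0]])
    source_pop = np.array([5000.0, 3000.0])

    frac = overlap / overlap.sum(axis=1, keepdims=True)  # share of each source
    target_pop = source_pop @ frac                       # reallocated counts
    print(target_pop)  # population estimates for the three target zones
    ```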

  10. Changing Urbania: Estimating Changes in Urban Land and Urban Population Using Refined Areal Interpolation Techniques

    NASA Astrophysics Data System (ADS)

    Zoraghein, H.; Leyk, S.; Balk, D.

    2017-12-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. The U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones) within the tract boundaries of the 2010 census (target zones), to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL), as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.

  11. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, Stuart R.

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features, and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
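    One of the two mesh-free approaches compared above, interpolation with a compactly supported radial basis function, can be sketched with the Wendland C2 kernel; because the kernel vanishes beyond its support radius, the interpolation matrix is sparse in practice (kept dense here for brevity). The 1-D point clouds and support radius are hypothetical.

    ```python
    # Minimal sketch of mesh-free data transfer with a compactly supported
    # Wendland C2 radial basis function. Source/target point clouds are
    # hypothetical 1-D stand-ins for surface meshes.
    import numpy as np

    def wendland_c2(r, radius):
        q = np.clip(r / radius, 0.0, 1.0)
        return (1.0 - q) ** 4 * (4.0 * q + 1.0)  # zero outside the support

    src = np.linspace(0.0, 1.0, 30)              # source mesh points
    f_src = np.sin(2.0 * np.pi * src)            # field to transfer
    tgt = np.random.default_rng(1).random(50)    # target mesh points
    radius = 0.2                                 # support radius

    # Solve for RBF coefficients on the source cloud (sparse in practice,
    # since the kernel vanishes beyond the support radius).
    A = wendland_c2(np.abs(src[:, None] - src[None, :]), radius)
    coef = np.linalg.solve(A, f_src)

    # Evaluate at target points: only nearby source points contribute.
    B = wendland_c2(np.abs(tgt[:, None] - src[None, :]), radius)
    f_tgt = B @ coef
    ```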

  12. Cosmic microwave background constraints for global strings and global monopoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Lizarraga, Joanes; Urrestilla, Jon

    We present the first cosmic microwave background (CMB) power spectra from numerical simulations of the global O(N) linear σ-model, with N = 2, 3, which have global strings and monopoles as topological defects. In order to compute the CMB power spectra we compute the unequal time correlators (UETCs) of the energy-momentum tensor, showing that they fall off at high wave number faster than naive estimates based on the geometry of the defects, indicating non-trivial (anti-)correlations between the defects and the surrounding Goldstone boson field. We obtain source functions for Einstein-Boltzmann solvers from the UETCs, using a recently developed method that improves the modelling at the radiation-matter transition. We show that the interpolation function that mimics the transition is similar to other defect models, but not identical, confirming the non-universality of the interpolation function. The CMB power spectra for global strings and global monopoles have the same overall shape as those obtained using the non-linear σ-model approximation, which is well captured by a large-N calculation. However, the amplitudes are larger than the large-N calculation would naively predict, and in the case of global strings much larger: a factor of 20 at the peak. Finally, we compare the CMB power spectra with the latest CMB data in order to put limits on the allowed contribution to the temperature power spectrum at multipole l = 10 of 1.7% for global strings and 2.4% for global monopoles. These limits correspond to symmetry-breaking scales of 2.9×10^15 GeV (6.3×10^14 GeV with the expected logarithmic scaling of the effective string tension between the simulation time and decoupling) and 6.4×10^15 GeV, respectively. The bound on global strings is a significant one for the ultra-light axion scenario with axion masses m_a ≲ 10^-28 eV. These upper limits indicate that gravitational waves from global topological defects will not be observable at the gravitational wave observatory LISA.

  13. Mass and power modeling of communication satellites

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Pidgeon, David; Tsao, Alex

    1991-01-01

    Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence their selection and use. Guidelines are also given for choosing among alternative technologies, accounting for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for application to first order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified to a 'smoothed' representation of reality. Therefore, the user is cautioned that cost, schedule, and risk may be significantly impacted where interpolated designs are sufficiently different from existing hardware as to warrant development of new devices.

  14. The Development of a Degree 360 Expansion of the Dynamic Ocean Topography of the POCM_4B Global Circulation Model

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1998-01-01

    This paper documents the development of a degree 360 expansion of the dynamic ocean topography (DOT) of the POCM_4B ocean circulation model. The principles and software that led to the final model are described. A key principle was the development of interpolated DOT values into land areas to avoid discontinuities at or near the land/ocean interface. The power spectrum of the POCM_4B model is also presented, with comparisons made between orthonormal (ON) and spherical harmonic (SH) magnitudes to degree 24. A merged file of ON and SH computed degree variances is proposed for applications where the DOT power spectrum from low to high (360) degrees is needed.
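    The degree variances referred to above are, for a spherical harmonic expansion, simply the power summed per degree. A minimal sketch with placeholder coefficient arrays (random values, not the POCM_4B coefficients):

    ```python
    # Minimal sketch of degree-variance (power-per-degree) computation for a
    # spherical harmonic expansion such as the degree 360 DOT model; the
    # coefficient arrays C[l, m], S[l, m] here are random placeholders.
    import numpy as np

    lmax = 360
    rng = np.random.default_rng(0)
    C = rng.normal(size=(lmax + 1, lmax + 1))  # cosine coefficients C_lm
    S = rng.normal(size=(lmax + 1, lmax + 1))  # sine coefficients S_lm

    degree_variance = np.array([
        (C[l, : l + 1] ** 2 + S[l, : l + 1] ** 2).sum()
        for l in range(lmax + 1)
    ])  # sigma_l^2 = sum_m (C_lm^2 + S_lm^2), the power spectrum by degree
    ```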

  15. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

    Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features of the spatial distribution of temperature values across the whole of Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the largest lakes and rivers, and population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality were chosen. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
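    A minimal sketch of kriging with external drift, the method adopted in this study: the ordinary kriging system is extended with an unbiasedness constraint for the drift variable (elevation here). The station coordinates, elevations, temperatures and covariance model are hypothetical.

    ```python
    # Minimal sketch of kriging with external drift (KED): monthly mean
    # temperature at stations, with elevation as the drift variable.
    import numpy as np

    x = np.array([0.0, 10.0, 25.0, 40.0])        # station coordinates (km)
    elev = np.array([10.0, 120.0, 80.0, 200.0])  # station elevations (m)
    temp = np.array([6.1, 5.2, 5.7, 4.6])        # observed monthly means

    def cov(h, sill=1.0, rng_=30.0):             # exponential covariance model
        return sill * np.exp(-np.abs(h) / rng_)

    n = len(x)
    # KED system: covariances plus unbiasedness constraints for the
    # constant mean and for the external drift (elevation).
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = cov(x[:, None] - x[None, :])
    A[:n, n], A[n, :n] = 1.0, 1.0                # constant-mean constraint
    A[:n, n + 1], A[n + 1, :n] = elev, elev      # external-drift constraint

    x0, elev0 = 18.0, 150.0                      # prediction point and drift
    b = np.concatenate([cov(x - x0), [1.0, elev0]])
    w = np.linalg.solve(A, b)[:n]                # kriging weights
    print("predicted temperature:", w @ temp)
    ```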

  16. Resurgence and hydrodynamic attractors in Gauss-Bonnet holography

    NASA Astrophysics Data System (ADS)

    Casalderrey-Solana, Jorge; Gushterov, Nikola I.; Meiring, Ben

    2018-04-01

    We study the convergence of the hydrodynamic series in the gravity dual of Gauss-Bonnet gravity in five dimensions with negative cosmological constant via holography. By imposing boost invariance symmetry, we find a solution to the Gauss-Bonnet equation of motion in inverse powers of the proper time, from which we can extract high order corrections to Bjorken flow for different values of the Gauss-Bonnet parameter λGB. As in all other known examples, the gradient expansion is at most an asymptotic series, which can be understood by applying the techniques of Borel-Padé summation. As expected from the behaviour of the quasi-normal modes in the theory, we observe that the singularities in the Borel plane of this series show qualitative features that interpolate between the infinitely strong coupling limit of N=4 Super Yang Mills theory and the expectation from kinetic theory. We further perform the Borel resummation to constrain the behaviour of hydrodynamic attractors beyond leading order in the hydrodynamic expansion. We find that for all values of λGB considered, the convergence of different initial conditions to the resummation and its hydrodynamization occur at large and comparable values of the pressure anisotropy.
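    The Borel-Padé step can be sketched generically: divide the series coefficients by n!, Padé-approximate the resulting Borel transform, and Laplace-transform back numerically. The toy coefficients a_n = (-1)^n n!/(n+1), whose Borel transform is ln(1+t)/t, stand in for actual hydrodynamic gradient-expansion coefficients.

    ```python
    # Minimal sketch of Borel-Pade summation of a divergent series. The toy
    # coefficients below are only a stand-in for hydrodynamic coefficients.
    import numpy as np
    from math import factorial
    from scipy.integrate import quad
    from scipy.interpolate import pade

    N = 8
    a = np.array([(-1.0) ** n * factorial(n) / (n + 1) for n in range(N)])
    borel = np.array([a[n] / factorial(n) for n in range(N)])  # Borel transform
    p, q = pade(borel, 3)                  # [4/3] Pade approximant

    def borel_sum(x):
        # f(x) ~ int_0^inf e^{-t} P(x t)/Q(x t) dt; valid as long as the
        # approximant has no poles on the positive real axis (here the poles
        # line up along the branch cut on the negative axis).
        val, _ = quad(lambda t: np.exp(-t) * p(x * t) / q(x * t), 0.0, np.inf)
        return val

    print(borel_sum(0.2))                  # resummed value of the toy series
    ```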

  17. Geomagnetic cutoffs: A review for space dosimetry applications

    NASA Astrophysics Data System (ADS)

    Smart, D. F.; Shea, M. A.

    1994-10-01

    The earth's magnetic field acts as a shield against charged particle radiation from interplanetary space, technically described as the geomagnetic cutoff. The cutoff rigidity problem (except for the dipole special case) has 'no solution in closed form'. The dipole case yields the Stormer equation, which has been repeatedly applied to the earth in hopes of providing useful approximations of cutoff rigidities. Unfortunately, the earth's magnetic field has significant deviations from dipole geometry, and the Stormer cutoffs are not adequate for most applications. By applying massive digital computing power it is possible to determine realistic geomagnetic cutoffs derived from high order simulation of the geomagnetic field. Using this technique, 'world-grids' of directional cutoffs for the earth's surface and for a limited number of satellite altitudes have been derived. However, this approach is so expensive and time-consuming that it is impractical for most spacecraft orbits, and approximations must be used. The world grids of cutoff rigidities are extensively used as lookup tables, normalization points and interpolation aids to estimate the effective geomagnetic cutoff rigidity of a specific location in space. We review the various options for estimating the cutoff rigidity for earth-orbiting satellites.
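    For reference, the vertical-incidence Stormer cutoff reduces to a simple closed form, R_c ≈ 14.9 cos^4(λ)/r^2 GV (from a Stormer constant of about 59.6 GV, with λ the geomagnetic latitude and r the geocentric distance in Earth radii), which, as the review notes, is only a dipole approximation:

    ```python
    # Minimal sketch of the vertical Stormer cutoff approximation,
    # R_c ~ 14.9 cos^4(lambda) / r^2 (GV). As noted above, this dipole
    # formula is inadequate for most real-field applications.
    import numpy as np

    def stormer_vertical_cutoff(lat_deg, r_earth_radii=1.0):
        lam = np.radians(lat_deg)
        return 14.9 * np.cos(lam) ** 4 / r_earth_radii ** 2

    for lat in (0.0, 30.0, 60.0):
        print(f"{lat:5.1f} deg -> {stormer_vertical_cutoff(lat):5.2f} GV")
    ```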

  18. The Effect of Birthrate Granularity on the Release-to-Birth Ratio for the AGR-1 In-core Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawn Scates; John Walter

    The AGR-1 Advanced Gas Reactor (AGR) tristructural-isotropic-particle fuel experiment underwent 13 irradiation intervals from December 2006 until November 2009 within the Idaho National Laboratory Advanced Test Reactor in support of the Next Generation Nuclear Power Plant program. During this multi-year experiment, release-to-birth rate ratios were computed at the end of each operating interval to provide information about fuel performance. Fission products released during irradiation were tracked daily by the Fission Product Monitoring System using 8-hour measurements. Birth rates calculated by MCNP with ORIGEN for as-run conditions were computed at the end of each irradiation interval. Each time step in MCNP provided neutron flux, reaction rates and AGR-1 compact composition, which were used to determine birth rates using ORIGEN. The initial birth-rate data, consisting of four values for each irradiation interval at the beginning, end, and two intermediate times, were interpolated to obtain values for each 8-hour activity. The problem with this method is that any daily changes in heat rates or perturbations, such as shim control movement or core/lobe power fluctuations, would not be reflected in the interpolated data and a true picture of the system would not be presented. At the conclusion of the AGR-1 experiment, great efforts were put forth to compute daily birthrates, which were reprocessed with the 8-hour release activity. The results of this study are presented in this paper.
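    The coarse interpolation step criticized above can be reproduced in miniature: four birth-rate anchors per irradiation interval are linearly stretched onto the 8-hour measurement grid, so any daily power perturbation between anchors is invisible. All times and rates below are made-up illustrations.

    ```python
    # Minimal sketch of the coarse interpolation step described above: four
    # birth-rate values per irradiation interval (begin, two interior
    # points, end) stretched onto the 8-hour measurement grid.
    import numpy as np

    t_anchor = np.array([0.0, 20.0, 40.0, 60.0])    # days into the interval
    birth = np.array([3.1e9, 3.4e9, 3.3e9, 2.9e9])  # rates from MCNP/ORIGEN

    t_8h = np.arange(0.0, 60.0 + 1e-9, 8.0 / 24.0)  # one point per 8 hours
    birth_8h = np.interp(t_8h, t_anchor, birth)     # linear interpolation

    # Daily power perturbations fall between the four anchors and are thus
    # invisible here -- the motivation for recomputing daily birthrates.
    ```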

  19. The effect of birthrate granularity on the release-to-birth ratio for the AGR-1 in-core experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. M. Scates; J. B. Walter; J. T. Maki

    The AGR-1 Advanced Gas Reactor (AGR) tristructural-isotropic-particle fuel experiment underwent 13 irradiation intervals from December 2006 until November 2009 within the Idaho National Laboratory Advanced Test Reactor in support of the Next Generation Nuclear Power Plant program. During this multi-year experiment, release-to-birth rate ratios were computed at the end of each operating interval to provide information about fuel performance. Fission products released during irradiation were tracked daily by the Fission Product Monitoring System using 8-h measurements. Birth rates calculated by MCNP with ORIGEN for as-run conditions were computed at the end of each irradiation interval. Each time step in MCNP provided neutron flux, reaction rates and AGR-1 compact composition, which were used to determine birth rates using ORIGEN. The initial birth-rate data, consisting of four values for each irradiation interval at the beginning, end, and two intermediate times, were interpolated to obtain values for each 8-h activity. The problem with this method is that any daily changes in heat rates or perturbations, such as shim control movement or core/lobe power fluctuations, would not be reflected in the interpolated data and a true picture of the system would not be presented. At the conclusion of the AGR-1 experiment, great efforts were put forth to compute daily birthrates, which were reprocessed with the 8-h release activity. The results of this study are presented in this paper.

  20. Applications of Lagrangian blending functions for grid generation around airplane geometries

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.

    1990-01-01

    A simple procedure has been developed and applied for grid generation around an airplane geometry. The approach is based on transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.
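    A minimal sketch of transfinite interpolation with linear Lagrangian blending functions (a Coons patch), the core of the procedure described above; the four boundary curves here are simple hypothetical shapes rather than an airplane surface.

    ```python
    # Minimal sketch of 2-D transfinite interpolation with linear Lagrangian
    # blending functions: an interior grid is built from four boundary
    # curves. The boundary curves below are hypothetical.
    import numpy as np

    n = 21
    s = np.linspace(0.0, 1.0, n)
    bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)  # eta = 0
    top    = np.stack([s, 1.0 + 0.0 * s], axis=1)            # eta = 1
    left   = np.stack([0.0 * s, s], axis=1)                  # xi = 0
    right  = np.stack([1.0 + 0.0 * s, s], axis=1)            # xi = 1

    grid = np.zeros((n, n, 2))
    for i, xi in enumerate(s):
        for j, et in enumerate(s):
            # linear blending of the four edges minus the corner correction
            edge = ((1 - et) * bottom[i] + et * top[i]
                    + (1 - xi) * left[j] + xi * right[j])
            corner = ((1 - xi) * (1 - et) * bottom[0]
                      + xi * (1 - et) * bottom[-1]
                      + (1 - xi) * et * top[0]
                      + xi * et * top[-1])
            grid[i, j] = edge - corner
    ```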
