SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Yang, Wei; Chen, Jie; Zeng, Hong Cheng; Wang, Peng Bo; Liu, Wei
2016-01-01
Based on the terrain observation by progressive scans (TOPS) mode, an efficient full-aperture image formation algorithm for focusing wide-swath spaceborne TOPS data is proposed. First, to overcome the Doppler frequency spectrum aliasing caused by azimuth antenna steering, the range-independent derotation operation is adopted, and the signal properties after derotation are derived in detail. Then, the azimuth deramp operation is performed to resolve image folding in azimuth. The traditional deramp function will introduce a time shift, resulting in the appearance of ghost targets and azimuth resolution reduction at the scene edge, especially in the wide-swath coverage case. To avoid this, a novel solution is provided using a modified range-dependent deramp function combined with the chirp-z transform. Moreover, range scaling and azimuth scaling are performed to provide the same azimuth and range sampling interval for all sub-swaths, instead of the interpolation operation for the sub-swath image mosaic. Simulation results are provided to validate the proposed algorithm. PMID:27941706
5-D interpolation with wave-front attributes
NASA Astrophysics Data System (ADS)
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and the wave-front curvatures. These attributes encode structural information about subsurface features, such as the dip and strike of a reflector. The wave-front attributes operate in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. Two potential problems remained unsolved in past work on 3-D partial stacks. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. In this work the conventional 3-D partial CRS method is improved to address these two problems; we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given.
The comparison reveals that the 5-D WABI method offers significant advantages for steeply dipping events when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails benefit substantially from the improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.
Propagation-invariant beams with quantum pendulum spectra: from Bessel beams to Gaussian beam-beams.
Dennis, Mark R; Ring, James D
2013-09-01
We describe a new class of propagation-invariant light beams with Fourier transform given by an eigenfunction of the quantum mechanical pendulum. These beams, whose spectra (restricted to a circle) are doubly periodic Mathieu functions in azimuth, depend on a field strength parameter. When the parameter is zero, pendulum beams are Bessel beams, and as the parameter approaches infinity, they resemble transversely propagating one-dimensional Gaussian wave packets (Gaussian beam-beams). Pendulum beams are the eigenfunctions of an operator that interpolates between the squared angular momentum operator and the linear momentum operator. The analysis reveals connections with Mathieu beams and gives insight into the paraxial approximation.
NASA Astrophysics Data System (ADS)
Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu
2017-10-01
Elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a major research focus. To ensure migration accuracy, the wavefield must be separated into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational complexity, wave-mode separation in the mixed domain can be realized on the basis of a reference model in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a random-points scheme for reference-model selection. The method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference points, so the interpolation accounts for the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method separates the wave modes more accurately using fewer reference models and has better practical value.
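The conventional inverse-distance weighting (IDW) baseline that the paper's position-shading coefficient K modifies can be sketched as follows; the function name and the planar test data are illustrative, and the azimuth-dependent weight itself is not reproduced here:

```python
import numpy as np

def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Conventional IDW: weights decay with distance to the query.
    The paper augments these weights with an azimuth-aware factor K."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# three reference points sampled from the plane z = x + y
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0])
z = idw_interpolate(pts, vals, np.array([0.25, 0.25]))
```

Because all reference points enter with purely distance-based weights, clustered samples on one side of the query are over-counted; the azimuthal term in the paper is meant to correct exactly that.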
Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.
Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong
2017-11-01
Medical image three-dimensional (3D) interpolation is an important means of improving the image quality in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. The algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection. The characteristics of the wavelet transform and the Sobel operator are exploited to process the sub-images of the wavelet decomposition separately: the Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while the high-frequency sub-images are kept undistorted. The target interpolated image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.
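As a minimal illustration of the Sobel step applied to the low-frequency sub-images, a plain NumPy gradient-magnitude sketch (standard 3x3 kernels, synthetic data; the wavelet decomposition and matching stages are not shown):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)   # horizontal gradient
            gy[i, j] = np.sum(ky * patch)   # vertical gradient
    return np.hypot(gx, gy)

# vertical step edge: strongest response at the boundary columns
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

The edge map produced this way is what guides where matching interpolation is applied between slices.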
Retina-like sensor image coordinates transformation and display
NASA Astrophysics Data System (ADS)
Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu
2015-03-01
For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and the relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
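A rough sketch of the polar mapping and sub-pixel weighting described above, assuming a hypothetical uniform ring/sector pixel layout (the real sensor geometry and the MIL/OpenCV pipeline are not reproduced):

```python
import numpy as np

def cartesian_to_polar_index(x, y, n_rings, n_sectors, r_max):
    """Map a Cartesian point to fractional (ring, sector) coordinates
    of a hypothetical uniform retina-like pixel layout."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2 * np.pi)
    ring = r / r_max * n_rings              # fractional ring index
    sector = theta / (2 * np.pi) * n_sectors
    return ring, sector

def bilinear_weights(frac_ring, frac_sector):
    """Sub-pixel weights of the four surrounding polar pixels."""
    r0, s0 = np.floor(frac_ring), np.floor(frac_sector)
    dr, ds = frac_ring - r0, frac_sector - s0
    return {(int(r0), int(s0)): (1 - dr) * (1 - ds),
            (int(r0) + 1, int(s0)): dr * (1 - ds),
            (int(r0), int(s0) + 1): (1 - dr) * ds,
            (int(r0) + 1, int(s0) + 1): dr * ds}

ring, sector = cartesian_to_polar_index(1.0, 1.0, n_rings=32,
                                        n_sectors=64, r_max=4.0)
w = bilinear_weights(ring, sector)
```

The weights always sum to one, so the interpolated intensity is a convex combination of the four neighboring polar pixels.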
Pipelined digital SAR azimuth correlator using hybrid FFT-transversal filter
NASA Technical Reports Server (NTRS)
Wu, C.; Liu, K. Y. (Inventor)
1984-01-01
A synthetic aperture radar (SAR) system having a range correlator is provided with a hybrid azimuth correlator which utilizes a block-pipelined fast Fourier transform (FFT). The correlator has a predetermined FFT transform size with delay elements for delaying SAR range-correlated data so as to embed a corner-turning function in the Fourier transform operation as the range-correlated SAR data is converted from the time domain to the frequency domain. The azimuth correlator is comprised of a transversal filter to receive the SAR data in the frequency domain, a generator for range migration compensation and azimuth reference functions, and an azimuth reference multiplier for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT used to restore azimuth-correlated data in the frequency domain to the time domain for imaging.
SAR correlation technique - An algorithm for processing data with large range walk
NASA Technical Reports Server (NTRS)
Jin, M.; Wu, C.
1983-01-01
This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an extremely large range migration effect which cannot be accommodated by the existing frequency domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency domain range migration correction approach for the azimuth compression. The secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.
Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan
NASA Astrophysics Data System (ADS)
Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung
2010-08-01
Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes of magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox-transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate monthly precipitation in the monsoon periods during 1974-2000, using 27 years of monthly precipitation data from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method is more accurate than the non-transformed hierarchical Bayesian method.
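The Box-Cox step used on the skewed response can be sketched with SciPy; the lognormal "rainfall" sample below is synthetic, and the hierarchical Bayesian model itself is not shown:

```python
import numpy as np
from scipy import stats

# skewed synthetic "precipitation" sample (lognormal, mm/month)
rng = np.random.default_rng(0)
rain = rng.lognormal(mean=3.0, sigma=0.8, size=500)

# fit lambda by maximum likelihood, then transform toward normality
transformed, lam = stats.boxcox(rain)

def boxcox_inverse(y, lam):
    """Invert the Box-Cox transform; lam*y + 1 = x**lam > 0 always."""
    return np.power(lam * y + 1.0, 1.0 / lam) if lam != 0 else np.exp(y)

recovered = boxcox_inverse(transformed, lam)
```

Interpolation is done on the near-Gaussian transformed values, and predictions are mapped back through the inverse transform, which is the round trip verified above.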
Horizontal Contraction of Oceanic Lithosphere Tested Using Azimuths of Transform Faults
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Mishra, J. K.
2012-12-01
A central hypothesis or approximation of plate tectonics is that the plates are rigid, which implies that oceanic lithosphere does not contract horizontally as it cools (hereinafter "no contraction"). An alternative hypothesis is that vertically averaged tensional thermal stress in the competent lithosphere is fully relieved by horizontal thermal contraction (hereinafter "full contraction"). These two hypotheses predict different azimuths for transform faults. We build on prior predictions of horizontal thermal contraction of oceanic lithosphere as a function of age to predict the bias induced in transform-fault azimuths by full contraction for 140 azimuths of transform faults that are globally distributed between 15 plate pairs. Predicted bias increases with the length of adjacent segments of mid-ocean ridges and depends on whether the adjacent ridges are stepped, crenellated, or a combination of the two. All else being equal, the bias decreases with the length of a transform fault and modestly decreases with increasing spreading rate. The value of the bias varies along a transform fault. To correct the observed transform-fault azimuths for the biases, we average the predicted values over the insonified portions of each transform fault. We find the bias to be as large as 2.5°, but it is more typically ≤ 1.0°. We test whether correcting for the predicted biases improves the fit to plate motion data. To do so, we determine the sum-squared normalized misfit for various values of γ, which we define to be the fractional multiple of bias predicted for full contraction. γ = 1 corresponds to full contraction, while γ = 0 corresponds to no contraction. We find that the minimum in sum-squared normalized misfit is obtained for γ = 0.9 ± 0.4 (95% confidence limits), which excludes the hypothesis of no contraction, but is consistent with the hypothesis of full contraction.
Application of the correction reduces but does not eliminate the longstanding misfit between the azimuth of the Kane transform fault with respect to those of the other North America-Nubia transform faults. We conclude that significant ridge-parallel horizontal thermal contraction occurs in young oceanic lithosphere and that it is accommodated by widening of transform-fault valleys, which causes biases in transform-fault azimuths up to 2.5°.
NASA Astrophysics Data System (ADS)
Zhong, Hua; Zhang, Song; Hu, Jian; Sun, Minhong
2017-12-01
This paper deals with the imaging problem for one-stationary bistatic synthetic aperture radar (BiSAR) with high-squint, large-baseline configuration. In this bistatic configuration, accurate focusing of BiSAR data is a difficult issue due to the relatively large range cell migration (RCM), severe range-azimuth coupling, and inherent azimuth-geometric variance. To circumvent these issues, an enhanced azimuth nonlinear chirp scaling (NLCS) algorithm based on an ellipse model is investigated in this paper. In the range processing, a method combining deramp operation and keystone transform (KT) is adopted to remove linear RCM completely and mitigate range-azimuth cross-coupling. In the azimuth focusing, an ellipse model is established to analyze and depict the characteristic of azimuth-variant Doppler phase. Based on the new model, an enhanced azimuth NLCS algorithm is derived to focus one-stationary BiSAR data. Simulating results exhibited at the end of this paper validate the effectiveness of the proposed algorithm.
Development of variable-magnification X-ray Bragg optics.
Hirano, Keiichi; Yamashita, Yoshiki; Takahashi, Yumiko; Sugiyama, Hiroshi
2015-07-01
A novel X-ray Bragg optics is proposed for variable magnification of an X-ray beam. This X-ray Bragg optics is composed of two magnifiers in a crossed arrangement, and the magnification factor, M, is controlled through the azimuth angle of each magnifier. The basic properties of the X-ray optics, such as the magnification factor, image transformation matrix and intrinsic acceptance angle, are described based on the dynamical theory of X-ray diffraction. The feasibility of the variable-magnification X-ray Bragg optics was verified at the vertical-wiggler beamline BL-14B of the Photon Factory. For the X-ray Bragg magnifiers, Si(220) crystals with an asymmetric angle of 14° were used. The magnification factor was calculated to be tunable between 0.1 and 10.0 at a wavelength of 0.112 nm. At various magnification factors (M ≥ 1.0), X-ray images of a nylon mesh were observed with an air-cooled X-ray CCD camera. Image deformation caused by the optics could be corrected by using a 2 × 2 transformation matrix and a bilinear interpolation method. Not only absorption contrast but also edge contrast due to Fresnel diffraction was observed in the magnified images.
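The described correction, inverse mapping through a 2 × 2 matrix followed by bilinear interpolation, can be sketched as follows (toy image and matrix; the actual deformation matrix comes from the diffraction geometry):

```python
import numpy as np

def warp_image(img, A):
    """Resample img under the 2x2 transform A: each output pixel is
    mapped back to source coordinates via the inverse matrix and
    sampled with bilinear interpolation."""
    Ainv = np.linalg.inv(A)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            y, x = Ainv @ np.array([i, j], float)   # source coordinates
            i0, j0 = int(np.floor(y)), int(np.floor(x))
            if 0 <= i0 < h - 1 and 0 <= j0 < w - 1:
                dy, dx = y - i0, x - j0
                out[i, j] = ((1 - dy) * (1 - dx) * img[i0, j0]
                             + (1 - dy) * dx * img[i0, j0 + 1]
                             + dy * (1 - dx) * img[i0 + 1, j0]
                             + dy * dx * img[i0 + 1, j0 + 1])
    return out

# undo a 2x horizontal magnification: A stretches columns by 2
img = np.arange(16.0).reshape(4, 4)
A = np.array([[1.0, 0.0], [0.0, 2.0]])
corrected = warp_image(img, A)
```

Inverse mapping (output to source) avoids holes in the corrected image, which is why it is the standard choice for this kind of deformation correction.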
Delgutte, Bertrand
2015-01-01
At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
NASA Astrophysics Data System (ADS)
Marghany, Maged; Ibrahim, Zelina; Van Genderen, Johan
2002-11-01
The present work operationalizes the azimuth cut-off concept in the study of significant wave height. Three ERS-1 images acquired along the coastal waters of Terengganu, Malaysia have been used. The quasi-linear transform was applied to map the SAR wave spectra into real ocean wave spectra. The azimuth cut-off was then used to model the significant wave height. The results show that the azimuth cut-off varied between the different ERS-1 acquisition periods, because the azimuth cut-off is a function of wind speed and significant wave height. It is of interest that the significant wave height modeled from the azimuth cut-off agrees well with ground wave conditions. It can be concluded that ERS-1 can be used as a monitoring tool for detecting significant wave height variation, and that the azimuth cut-off can be used to model the significant wave height. This means that the quasi-linear transform is well suited to studying significant wave height variation across different seasons.
NASA Astrophysics Data System (ADS)
Wrona, Elizabeth; Rowlandson, Tracy L.; Nambiar, Manoj; Berg, Aaron A.; Colliander, Andreas; Marsh, Philip
2017-05-01
This study examines the Soil Moisture Active Passive soil moisture product on the Equal Area Scalable Earth-2 (EASE-2) 36 km Global cylindrical and North Polar azimuthal grids relative to two in situ soil moisture monitoring networks that were installed in 2015 and 2016. Results indicate that there is no relationship between the Soil Moisture Active Passive (SMAP) Level-2 passive soil moisture product and the upscaled in situ measurements. Additionally, there is very low correlation between modeled brightness temperature using the Community Microwave Emission Model and the Level-1C SMAP brightness temperature interpolated to the EASE-2 Global grid; however, there is a much stronger relationship to the brightness temperature measurements interpolated to the North Polar grid, suggesting that the soil moisture product could be improved with interpolation on the North Polar grid.
NASA Technical Reports Server (NTRS)
Staffanson, F. L.
1981-01-01
The FORTRAN computer program RAWINPROC accepts output from NASA Wallops computer program METPASS1 and produces input for NASA computer program 3.0.0700 (ECC-PRD). The three parts together form a software system for the completely automatic reduction of standard RAWINSONDE sounding data. RAWINPROC pre-edits the 0.1-second data, including time-of-day, azimuth, elevation, and sonde-modulated tone frequency; condenses the data according to successive dwells of the tone frequency; decommutates the condensed data into the proper channels (temperature, relative humidity, high and low references); determines the running baroswitch contact number and computes the associated pressure altitudes; and interpolates the data appropriate for input to ECC-PRD.
Stress direction history of the western United States and Mexico since 85 Ma
NASA Astrophysics Data System (ADS)
Bird, Peter
2002-06-01
A data set of 369 paleostress direction indicators (sets of dikes, veins, or fault slip vectors) is collected from previous compilations and the geologic literature. Like contemporary data, these stress directions show great variability, even over short distances. Therefore statistical methods are helpful in deciding which apparent variations in space or in time are significant. First, the interpolation technique of Bird and Li [1996] is used to interpolate stress directions to a grid of evenly spaced points in each of seventeen 5-m.y. time steps since 85 Ma. Then, a t test is used to search for stress direction changes between pairs of time windows whose sense can be determined with some minimum confidence. Available data cannot resolve local stress provinces, and only the broadest changes affecting country-sized regions are reasonably certain. During 85-50 Ma, the most compressive horizontal stress azimuth $\hat{\sigma}_{1H}$ was fairly constant at ~68° (United States) to 75° (Mexico). During 50-35 Ma, both counterclockwise stress changes (in the Pacific Northwest) and clockwise stress changes (from Nevada to New Mexico) are seen, but only locally and with about 50% confidence. A major stress azimuth change by ~90° occurred at 33 ± 2 Ma in Mexico and at 30 ± 2 Ma in the western United States. This was probably an interchange between $\hat{\sigma}_1$ and $\hat{\sigma}_3$ caused by a decrease in horizontal compression and/or an increase in vertical compression. The most likely cause was the rollback of horizontally subducting Farallon slab from under the southwestern United States and northwest Mexico, which was rapid during 35-25 Ma. After this transition, a clockwise rotation of principal stress axes by 36°-48° occurred more gradually since 22 Ma, affecting the region between latitudes 28°N and 41°N.
This occurred as the lengthening Pacific/North America transform boundary gradually added dextral shear on northwest striking planes to the previous stress field of SW-NE extension.
The rigid-plate and shrinking-plate hypotheses: Implications for the azimuths of transform faults
NASA Astrophysics Data System (ADS)
Mishra, Jay Kumar; Gordon, Richard G.
2016-08-01
The rigid-plate hypothesis implies that oceanic lithosphere does not contract horizontally as it cools (hereinafter "rigid plate"). An alternative hypothesis, that vertically averaged tensional thermal stress in the competent lithosphere is fully relieved by horizontal thermal contraction (hereinafter "shrinking plate"), predicts subtly different azimuths for transform faults. The size of the predicted difference is as large as 2.44° with a mean and median of 0.46° and 0.31°, respectively, and changes sign between right-lateral (RL)-slipping and left-lateral (LL)-slipping faults. For the MORVEL transform-fault data set, all six plate pairs with both RL- and LL-slipping faults differ in the predicted sense, with the observed difference averaging 1.4° ± 0.9° (95% confidence limits), which is consistent with the predicted difference of 0.9°. The sum-squared normalized misfit, r, to global transform-fault azimuths is minimized for γ = 0.8 ± 0.4 (95% confidence limits), where γ is the fractional multiple of the predicted difference in azimuth between the shrinking-plate (γ = 1) and rigid-plate (γ = 0) hypotheses. Thus, observed transform azimuths differ significantly between RL-slipping and LL-slipping faults, which is inconsistent with the rigid-plate hypothesis but consistent with the shrinking-plate hypothesis, which indicates horizontal shrinking rates of 2% Ma⁻¹ for newly created lithosphere, 1% Ma⁻¹ for 0.1 Ma old lithosphere, 0.2% Ma⁻¹ for 1 Ma old lithosphere, and 0.02% Ma⁻¹ for 10 Ma old lithosphere, which are orders of magnitude higher than the mean intraplate seismic strain rate of 10⁻⁶ Ma⁻¹ (5 × 10⁻¹⁹ s⁻¹).
Multiprocessor computer overset grid method and apparatus
Barnette, Daniel W.; Ober, Curtis C.
2003-01-01
A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.
Stevensson, Baltzar; Edén, Mattias
2011-03-28
We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation operates by minimizing the time-consuming calculation stages, sampling over a small number of Gaussian spherical quadrature (GSQ) orientations that are exploited to determine the spectral frequencies and amplitudes from a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to using ASG alone (besides greatly extending its scope of application), and by 1-2 orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled ¹³C systems.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
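A minimal illustration of cubic B-spline sub-voxel evaluation, using SciPy's prefiltered `map_coordinates` as a stand-in (the paper's optimized recursive filter and Gaussian weighting are not reproduced; the test field is synthetic):

```python
import numpy as np
from scipy import ndimage

# sample a smooth 3-D field on a coarse voxel grid
z, y, x = np.mgrid[0:8, 0:8, 0:8].astype(float)
vol = np.sin(0.5 * x) * np.cos(0.4 * y) + 0.1 * z

# cubic B-spline interpolation at a sub-voxel position:
# the prefilter computes B-spline coefficients (a recursive filter
# step), then map_coordinates evaluates the spline
coords = np.array([[3.25], [2.5], [4.75]])   # (z, y, x) query
val = ndimage.map_coordinates(vol, coords, order=3, mode='nearest')[0]
```

For smooth data the spline value should track the analytic field closely at interior points, which is the behavior digital volume correlation relies on for sub-voxel matching.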
Power transformations improve interpolation of grids for molecular mechanics interaction energies.
Minh, David D L
2018-02-18
A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retaining the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
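The smoothing strategy can be sketched in a few lines: interpolate a power transform of the grid energies trilinearly and raise the result back. The grid, query point, and exponent below are illustrative; for a steep repulsive wall the transformed interpolation lands closer to the true value than interpolating raw energies:

```python
import numpy as np

def trilinear(grid, p):
    """Trilinear interpolation of a 3-D grid at fractional point p."""
    i0 = np.floor(p).astype(int)
    d = p - i0
    out = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((d[0] if dz else 1 - d[0])
                     * (d[1] if dy else 1 - d[1])
                     * (d[2] if dx else 1 - d[2]))
                out += w * grid[i0[0] + dz, i0[1] + dy, i0[2] + dx]
    return out

# steep repulsive wall ~ 1/r^12 sampled on a grid: interpolate either
# the raw energies or their 4th root, then undo the transform
r = np.linspace(0.8, 2.0, 7)
R = np.sqrt(r[:, None, None]**2 + r[None, :, None]**2 + r[None, None, :]**2)
E = R**-12

p = np.array([1.5, 1.5, 1.5])            # midpoint between grid nodes
raw = trilinear(E, p)
smoothed = trilinear(E**0.25, p) ** 4    # power transform + inverse
```

The transform flattens the rapidly varying repulsive energies (E^(1/4) of a 1/r^12 wall varies like 1/r^3), so linear interpolation between grid points introduces less bias.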
Steinwand, Daniel R.; Hutchinson, John A.; Snyder, J.P.
1995-01-01
In global change studies the effects of map projection properties on data quality are apparent, and the choice of projection is significant. To aid compilers of global and continental data sets, six equal-area projections were chosen: the interrupted Goode Homolosine, the interrupted Mollweide, the Wagner IV, and the Wagner VII for global maps; the Lambert Azimuthal Equal-Area for hemisphere maps; and the Oblated Equal-Area and the Lambert Azimuthal Equal-Area for continental maps. Distortions in small-scale maps caused by reprojection, and the additional distortions incurred when reprojecting raster images, were quantified and graphically depicted. For raster images, the usual resampling methods (pixel brightness-level interpolation) were responsible for much of the additional error where the local resolution and scale change were greatest.
Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.
Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming
2018-01-01
Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 with a mean of 18.10 mg kg⁻¹, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu data and the associated residuals, and also produced more robust variograms. OK_PD generated better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in reducing the smoothing effect of the interpolation. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
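A minimal normal score transform, one of the four transformations compared, can be sketched as a rank-to-quantile mapping (synthetic skewed data; the kriging step is not shown):

```python
import numpy as np
from scipy import stats

def normal_score_transform(x):
    """Rank-based normal score: map empirical quantiles of x onto
    standard-normal quantiles (ties ignored for simplicity)."""
    n = len(x)
    ranks = stats.rankdata(x)            # ranks 1..n
    quantiles = (ranks - 0.5) / n        # avoid the 0 and 1 endpoints
    return stats.norm.ppf(quantiles)

# heavily right-skewed copper-like data
rng = np.random.default_rng(1)
a_cu = rng.lognormal(mean=2.0, sigma=1.0, size=300)
scores = normal_score_transform(a_cu)
```

The transform is monotone, so the spatial ordering of values is preserved while the marginal distribution becomes standard normal, which is what makes the subsequent variogram estimation more robust.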
The analysis of decimation and interpolation in the linear canonical transform domain.
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study form the basis for generalizing multirate signal processing to the LCT domain, and can advance filter bank theory in that domain.
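For reference, the ordinary Fourier-domain special case of these two building blocks can be sketched as follows. The paper's contribution is the analogous equivalent-filter and polyphase structures in the LCT domain, which this toy example does not implement:

```python
import numpy as np

def decimate(x, M):
    """Classical decimator: ideal low-pass in the DFT domain, then keep
    every M-th sample (crude anti-alias filter, illustration only)."""
    X = np.fft.fft(x)
    N = len(x)
    keep = N // (2 * M)
    X[keep:N - keep] = 0.0                   # zero the high-frequency band
    return np.fft.ifft(X).real[::M]

def interpolate(x, L):
    """Classical interpolator: insert L-1 zeros between samples, then
    low-pass to remove the spectral images, scaling by L."""
    N = len(x)
    up = np.zeros(N * L)
    up[::L] = x
    X = np.fft.fft(up)
    keep = N // 2
    X[keep:N * L - keep] = 0.0               # remove spectral images
    return L * np.fft.ifft(X).real

t = np.arange(64)
x = np.cos(2 * np.pi * 3 * t / 64)           # low-frequency tone
y = interpolate(decimate(x, 2), 2)           # down then up by the same factor
```

For a signal band-limited below the decimated Nyquist rate, the down/up cascade is lossless, which is the identity the polyphase structures exploit.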
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while preserving image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT: truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the interpolator size above which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator sizes the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted.
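The two matrix-regularization options compared above can be illustrated on a generic ill-conditioned least-squares system. This is a stand-in for the LS_NUFFT interpolator design problem, not the paper's actual setup:

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: invert only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return (Vt.T * s_inv) @ U.T

def tikhonov_pinv(A, lam):
    """Tikhonov-regularized pseudoinverse: (A^T A + lam*I)^{-1} A^T."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

# ill-conditioned toy system standing in for a large interpolator design
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(20)   # nearly dependent column
b = rng.standard_normal(20)

x_tsvd = tsvd_pinv(A, k=9) @ b          # drop the tiny singular value
x_tik = tikhonov_pinv(A, lam=1e-6) @ b  # damp it instead
```

Both regularizers suppress the explosive component along the near-null direction that an unregularized pseudoinverse would amplify.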
The Interpolation Theory of Radial Basis Functions
NASA Astrophysics Data System (ADS)
Baxter, Brad
2010-06-01
In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some R^d for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
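A minimal dense-solver sketch of multiquadric radial basis function interpolation, the setting studied in the dissertation. The preconditioned conjugate-gradient machinery that is the dissertation's main contribution is not reproduced here:

```python
import numpy as np

def rbf_interpolate(centers, values, query, c=1.0):
    """Multiquadric RBF interpolation: solve A w = f with
    A_ij = sqrt(||x_i - x_j||^2 + c^2), then evaluate at query points.
    A dense direct solve is used here; for large n one would use the
    preconditioned iterative methods the dissertation develops."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.sqrt(d2 + c * c)              # nonsingular for distinct points
    w = np.linalg.solve(A, values)
    dq = ((query[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.sqrt(dq + c * c) @ w

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(40, 2))
f = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
recon = rbf_interpolate(pts, f, pts)     # interpolant reproduces node data
```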
Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure
NASA Astrophysics Data System (ADS)
Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. P.
2017-11-01
The paper describes the procedure for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, using the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the regime of landing and caused by the action of the vertical velocity deceleration nozzle jets are negligibly small in comparison with the engine thrust.
Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.
Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing
2016-10-01
The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
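The quantity that the NUFFT accelerates is the plain nonuniform discrete Fourier sum; a direct O(NM) evaluation, useful as a correctness reference, can be sketched as:

```python
import numpy as np

def nudft(x, freqs):
    """Direct nonuniform DFT: evaluate the spectrum of x at arbitrary
    normalized frequencies in [0, 1).  This O(N*M) reference sum is what
    an NUFFT approximates in roughly O(N log N) time."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n)) @ x

x = np.random.default_rng(3).standard_normal(32)
k = np.arange(32) / 32.0   # at equispaced frequencies this is the ordinary DFT
```

At the uneven frequencies produced by rotating the angular spectrum, this sum can be evaluated at the shifted sample positions directly, which is exactly where the NUFFT replaces the error-prone spectrum interpolation.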
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
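The integer part of the kernel shift described above is a Fourier-domain phase ramp (convolution with a shifted unit impulse). Below is a sketch that applies an entire shift this way; note that the paper instead handles the remaining fractional part with a piecewise third-order Lagrange filter, which this toy example omits:

```python
import numpy as np

def fourier_shift(x, delta):
    """Circularly shift a sequence by `delta` samples with a Fourier-domain
    phase ramp (equivalent to convolution with a shifted unit impulse).
    Integer shifts are exact; fractional shifts give the band-limited
    interpolant, whereas the paper uses a third-order Lagrange filter."""
    k = np.fft.fftfreq(len(x))
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * k * delta)).real

x = np.random.default_rng(4).standard_normal(64)
y = fourier_shift(x, 3.0)    # matches np.roll(x, 3) for an integer shift
```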
Chang, Nai-Fu; Chiang, Cheng-Yi; Chen, Tung-Chien; Chen, Liang-Gee
2011-01-01
On-chip implementation of the Hilbert-Huang transform (HHT) has great potential for analyzing non-linear and non-stationary biomedical signals on wearable or implantable sensors in real-time applications. Cubic spline interpolation (CSI) consumes most of the computation in HHT and is the key component of an HHT processor. Traditionally, CSI in HHT is performed only after a large window of signals has been collected, and the long latency violates the real-time requirement of these applications. In this work, we propose to process the incoming signals on-line with small, overlapped data windows without sacrificing interpolation accuracy. 58% of the multiplications and 73% of the divisions in CSI are saved by reusing data between windows.
de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry
2016-01-01
In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342
The Azimuth Structure of Nuclear Collisions — I
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.; Kettler, David T.
We describe azimuth structure commonly associated with elliptic and directed flow in the context of 2D angular autocorrelations for the purpose of precise separation of so-called nonflow (mainly minijets) from flow. We extend the Fourier-transform description of azimuth structure to include power spectra and autocorrelations related by the Wiener-Khintchine theorem. We analyze several examples of conventional flow analysis in that context and question the relevance of reaction plane estimation to flow analysis. We introduce the 2D angular autocorrelation with examples from data analysis and describe a simulation exercise which demonstrates precise separation of flow and nonflow using the 2D autocorrelation method. We show that an alternative correlation measure based on Pearson's normalized covariance provides a more intuitive measure of azimuth structure.
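The Wiener-Khintchine relation invoked above, connecting the power spectrum to the autocorrelation, can be checked numerically in one dimension (the paper works with 2D angular autocorrelations; this sketch keeps only the essential identity):

```python
import numpy as np

# Wiener-Khintchine check: the inverse FFT of the power spectrum equals
# the circular autocorrelation (1D sketch; real-valued input assumed).
rng = np.random.default_rng(5)
x = rng.standard_normal(256)

power = np.abs(np.fft.fft(x)) ** 2
acf_wk = np.fft.ifft(power).real                          # via the theorem
acf_direct = np.array([np.dot(x, np.roll(x, -m)) for m in range(x.size)])
```

The identity is what lets an analysis move freely between Fourier (flow) coefficients and autocorrelation structure when separating flow from nonflow.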
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-Hermitian and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained through minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches.
Interpolating seismic data via the POCS method based on shearlet transform
NASA Astrophysics Data System (ADS)
Jicheng, Liu; Yongxin, Chou; Jianjiang, Zhu
2018-06-01
A method based on shearlet transform and the projection onto convex sets with L0-norm constraint is proposed to interpolate irregularly sampled 2D and 3D seismic data. The 2D directional filter of shearlet transform is constructed by modulating a low-pass diamond filter pair to minimize the effect of additional edges introduced by the missing traces. In order to abate the spatial aliasing and control the maximal gap between missing traces for a 3D data cube, a 2D separable jittered sampling strategy is discussed. Finally, numerical experiments on 2D and 3D synthetic and real data with different under-sampling rates prove the validity of the proposed method.
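A schematic of the POCS iteration with a decreasing hard threshold, using a plain 2D FFT as the sparsifying transform in place of the paper's shearlet transform with L0-norm constraint. The linear threshold schedule and the whole-trace sampling pattern are illustrative assumptions:

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50):
    """Projection-onto-convex-sets sketch: alternate between hard
    thresholding in a sparsifying transform (plain 2D FFT here; the paper
    uses shearlets with an L0-norm constraint) and re-inserting the
    observed traces.  The threshold decreases linearly to zero."""
    x = data * mask
    tmax = np.abs(np.fft.fft2(x)).max()
    for i in range(n_iter):
        X = np.fft.fft2(x)
        t = tmax * (1.0 - (i + 1) / n_iter)     # linearly decreasing threshold
        X[np.abs(X) < t] = 0.0                  # keep only strong coefficients
        x = np.fft.ifft2(X).real
        x = mask * data + (1.0 - mask) * x      # honor the recorded samples
    return x

# toy "seismic" section: one plane-wave event, ~40% of traces zeroed out
i, j = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
data = np.cos(2 * np.pi * (3 * i + 5 * j) / 32)
rng = np.random.default_rng(7)
mask = (rng.random(32) < 0.6).astype(float)[None, :] * np.ones((32, 1))
recon = pocs_interpolate(data, mask)
```

Because the event is sparse in the transform domain, the strong coefficients survive each thresholding pass and the missing traces are filled in progressively.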
Walimbe, Vivek; Shekhar, Raj
2006-12-01
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
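The quaternion interpolation of the 3D rotational pose can be sketched with standard spherical linear interpolation (slerp). The paper does not spell out this exact formula, so treat it as an illustrative stand-in:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions, the standard
    way to blend 3D rotational poses smoothly (a sketch of the rotation part
    of the subvolume-transformation interpolation described above)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    d = np.dot(q0, q1)
    if d < 0.0:                 # take the shorter arc on the 4D sphere
        q1, d = -q1, -d
    if d > 0.9995:              # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# quaternions for 0 and 90 degree rotations about z: halfway is 45 degrees
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_mid = slerp(q_id, q_z90, 0.5)
```

Slerp keeps the interpolated pose on the unit sphere at constant angular rate, which is why it is preferred over componentwise interpolation of rotation parameters.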
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang
2016-01-01
Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application. PMID:27472341
Interpolation algorithm for asynchronous ADC-data
NASA Astrophysics Data System (ADS)
Bramburger, Stefan; Zinke, Benny; Killat, Dirk
2017-09-01
This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. Additional preprocessing of data segments with constant and linear sections, together with a weighted overlap of the signals transformed step by step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data are fed into synchronous digital signal processing.
Spectral interpolation - Zero fill or convolution. [image processing]
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
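Zero fill itself is easy to state concretely: pad the spectrum with zeros and inverse transform to obtain denser samples of the band-limited interpolant. A sketch for even-length real sequences (the Nyquist-bin split is one common convention):

```python
import numpy as np

def zero_fill_interpolate(x, factor):
    """Spectral interpolation by zero fill: pad the FFT of an even-length
    real sequence with zeros in the high-frequency middle, then inverse
    transform to get `factor`-times denser samples of the band-limited
    interpolant.  Sketch only; the Nyquist bin is split symmetrically."""
    N = len(x)
    X = np.fft.fft(x)
    M = N * factor
    Y = np.zeros(M, dtype=complex)
    Y[:N // 2] = X[:N // 2]              # positive frequencies
    Y[-(N // 2):] = X[-(N // 2):]        # negative frequencies
    Y[N // 2] = X[N // 2] / 2            # split the Nyquist bin so the
    Y[M - N // 2] = X[N // 2] / 2        # interpolant stays real
    return factor * np.fft.ifft(Y).real

n = np.arange(16)
x = np.cos(2 * np.pi * 3 * n / 16)
y = zero_fill_interpolate(x, 4)          # 64 dense samples of the same tone
```

The original samples reappear at every `factor`-th output position, with trigonometric interpolation in between.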
NASA Astrophysics Data System (ADS)
Li, Kesai; Gao, Jie; Ju, Xiaodong; Zhu, Jun; Xiong, Yanchun; Liu, Shuai
2018-05-01
This paper proposes a new tool design of ultra-deep azimuthal electromagnetic (EM) resistivity logging while drilling (LWD) for deeper geosteering and formation evaluation, which can benefit hydrocarbon exploration and development. First, a forward numerical simulation of azimuthal EM resistivity LWD is created based on the fast Hankel transform (FHT) method, and its accuracy is confirmed under classic formation conditions. Then, a reasonable range of tool parameters is designed by analyzing the logging response. However, modern technological limitations pose challenges to selecting appropriate tool parameters for ultra-deep azimuthal detection under detectable signal conditions. Therefore, this paper uses grey relational analysis (GRA) to quantify the influence of tool parameters on voltage and azimuthal investigation depth. After analyzing thousands of simulation data under different environmental conditions, the random forest is used to fit data and identify an optimal combination of tool parameters due to its high efficiency and accuracy. Finally, the structure of the ultra-deep azimuthal EM resistivity LWD tool is designed with a theoretical azimuthal investigation depth of 27.42-29.89 m in classic different isotropic and anisotropic formations. This design serves as a reliable theoretical foundation for efficient geosteering and formation evaluation in high-angle and horizontal (HA/HZ) wells in the future.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation for NEQR, including scaling up and scaling down, are given by using the multiply Control-Not operation, a special add-one operation, the reverse parallel adder, parallel subtractor, multiplier and division operations. Finally, a complexity analysis of the quantum network circuit in terms of basic quantum gates is given. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than with nearest-neighbor interpolation.
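The classical bilinear interpolation whose quantum circuits the paper constructs can be stated compactly in NumPy. The edge handling by clamping is an illustrative choice, not taken from the paper:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Classical bilinear interpolation for image scaling: each output pixel
    is a weighted average of its four nearest input neighbours."""
    in_h, in_w = img.shape
    r = np.linspace(0, in_h - 1, out_h)        # fractional source rows
    c = np.linspace(0, in_w - 1, out_w)        # fractional source columns
    r0 = np.floor(r).astype(int); r1 = np.minimum(r0 + 1, in_h - 1)
    c0 = np.floor(c).astype(int); c1 = np.minimum(c0 + 1, in_w - 1)
    fr = (r - r0)[:, None]; fc = (c - c0)[None, :]
    top = (1 - fc) * img[np.ix_(r0, c0)] + fc * img[np.ix_(r0, c1)]
    bot = (1 - fc) * img[np.ix_(r1, c0)] + fc * img[np.ix_(r1, c1)]
    return (1 - fr) * top + fr * bot

# a linear ramp is reproduced exactly by bilinear interpolation
img = np.add.outer(np.arange(4.0), np.arange(4.0))
big = bilinear_resize(img, 7, 7)
```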
Minimized-Laplacian residual interpolation for color image demosaicking
NASA Astrophysics Data System (ADS)
Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi
2014-03-01
A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms for the 30 images of the IMAX and the Kodak datasets.
A fully 3D approach for metal artifact reduction in computed tomography.
Kratz, Barbel; Weyers, Imke; Buzug, Thorsten M
2012-11-01
In computed tomography imaging metal objects in the region of interest introduce inconsistencies during data acquisition. Reconstructing these data leads to an image in spatial domain including star-shaped or stripe-like artifacts. In order to enhance the quality of the resulting image the influence of the metal objects can be reduced. Here, a metal artifact reduction (MAR) approach is proposed that is based on a recomputation of the inconsistent projection data using a fully three-dimensional Fourier-based interpolation. The success of the projection space restoration depends sensitively on a sensible continuation of neighboring structures into the recomputed area. Fortunately, structural information of the entire data is inherently included in the Fourier space of the data. This can be used for a reasonable recomputation of the inconsistent projection data. The key step of the proposed MAR strategy is the recomputation of the inconsistent projection data based on an interpolation using nonequispaced fast Fourier transforms (NFFT). The NFFT interpolation can be applied in arbitrary dimension. The approach overcomes the problem of adequate neighborhood definitions on irregular grids, since this is inherently given through the usage of higher dimensional Fourier transforms. Here, applications up to the third interpolation dimension are presented and validated. Furthermore, prior knowledge may be included by an appropriate damping of the transform during the interpolation step. This MAR method is applicable on each angular view of a detector row, on two-dimensional projection data as well as on three-dimensional projection data, e.g., a set of sequential acquisitions at different spatial positions, projection data of a spiral acquisition, or cone-beam projection data. Results of the novel MAR scheme based on one-, two-, and three-dimensional NFFT interpolations are presented. 
All results are compared in projection data space and spatial domain with the well-known one-dimensional linear interpolation strategy. In conclusion, it is recommended to include as much spatial information into the recomputation step as possible. This is realized by increasing the dimension of the NFFT. The resulting image quality can be enhanced considerably.
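The one-dimensional linear interpolation strategy used as the comparison baseline can be sketched per detector row. The toy sinogram below is illustrative, not the paper's data:

```python
import numpy as np

def linear_inpaint_rows(proj, bad):
    """The classical 1D MAR baseline: in each detector row, replace the
    metal-corrupted bins by linear interpolation from the nearest valid
    neighbours on either side."""
    out = proj.copy().astype(float)
    cols = np.arange(proj.shape[1])
    for row in out:                       # rows are views; edited in place
        row[bad] = np.interp(cols[bad], cols[~bad], row[~bad])
    return out

proj = np.tile(np.linspace(0.0, 1.0, 11), (3, 1))   # smooth toy rows
bad = np.zeros(11, dtype=bool)
bad[4:7] = True                                      # metal shadow bins
fixed = linear_inpaint_rows(proj, bad)
```

Because this baseline looks only along one detector row, it ignores the cross-row and cross-projection structure that the higher-dimensional NFFT interpolation exploits.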
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
Juang, K W; Lee, D Y; Ellsworth, T R
2001-01-01
The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.
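The standardized-rank transform and its back-transform through the sample EDF can be sketched as follows. Linear interpolation between order statistics stands in here for the paper's middle-point model, and kriging in the rank space is omitted:

```python
import numpy as np

def to_standardized_rank(x):
    """Map each sample to its standardized rank in (0, 1); kriging is then
    carried out in this rank space rather than the skewed data space."""
    return (np.argsort(np.argsort(x)) + 1) / (x.size + 1)

def from_standardized_rank(r, sample):
    """Back-transform estimated standardized ranks to concentrations through
    the sample EDF.  Linear interpolation between order statistics is a
    simple stand-in for the paper's middle-point model."""
    xs = np.sort(sample)
    grid = np.arange(1, xs.size + 1) / (xs.size + 1)
    return np.interp(r, grid, xs)

rng = np.random.default_rng(6)
cd = rng.lognormal(mean=0.0, sigma=1.5, size=50)   # skewed toy "Cd" sample
r = to_standardized_rank(cd)
back = from_standardized_rank(r, cd)               # round trip recovers data
```

Because ranks are bounded and uniform-like, the variogram computed on them is insensitive to the extreme values that distort variograms of the raw concentrations.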
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
1974-09-07
ellipticity filter. The source waveforms are recreated by an inverse transform of those complex amplitudes associated with the same azimuth...terms of the three complex data points and the ellipticity. Having solved the equations for all frequency bins, the inverse transform of...transform of those complex amplitudes associated with Source 1, yielding the signal a (t). Similarly, take the inverse transform of all
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pimentel, David A.; Sheppard, Daniel G.
It was recently demonstrated that EOSPAC 6 continued to incorrectly create and interpolate pre-inverted SESAME data tables after the release of version 6.3.2beta.2. Significant interpolation pathologies were discovered to occur when EOSPAC 6's host software enabled pre-inversion with the EOS_INVERT_AT_SETUP option. This document describes a solution that uses data transformations found in EOSPAC 5 and its predecessors. The numerical results and performance characteristics of both the default and pre-inverted interpolation modes in both EOSPAC 6.3.2beta.2 and the fixed logic of EOSPAC 6.4.0beta.1 are presented herein, and the latter software release is shown to produce significantly improved numerical results for the pre-inverted interpolation mode.
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design, because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against the state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
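As a minimal illustration of the n-simplex interpolation that underlies this design problem (not the paper's joint node/output optimization), the following sketch interpolates a value inside one simplex via barycentric weights; the function name and test data are ours:

```python
import numpy as np

def simplex_interpolate(vertices, values, x):
    """Interpolate at point x inside an n-simplex given its n+1 vertices.

    vertices: (n+1, n) array of node locations; values: (n+1,) node outputs.
    Solves for barycentric weights w with sum(w) = 1, then returns w @ values.
    """
    vertices = np.asarray(vertices, float)
    # Affine system: [V^T; 1 ... 1] w = [x; 1]
    A = np.vstack([vertices.T, np.ones(len(vertices))])
    b = np.append(np.asarray(x, float), 1.0)
    w = np.linalg.solve(A, b)                 # barycentric weights
    return w @ np.asarray(values, float)

# Unit triangle in 2-D (a 2-simplex): a linear function is reproduced exactly
verts = [(0, 0), (1, 0), (0, 1)]
vals = [0.0, 2.0, 3.0]                        # f(x, y) = 2x + 3y at the vertices
print(simplex_interpolate(verts, vals, (0.25, 0.5)))  # 2*0.25 + 3*0.5 = 2.0
```

Because the weights sum to one and reproduce affine functions, interpolation inside a simplex is exact for linear color transforms, which is part of the appeal over lattice interpolation.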
Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Lu, Wenkai
2017-12-01
Seismic data irregularity caused by economic limitations, acquisition environmental constraints or bad trace elimination can decrease the performance of subsequent multi-channel algorithms, such as surface-related multiple elimination (SRME), even though some of these algorithms can partially overcome the irregularity. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy but are no more than half the size, which provides a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when CT is performed on the PFK domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
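The POCS iteration at the heart of such methods can be sketched in a simplified form. The sketch below substitutes a plain 2D FFT for the curvelet transform and works in the TX domain, so it illustrates only the generic thresholding/data-consistency loop, not the PFK-domain algorithm itself; the linear threshold schedule and test signal are assumptions:

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=200):
    """Projection onto convex sets: alternate hard thresholding in the
    transform domain with reinsertion of the observed samples."""
    x = data.copy()
    tmax = np.abs(np.fft.fft2(data)).max()
    for k in range(n_iter):
        spec = np.fft.fft2(x)
        thresh = tmax * (1.0 - (k + 1) / n_iter)   # linearly decaying threshold
        spec[np.abs(spec) < thresh] = 0.0          # promote transform sparsity
        x = np.real(np.fft.ifft2(spec))
        x = mask * data + (1.0 - mask) * x         # data-consistency projection
    return x

# Synthetic sparse-spectrum section with ~30% of traces (columns) removed
rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
img = np.cos(2 * np.pi * 5 * t / n)[:, None] * np.cos(2 * np.pi * 3 * t / n)[None, :]
mask = np.tile((rng.random(n) > 0.3).astype(float), (n, 1))
rec = pocs_interpolate(img * mask, mask)
print(np.linalg.norm(img * mask - img), np.linalg.norm(rec - img))
```

The data-consistency step guarantees the observed traces are returned unchanged, while the shrinking threshold progressively admits smaller transform coefficients into the reconstruction.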
Servomechanism for Doppler shift compensation in optical correlator for synthetic aperture radar
NASA Technical Reports Server (NTRS)
Constaninides, N. J.; Bicknell, T. J. (Inventor)
1980-01-01
A method and apparatus for correcting Doppler shifts in synthetic aperture radar data is described. An optical correlator for synthetic aperture radar data has a means for directing a laser beam at a signal film having radar return pulse intensity information recorded on it. A resultant laser beam passes through a range telescope, an azimuth telescope, and a Fourier transform filter located between the range and azimuth telescopes, and forms an image for recording on an image film. A compensation means for Doppler shift in the radar return pulse intensity information includes a beam splitter for reflecting the modulated laser beam, after having passed through the Fourier transform filter, to a detection screen having two photodiodes mounted on it.
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
NASA Astrophysics Data System (ADS)
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
Pérez, Alejandro; von Lilienfeld, O Anatole
2011-08-09
Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
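A minimal classical analogue of thermodynamic integration over a linear alchemical path (not path-integral molecular dynamics) can be sketched for an exactly solvable model; all parameter values are illustrative:

```python
import numpy as np

# Toy "alchemical" change of a harmonic spring constant k1 -> k2 at kT = 1.
# U(x; lam) = 0.5*((1-lam)*k1 + lam*k2)*x**2, so dU/dlam = 0.5*(k2 - k1)*x**2,
# and dF = integral over lam of <dU/dlam> sampled at each lam.
rng = np.random.default_rng(1)
k1, k2, kT = 1.0, 4.0, 1.0
lams = np.linspace(0.0, 1.0, 21)
means = []
for lam in lams:
    k = (1 - lam) * k1 + lam * k2
    x = rng.normal(0.0, np.sqrt(kT / k), size=100_000)   # exact Boltzmann draws
    means.append(0.5 * (k2 - k1) * np.mean(x ** 2))      # <dU/dlam> at this lam
means = np.array(means)
dF = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams)))  # trapezoid
exact = 0.5 * kT * np.log(k2 / k1)     # analytic free energy gap for this model
print(dF, exact)
```

For the harmonic model the integral can be done analytically, which makes it a convenient check that the sampled estimator converges to the exact free energy difference.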
NASA Astrophysics Data System (ADS)
Zhang, Tuo; Gordon, Richard G.; Mishra, Jay K.; Wang, Chengzu
2017-08-01
Using global multiresolution topography, we estimate new transform-fault azimuths along the Cocos-Nazca plate boundary and show that the direction of relative plate motion is 3.3° ± 1.8° (95% confidence limits) clockwise of prior estimates. The new direction of Cocos-Nazca plate motion is, moreover, 4.9° ± 2.7° (95% confidence limits) clockwise of the azimuth of the Panama transform fault. We infer that the plate east of the Panama transform fault is not the Nazca plate but instead is a microplate that we term the Malpelo plate. With the improved transform-fault data, the nonclosure of the Nazca-Cocos-Pacific plate motion circuit is reduced from 15.0 mm a-1 ± 3.8 mm a-1 to 11.6 mm a-1 ± 3.8 mm a-1 (95% confidence limits). The nonclosure seems too large to be due entirely to horizontal thermal contraction of oceanic lithosphere and suggests that one or more additional plate boundaries remain to be discovered.
An Immersed Boundary method with divergence-free velocity interpolation and force spreading
NASA Astrophysics Data System (ADS)
Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.
2017-10-01
The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by the local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible using this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
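A rough sketch of the log-log-then-monotone-spline idea, with SciPy's PCHIP standing in for the Steffen spline (SciPy does not ship Steffen's variant; both are monotonicity-preserving cubics that forbid overshoot between data points):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def loglog_monotone_interp(x, y):
    """Fit positive, non-uniformly sampled data on log-log axes with a
    monotonicity-preserving cubic, then map back to linear axes."""
    spline = PchipInterpolator(np.log(x), np.log(y))
    return lambda xq: np.exp(spline(np.log(xq)))

# Power-law samples with strongly non-uniform spacing: log-log scaling makes
# the data exactly linear, so the interpolant reproduces them without wiggles.
x = np.array([0.1, 0.3, 1.0, 10.0, 300.0])
y = x ** -1.5
f = loglog_monotone_interp(x, y)
print(float(f(3.0)), 3.0 ** -1.5)
```

The log-log transform compresses the wide dynamic range typical of optical spectra, so the spline sees nearly uniform data, which is the effect the abstract attributes to its preliminary scaling step.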
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
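The kernel-transform metric referred to above can be computed directly. The sketch below evaluates the Fourier transform of the linear-interpolation (triangle) kernel numerically and checks it against the known sinc-squared result; the quadrature grid is an arbitrary choice:

```python
import numpy as np

def kernel_spectrum(kernel, freqs, half_width, n=8001):
    """Numerically evaluate H(f) = integral of h(t)*exp(-2j*pi*f*t) dt
    for a kernel supported on [-half_width, half_width]."""
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    h = kernel(t)
    return np.array([np.sum(h * np.exp(-2j * np.pi * f * t)) * dt for f in freqs])

linear_kernel = lambda t: np.maximum(0.0, 1.0 - np.abs(t))   # triangle kernel
f = np.array([0.0, 0.25, 0.75])                              # cycles per sample
H = np.abs(kernel_spectrum(linear_kernel, f, half_width=1.0))
print(H, np.sinc(f) ** 2)   # linear interpolation's kernel transform is sinc^2
```

The slow sinc-squared sidelobe decay is exactly why linear interpolation needs heavy over-sampling; higher-order B-spline kernels transform to higher powers of sinc and roll off much faster.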
Quantum realization of the nearest neighbor value interpolation method for INEQR
NASA Astrophysics Data System (ADS)
Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping
2018-07-01
This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for the INEQR quantum image is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm in the form of circuits of the NNV interpolation for INEQR is constructed for the first time. The merit of the proposed INEQR circuit lies in its low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation-based experimental results involving different classical (i.e., conventional, non-quantum) images and scaling ratios are obtained with MATLAB 2014b on a classical computer, demonstrating that the proposed interpolation method achieves higher performance in terms of resolution than nearest neighbor and bilinear interpolation.
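For reference, the classical (non-quantum) nearest-neighbor scaling that such circuits emulate can be sketched as follows; the function name and test image are ours:

```python
import numpy as np

def nearest_neighbor_scale(img, new_h, new_w):
    """Classical nearest-neighbor scaling baseline: each output pixel copies
    the source pixel whose index maps closest to it."""
    h, w = img.shape
    rows = np.minimum(np.arange(new_h) * h // new_h, h - 1)
    cols = np.minimum(np.arange(new_w) * w // new_w, w - 1)
    return img[rows[:, None], cols[None, :]]

img = np.arange(4).reshape(2, 2)            # [[0, 1], [2, 3]]
print(nearest_neighbor_scale(img, 4, 4))    # each pixel becomes a 2x2 block
```

The NNV scheme differs in that the estimate for a missing pixel is guided by the nearest pixel value rather than purely by index distance, but this baseline shows the scaling mechanics the quantum circuit must reproduce.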
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse resolution image via the back-projection integral. Then, the sub-aperture images are straightforwardly fused together in the azimuth wavenumber domain to obtain a full resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
NASA Astrophysics Data System (ADS)
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation based on 3D facial data is well supported by research on laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expression driven by emotional state. Facial skin colour, which is closely related to human emotion, offers a way to improve facial expression. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are almost identical to genuine human expressions and also enhance the facial expression of the virtual human.
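The bilinear blend underlying such skin-colour interpolation can be sketched as follows; the corner naming and sample values are ours, and in practice the blend would be applied per colour channel:

```python
def bilinear(c00, c10, c01, c11, u, v):
    """Bilinear blend of four corner values at fractional position (u, v)
    in the unit square."""
    top = c00 * (1 - u) + c10 * u          # interpolate along x at v = 0
    bottom = c01 * (1 - u) + c11 * u       # interpolate along x at v = 1
    return top * (1 - v) + bottom * v      # then interpolate along y

print(bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # 1.5, the mean of the corners
```

Linear interpolation is the one-dimensional special case (v fixed), which is why the two techniques are naturally presented together.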
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of various paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary in general results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations such as the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
Post-earthquake relaxation using a spectral element method: 2.5-D case
Pollitz, Fred
2014-01-01
The computation of quasi-static deformation for axisymmetric viscoelastic structures on a gravitating spherical earth is addressed using the spectral element method (SEM). A 2-D spectral element domain is defined with respect to spherical coordinates of radius and angular distance from a pole of symmetry, and 3-D viscoelastic structure is assumed to be azimuthally symmetric with respect to this pole. A point dislocation source that is periodic in azimuth is implemented with a truncated sequence of azimuthal order numbers. Viscoelasticity is limited to linear rheologies and is implemented with the correspondence principle in the Laplace transform domain. This leads to a series of decoupled 2-D problems which are solved with the SEM. Inverse Laplace transform of the independent 2-D solutions leads to the time-domain solution of the 3-D equations of quasi-static equilibrium imposed on a 2-D structure. The numerical procedure is verified through comparison with analytic solutions for finite faults embedded in a laterally homogeneous viscoelastic structure. This methodology is applicable to situations where the predominant structure varies in one horizontal direction, such as a structural contrast across (or parallel to) a long strike-slip fault.
Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2017-01-01
Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. Compared with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction using Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before imaging process based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments of corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487
Minami, K; Kawata, S; Minami, S
1992-10-10
The real-zero interpolation method is applied to a Fourier-transform infrared (FT-IR) interferogram. With this method an interferogram is reconstructed from its zero-crossing information only, without the use of a long-word analog-to-digital converter. We installed a phase-locked loop circuit into an FT-IR spectrometer for oversampling the interferogram. Infrared absorption spectra of polystyrene and Mylar films were measured as binary interferograms by the FT-IR spectrometer, which was equipped with the developed circuits, and their Fourier spectra were successfully reconstructed. The relationship of the oversampling ratio to the dynamic range of the reconstructed interferogram was evaluated through computer simulations. We also discuss the problems of this method for practical applications.
Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure
1982-11-01
systematic channel code. 1. Take the inverse transform of the received sequence. 2. Isolate the error syndrome from the inverse transform and use... inverse transform is identical with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) code, we let the set...in accordance with the transform of equation (4). If we were to apply the inverse transform of equation (6) to the coefficient sequence of A(z), we
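The evaluation/interpolation duality described in the fragment can be illustrated over a small prime field. The sketch below uses GF(7) rather than the GF(2^m) fields of practical Reed-Solomon codecs, purely to keep the arithmetic transparent; the forward transform evaluates the code polynomial at the powers of a primitive element, and the inverse transform interpolates it back:

```python
p, alpha, n = 7, 3, 6     # GF(7); 3 is a primitive element; length n = p - 1

def gf_dft(a):
    """Finite-field transform: evaluate the polynomial a(z) at the n powers
    of alpha. Encoding is evaluation; decoding uses the inverse."""
    return [sum(a[i] * pow(alpha, i * j, p) for i in range(n)) % p
            for j in range(n)]

def gf_idft(A):
    """Inverse transform, i.e. interpolation of a(z) from its n values."""
    inv_n = pow(n, p - 2, p)              # n^(-1) by Fermat's little theorem
    inv_alpha = pow(alpha, p - 2, p)
    return [inv_n * sum(A[j] * pow(inv_alpha, i * j, p) for j in range(n)) % p
            for i in range(n)]

a = [1, 2, 0, 3, 0, 0]                    # coefficients of a(z) over GF(7)
print(gf_idft(gf_dft(a)) == a)            # the round trip recovers a(z)
```

This makes the fragment's statement concrete: taking the inverse transform of a received sequence is the same operation as interpolating the polynomial a(z) from its n values.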
Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Darren
2004-07-01
MatSeis's infrasound analysis tool, Infra Tool, uses frequency slowness processing to deconstruct the array data into three outputs per processing step: correlation, azimuth and slowness. Until now, an experienced analyst trained to recognize a pattern observed in outputs from signal processing manually accomplished infrasound signal detection. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above background value. These two statements describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file will list the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg) and trace velocity (km/s). As an example, a ground truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array location, the array-to-event distance was computed to be approximately 890 km.
This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and the detection window closely correlated to the theoretical stratospheric arrival time. Further testing will be required for tuning of detection threshold parameters for different types of infrasound events.
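The detection criterion (flat azimuth plus elevated correlation over consecutive processing steps) can be sketched with a simple least-squares slope in place of the full Hough Transform and Inverse Slope machinery; window length, thresholds and test data are illustrative:

```python
import numpy as np

def detect_flat_azimuth(azimuth, correlation, win=5,
                        slope_tol=1.0, corr_min=0.5):
    """Flag window starts where azimuth is ~constant and correlation is high.

    For each window, fit azimuth vs. step index by least squares; a detection
    requires |slope| <= slope_tol deg/step and correlation above corr_min
    throughout the window.
    """
    hits = []
    t = np.arange(win)
    for k in range(len(azimuth) - win + 1):
        slope = np.polyfit(t, azimuth[k:k + win], 1)[0]
        if abs(slope) <= slope_tol and correlation[k:k + win].min() >= corr_min:
            hits.append(k)
    return hits

az = np.array([10, 80, 150, 282, 282, 281, 282, 282, 30, 120], float)
corr = np.array([0.1, 0.2, 0.1, 0.9, 0.95, 0.9, 0.92, 0.9, 0.2, 0.1])
print(detect_flat_azimuth(az, corr))  # only the window starting at step 3 fits
```

A production detector would add the azimuth-variance check described above and report the fitted azimuth (here about 282 degrees, matching the ground-truth example) for each detection.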
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
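A minimal sketch of spectral interpolation for refining a peak position: the paper pads the windowed spatial-domain data before the inverse FFT, while the equivalent forward-direction operation of padding the one-sided spectrum is shown here; the test signal is ours:

```python
import numpy as np

def upsample_by_zero_padding(x, factor):
    """Band-limited interpolation: append zeros to the one-sided FFT spectrum
    and inverse-transform onto a grid `factor` times finer."""
    n = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(n * factor // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n * factor) * factor   # rescale for the new length

n, factor = 64, 8
t = np.arange(n)
x = np.cos(2 * np.pi * (t - 20.43) / n)   # true peak at 20.43, between samples
coarse_peak = int(np.argmax(x))           # limited to the original grid: 20
fine_peak = int(np.argmax(upsample_by_zero_padding(x, factor))) / factor
print(coarse_peak, fine_peak)             # refined peak lands on 20.375
```

The refined grid locates the peak to within half a fine sample, which is the mechanism by which the cross-correlation peak, and hence the strain estimate, gains accuracy without any loss of spatial resolution.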
Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi
2010-09-01
The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
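The zero-filling chain listed above can be sketched per A-line on the CPU with NumPy. The lateral Hilbert transform acts across A-lines, so this single-line sketch omits it, and the wavelength axis and 50 µm mirror depth are assumed test values, not the paper's:

```python
import numpy as np

def fdoct_line(spectrum, wavelengths, pad=8192):
    """CPU sketch of the zero-filling chain for one A-line."""
    z = np.fft.fft(spectrum)                       # 1) forward FFT
    half = len(z) // 2
    zp = np.zeros(pad, dtype=complex)              # 2) zero padding to 8192
    zp[:half], zp[-half:] = z[:half], z[-half:]
    dense = np.fft.ifft(zp).real * (pad / len(spectrum))  # 3) back to spectrum
    lam_dense = np.linspace(wavelengths[0], wavelengths[-1], pad)
    k_uniform = np.linspace(2 * np.pi / wavelengths[-1],
                            2 * np.pi / wavelengths[0], pad)
    resampled = np.interp(2 * np.pi / k_uniform, lam_dense, dense)  # 4) lam -> k
    return 20 * np.log10(np.abs(np.fft.fft(resampled)) + 1e-12)     # 5) FFT, log

lam = np.linspace(800e-9, 880e-9, 2048)            # spectrometer wavelength axis
spec = 1 + np.cos(2 * np.pi / lam * 1e-4)          # fringe from a 50 um mirror
profile = fdoct_line(spec, lam)
print(profile.shape, int(np.argmax(profile[5:4096])) + 5)
```

The point of the intermediate zero padding is that the subsequent linear wavelength-to-wavenumber interpolation operates on a much denser spectrum, so the interpolation error that would otherwise blur deep reflectors is strongly reduced.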
Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion
NASA Astrophysics Data System (ADS)
Tiezhao, B.; Ning, J.; Jianwei, M.
2017-12-01
Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of most concern. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion struggles when there are large time differences between adjacent sites, which is precisely the case of most concern; we use a shift method to improve it. The procedure of the interpolation is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of those VSOs match the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed for advanced seismic techniques such as RTM to illuminate shallow structures.
2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis
NASA Astrophysics Data System (ADS)
Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek
2016-04-01
Receiver function (RF) locally provides the signature of sharp seismic discontinuities and information about the shear wave (S-wave) velocity distribution beneath the seismic station. The data recorded by "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. Compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). On the other hand, receiver function methods give an opportunity to create an S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of the structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal sections of RF are compared with the observed back-azimuthal sections of RF for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. The National Science Centre Poland provided financial support for this work through NCN grant DEC-2011/02/A/ST10/00284.
Changes in measured vector magnetic fields when transformed into heliographic coordinates
NASA Technical Reports Server (NTRS)
Hagyard, M. J.
1987-01-01
The changes that occur in measured magnetic fields when they are transformed into a heliographic coordinate system are investigated. To carry out this investigation, measurements of the vector magnetic field of an active region that was observed at 1/3 the solar radius from disk center are taken, and the observed field is transformed into heliographic coordinates. Differences in the calculated potential field that occur when the heliographic normal component of the field is used as the boundary condition rather than the observed line-of-sight component are also examined. The results of this analysis show: (1) that the observed fields of sunspots more closely resemble the generally accepted picture of the distribution of umbral fields if they are displayed in heliographic coordinates; (2) that the differences in the potential calculations are less than 200 G in field strength and 20 deg in field azimuth outside sunspots; and (3) that differences in the two potential calculations in the sunspot areas are no more than 400 G in field strength but range from 60 to 80 deg in field azimuth in localized umbral areas.
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
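The two-FFT-plus-one-interpolation structure of the rotational transformation can be sketched in a 1-D analogue (a minimal sketch, not Matsushima's full 2-D formulation, which also carries a Jacobian weighting):

```python
import numpy as np

def rotate_field_1d(u, dx, wavelength, theta):
    """Resample the angular spectrum of a 1-D wavefield slice at the
    frequencies seen in a coordinate frame rotated by theta (radians)."""
    n = len(u)
    U = np.fft.fft(u)                      # first FFT: angular spectrum
    fx = np.fft.fftfreq(n, d=dx)
    k = 1.0 / wavelength
    # Longitudinal frequency on the Ewald circle; evanescent parts clipped to 0
    fz = np.sqrt(np.maximum(k ** 2 - fx ** 2, 0.0))
    # Frequency coordinate in the rotated frame
    fx_rot = fx * np.cos(theta) - fz * np.sin(theta)
    # The single interpolation step: sample the spectrum at the rotated
    # frequencies (real and imaginary parts separately)
    order = np.argsort(fx)
    U_rot = (np.interp(fx_rot, fx[order], U.real[order])
             + 1j * np.interp(fx_rot, fx[order], U.imag[order]))
    return np.fft.ifft(U_rot)              # second FFT: back to space domain
```

A zero-degree rotation reproduces the input field, which is a convenient sanity check on the pipeline.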
Transform Decoding of Reed-Solomon Codes. Volume II. Logical Design and Implementation.
1982-11-01
A_j = Σ_{i=0}^{n-1} a_i b^{ij} = a(b^j); j = 0, 1, ..., n-1 (2-8). Similarly, the inverse transform is obtained by interpolation of the polynomial a(z) from its n ... with the transform so that either a forward or an inverse transform may be used to encode. The only requirement is that the reverse of the encoding ... inverse transform of the received sequence is the polynomial sum r(z) = e(z) + a(z), where e(z) is the inverse transform of the error polynomial E(z), and a
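Equation (2-8) and its interpolation-based inverse can be sketched over a small prime field; GF(7) with primitive element b = 3 is an illustrative choice, not a parameter from the report:

```python
def ff_transform(a, b, p):
    """Finite-field transform of (2-8): A_j = a(b^j) mod p."""
    n = len(a)
    return [sum(a[i] * pow(b, i * j, p) for i in range(n)) % p
            for j in range(n)]

def ff_inverse(A, b, p):
    """Inverse transform: recovers the polynomial a(z) from its values
    at the points b^j, i.e. interpolation, via a_i = n^{-1} sum_j A_j b^{-ij}."""
    n = len(A)
    n_inv = pow(n % p, p - 2, p)   # n^{-1} mod p (p prime, Fermat)
    b_inv = pow(b, p - 2, p)       # b^{-1} mod p
    return [n_inv * sum(A[j] * pow(b_inv, i * j, p) for j in range(n)) % p
            for i in range(n)]
```

Because b has multiplicative order n, forward and inverse transforms compose to the identity, which is the symmetry the report exploits to encode with either direction.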
NASA Astrophysics Data System (ADS)
Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.
2007-09-01
When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
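The three-point quadratic estimator discussed above can be sketched as follows; the bias the paper analyzes arises because real correlation peaks are not exactly parabolic, so the fitted vertex is pulled toward the nearest pixel center:

```python
def quadratic_subpixel_peak(c, i):
    """Refine the integer peak index i of correlation array c to sub-pixel
    accuracy by fitting a parabola through the peak and its two neighbours."""
    y0, y1, y2 = c[i - 1], c[i], c[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:          # flat triple: no curvature to exploit
        return float(i)
    return i + 0.5 * (y0 - y2) / denom
```

On an exactly parabolic peak the estimator is unbiased and recovers the true sub-pixel location.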
Adaptive Multilinear Tensor Product Wavelets
Weiss, Kenneth; Lindstrom, Peter
2015-08-12
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. Finally, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
Scene segmentation of natural images using texture measures and back-propagation
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Phatak, Anil; Chatterji, Gano
1993-01-01
Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features which result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feed-forward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation when compared with a single scalar feature. It is also shown that scalar features which are not useful individually result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology
2014-04-28
Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited state dynamics with multiple electronic states is targeted. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surroundings. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with an intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives on each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. With these, we conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable for studying nonadiabatic behaviors of sizeable molecules.
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
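The target quantities and the log-linear baseline can be sketched directly; function names are illustrative, and the windowing below assumes a complete daily series:

```python
import math

def max_rolling_average(daily, m):
    """Maximum m-day rolling average of a daily concentration series."""
    if len(daily) < m:
        raise ValueError("series shorter than window")
    window = sum(daily[:m])
    best = window
    for k in range(m, len(daily)):
        window += daily[k] - daily[k - m]   # slide the window by one day
        best = max(best, window)
    return best / m

def log_linear_fill(t0, c0, t1, c1, t):
    """Log-linear interpolation (linear on the log scale) between two
    sampled concentrations, used to estimate nonsampled days."""
    w = (t - t0) / (t1 - t0)
    return math.exp((1 - w) * math.log(c0) + w * math.log(c1))
```

Filling the nonsampled days first and then taking the rolling-average maximum is the baseline against which the universal kriging predictions are compared.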
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motions, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary location, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local search in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using the thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
Tyrk, Mateusz A; Zolotovskaya, Svetlana A; Gillespie, W Allan; Abdolvand, Amin
2015-09-07
Radially and azimuthally polarized picosecond (~10 ps) pulsed laser irradiation at 532 nm wavelength led to the permanent reshaping of spherical silver nanoparticles (~30-40 nm in diameter) embedded in a thin layer of soda-lime glass. The observed peculiar shape modifications consist of a number of different orientations of nano-ellipsoids in the cross-section of each laser-written line. A Second Harmonic Generation cross-sectional scan method from silver nanoparticles in transmission geometry was adopted for characterization of the samples after laser modification. The presented approach may lead to sophisticated marking of information in metal-glass nanocomposites.
Topological transformation of fractional optical vortex beams using computer generated holograms
NASA Astrophysics Data System (ADS)
Maji, Satyajit; Brundavanam, Maruthi M.
2018-04-01
Optical vortex beams with fractional topological charges (TCs) are generated by the diffraction of a Gaussian beam using computer generated holograms embedded with mixed screw-edge dislocations. When the input Gaussian beam has a finite wave-front curvature, the generated fractional vortex beams show distinct topological transformations in comparison to the integer charge optical vortices. The topological transformations at different fractional TCs are investigated through the birth and evolution of the points of phase singularity, the azimuthal momentum transformation, occurrence of critical points in the transverse momentum and the vorticity around the singular points. This study is helpful to achieve better control in optical micro-manipulation applications.
Maury, Augusto; Revilla, Reynier I
2015-08-01
Cosmic rays (CRs) occasionally affect charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth into the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose here an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CR features as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify CR spikes. Identified data points are then replaced by interpolated values using the weighted-average interpolation technique. This approach only modifies the data in a close vicinity of the CRs. Additionally, robust wavelet transform parameters are proposed (a desirable property for automation) after optimizing them through application of the method to a great number of spectra. However, this algorithm, as well as all single-spectrum analysis procedures, is limited to cases in which the CRs have much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.
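A minimal sketch of the detect-and-replace step, with a robust difference threshold standing in for the paper's autocorrelation-plus-wavelet detector (all names and the default threshold are illustrative):

```python
def remove_spikes(spectrum, z_thresh=5.0):
    """Replace narrow single-point outliers by the average of their
    neighbours (weighted-average interpolation in its simplest form)."""
    n = len(spectrum)
    # Robust scale of point-to-point variation: median absolute difference
    diffs = sorted(abs(spectrum[i + 1] - spectrum[i]) for i in range(n - 1))
    mad = diffs[len(diffs) // 2] or 1e-12   # guard against an all-flat baseline
    out = list(spectrum)
    for i in range(1, n - 1):
        # A CR spike rises AND falls sharply relative to typical variation;
        # a broad Raman band changes gradually and is left untouched.
        if (spectrum[i] - spectrum[i - 1] > z_thresh * mad and
                spectrum[i] - spectrum[i + 1] > z_thresh * mad):
            out[i] = 0.5 * (spectrum[i - 1] + spectrum[i + 1])
    return out
```

Only the immediate vicinity of a detected spike is modified, mirroring the locality property emphasized in the abstract.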
Attenuation of multiples in image space
NASA Astrophysics Data System (ADS)
Alvarez, Gabriel F.
In complex subsurface areas, attenuation of 3D specular and diffracted multiples in data space is difficult and inaccurate. In those areas, image space is an attractive alternative. There are several reasons: (1) migration increases the signal-to-noise ratio of the data; (2) primaries are mapped to coherent events in Subsurface Offset Domain Common Image Gathers (SODCIGs) or Angle Domain Common Image Gathers (ADCIGs); (3) image space is regular and smaller; (4) attenuating the multiples in data space leaves holes in the frequency-wavenumber space that generate artifacts after migration. I develop a new equation for the residual moveout of specular multiples in ADCIGs and use it as the kernel of an apex-shifted Radon transform to focus and separate the primaries from specular and diffracted multiples. Because of small amplitude, phase and kinematic errors in the multiple estimate, we need adaptive matching and subtraction to estimate the primaries. I pose this problem as an iterative least-squares inversion that simultaneously matches the estimates of primaries and multiples to the data. Standard methods match only the estimate of the multiples. I demonstrate with real and synthetic data that the method produces primaries and multiples with little cross-talk. In 3D, the multiples exhibit residual moveout in SODCIGs in in-line and cross-line offsets. They map away from zero subsurface offsets when migrated with the faster velocity of the primaries. In ADCIGs the residual moveout of the primaries as a function of the aperture angle, for a given azimuth, is flat for those angles that illuminate the reflector. The multiples have residual moveout towards increasing depth for increasing aperture angles at all azimuths. As a function of azimuth, the primaries have better azimuth resolution than the multiples at larger aperture angles.
I show, with a real 3D dataset, that even below salt, where illumination is poor, the multiples are well attenuated in ADCIGs with the new Radon transform in planes of azimuth-stacked ADCIGs. The angle stacks of the estimated primaries show little residual multiple energy.
Acousto-optic time- and space-integrating spotlight-mode SAR processor
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.
1993-09-01
The technical approach and recent experimental results for the acousto-optic time- and space- integrating real-time SAR image formation processor program are reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results include a demonstration of the processor's ability to perform high-resolution spotlight-mode SAR imaging by simultaneously compensating for range migration and range/azimuth coupling in the analog optical domain, thereby avoiding a highly power-consuming digital interpolation or reformatting operation usually required in all-electronic approaches.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.
Single image super resolution algorithm based on edge interpolation in NSCT domain
NASA Astrophysics Data System (ADS)
Zhang, Mengqun; Zhang, Wei; He, Xinyu
2017-11-01
In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients with noise and weak targets, Bayesian shrinkage is used to calculate the threshold value. Coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands of the same scale, and de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in regions of low contrast between target and background. Finally, the high-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement; the inverse NSCT is then used to obtain the image at the desired resolution. In order to verify the effectiveness of the proposed algorithm, it is compared with several common image reconstruction methods on synthetic, motion-blurred and hyperspectral test images. The experimental results show that, compared with traditional single-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction, as well as the ability to calculate the spatial distribution of uncertainty, allowing the errors in an interpolation to be estimated for any particular point. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor.
Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
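The index-arithmetic neighborhood search that a regular grid enables can be sketched as follows; inverse-distance weights stand in for the kriging weights (which would come from a fitted variogram model), and all names are illustrative:

```python
def grid_neighborhood(grid, x0, y0, dx, dy, x, y, radius):
    """Collect (distance, value) pairs within `radius` cells of point (x, y).
    On a regular grid the neighbourhood is found by pure index arithmetic;
    no spatial search structure (k-d tree, etc.) is needed."""
    rows, cols = len(grid), len(grid[0])
    j0 = int((x - x0) / dx)            # column of the cell containing x
    i0 = int((y - y0) / dy)            # row of the cell containing y
    pts = []
    for i in range(max(0, i0 - radius), min(rows, i0 + radius + 1)):
        for j in range(max(0, j0 - radius), min(cols, j0 + radius + 1)):
            gx, gy = x0 + j * dx, y0 + i * dy
            d = ((gx - x) ** 2 + (gy - y) ** 2) ** 0.5
            pts.append((d, grid[i][j]))
    return pts

def idw(pts, power=2.0, eps=1e-12):
    """Inverse-distance weighting over a neighbourhood: a simple stand-in
    for kriging weights, sharing the same weighted-average structure."""
    num = sum(v / (d ** power + eps) for d, v in pts)
    den = sum(1.0 / (d ** power + eps) for d, v in pts)
    return num / den
```

Because pairwise distances on a regular grid repeat, a real implementation could also precompute them once per offset pattern, which is part of the speed-up the paper describes.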
Student Misconceptions in Introductory Biology.
ERIC Educational Resources Information Center
Fisher, Kathleen M.; Lipson, Joseph I.
Defining a "misconception" as an error of translation (transformation, correspondence, interpolation, interpretation) between two different kinds of information which causes students to have incorrect expectations, a Taxonomy of Errors has been developed to examine student misconceptions in an introductory biology course for science…
When Interpolation-Induced Reflection Artifact Meets Time-Frequency Analysis.
Lin, Yu-Ting; Flandrin, Patrick; Wu, Hau-Tieng
2016-10-01
While extracting temporal dynamical features based on time-frequency analyses, like the reassignment and synchrosqueezing transforms, attracts more and more interest in biomedical data analysis, we should be careful about artifacts generated by interpolation schemes, in particular when the sampling rate is not significantly higher than the frequency of the oscillatory component we are interested in. We formulate the problem, called the reflection effect, and provide a theoretical justification of the statement. We also show examples in anesthetic depth analysis with clear but undesirable artifacts. The artifact associated with the reflection effect exists not only theoretically but practically as well. Its influence is pronounced when we apply time-frequency analyses to extract the time-varying dynamics hidden inside the signal. We have to carefully deal with the artifact associated with the reflection effect by choosing a proper interpolation scheme.
NASA Astrophysics Data System (ADS)
Farag, Karam S. I.; Abd El-Aal, Mohamed H.; Garamoon, Hassan K. F.
2018-07-01
A joint azimuthal very low frequency-electromagnetic (VLF-EM) and DC-resistivity sounding survey was conducted at the new Ain Shams university campus in Al-Obour city, northwest of Cairo, Egypt. The main objective of the survey was to highlight the applicability and reliability of such non-invasive surface techniques in mapping and monitoring both the vertical and lateral electrical conductivity structures of waterlogged areas, by subterraneous water accumulations, at the campus site. Consequently, a total of 743 azimuthal VLF-EM and 4 DC-resistivity soundings were carried out in June, 2011, 2012 and 2013. The data were interpreted extensively and consistently in terms of two-dimensional (2D) transformed EM equivalent current-density and stitched inverted electrical resistivity models, without using any geological a-priori information. They could be used effectively to image the local anomalous lower electrical resistivity (higher EM equivalent current-density) structures and their near-surface spreading with time, due to the excessive accumulations of subterraneous water at the campus site. The study demonstrated that a regional azimuthal VLF-EM and DC-resistivity sounding survey could help design an optimal dewatering program for the whole city, at greatly reduced execution time.
The fractional Fourier transform and applications
NASA Technical Reports Server (NTRS)
Bailey, David H.; Swarztrauber, Paul N.
1991-01-01
This paper describes the 'fractional Fourier transform', which admits computation by an algorithm that has complexity proportional to the fast Fourier transform algorithm. Whereas the discrete Fourier transform (DFT) is based on integral roots of unity e^(-2πi/n), the fractional Fourier transform is based on fractional roots of unity e^(-2πiα), where α is arbitrary. The fractional Fourier transform and the corresponding fast algorithm are useful for such applications as computing DFTs of sequences with prime lengths, computing DFTs of sparse sequences, analyzing sequences with noninteger periodicities, performing high-resolution trigonometric interpolation, detecting lines in noisy images, and detecting signals with linearly drifting frequencies. In many cases, the resulting algorithms are faster by arbitrarily large factors than conventional techniques.
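A direct O(n²) evaluation of the transform is easy to sketch (the paper's contribution is the O(n log n) algorithm, obtainable via a Bluestein-style chirp factorization, which is omitted here):

```python
import cmath

def fractional_dft(x, alpha):
    """Direct evaluation of the fractional Fourier transform
    G_j = sum_k x_k * exp(-2*pi*i*j*k*alpha).
    With alpha = 1/n this reduces to the ordinary DFT."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k * alpha)
                for k in range(n))
            for j in range(n)]
```

Choosing a noninteger-period α is what enables applications like high-resolution trigonometric interpolation: the output samples the spectrum on a grid finer (or offset) relative to the standard DFT bins.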
Graphics and Flow Visualization of Computer Generated Flow Fields
NASA Technical Reports Server (NTRS)
Kathong, M.; Tiwari, S. N.
1987-01-01
Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two and three dimensional flow field solutions are addressed. The transformations and the use of an interactive graphics program for CFD flow field solutions, called PLOT3D, which runs on the color graphics IRIS workstation are described. An overview of the IRIS workstation is also described.
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Elliptic surface grid generation on minimal and parametrized surfaces
NASA Technical Reports Server (NTRS)
Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.
1995-01-01
An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface which is passing through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
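The cubic Hermite basis underlying bicubic Hermite interpolation can be sketched in one parameter direction; the bicubic surface form is the tensor product of this basis in two directions, with the tangent and twist vectors supplying the extra modeling freedom mentioned above:

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation on [0, 1]: endpoint values p0, p1 and
    endpoint tangents m0, m1 determine the unique cubic matching all four."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 +   # h00: value at left end
            (t3 - 2 * t2 + t) * m0 +       # h10: tangent at left end
            (-2 * t3 + 3 * t2) * p1 +      # h01: value at right end
            (t3 - t2) * m1)                # h11: tangent at right end
```

Unlike spline interpolation, the tangents here are free parameters; modeling them directly is what prevents the spurious oscillations noted in the abstract.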
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
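The gap between dead reckoning and a fitted-model predictor can be illustrated on a toy 1-D track. The decelerating-aircraft track, window length and polynomial degree below are invented for illustration and are not the paper's models:

```python
import numpy as np

def dead_reckon(t_hist, x_hist, t_pred):
    """Constant-velocity extrapolation from the last two position fixes."""
    v = (x_hist[-1] - x_hist[-2]) / (t_hist[-1] - t_hist[-2])
    return x_hist[-1] + v * (t_pred - t_hist[-1])

def poly_predict(t_hist, x_hist, t_pred, deg=2):
    """Least-squares polynomial fit over the trailing window, extrapolated."""
    return np.polyval(np.polyfit(t_hist, x_hist, deg), t_pred)

# Synthetic along-track position of a steadily decelerating arrival.
t = np.arange(0.0, 60.0, 5.0)          # past fixes (s)
x = 100.0*t - 0.4*t**2                 # position (m) under constant deceleration
t_future = 120.0                       # look-ahead time (s)
truth = 100.0*t_future - 0.4*t_future**2
dr = dead_reckon(t, x, t_future)
poly = poly_predict(t, x, t_future)
```

The quadratic fit absorbs the deceleration that dead reckoning ignores, which is the kind of effect the abstract quantifies at long look-ahead times.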
On the feasibility to integrate low-cost MEMS accelerometers and GNSS receivers
NASA Astrophysics Data System (ADS)
Benedetti, Elisa; Dermanis, Athanasios; Crespi, Mattia
2017-06-01
The aim of this research was to investigate the feasibility of merging the benefits offered by low-cost GNSS and MEMS accelerometer technology, in order to promote the diffusion of low-cost monitoring solutions. A merging approach was set up at the level of the combination of kinematic results (velocities and displacements) coming from the two kinds of sensors, whose observations were processed separately, following the so-called loose integration, which is simpler and more flexible when one of the combined sensors needs to be exchanged. First, the issues related to the differences in reference systems, time systems and measurement rates and epochs of the two sensors were addressed. An approach was designed and tested to transform the GPS and MEMS outcomes into common reference and time systems and to interpolate the usually (much) denser MEMS observations to the common (GPS) epochs. The proposed approach was limited to a time-independent (constant) orientation of the MEMS reference system with respect to the GPS one. Then, a data fusion approach based on the discrete Fourier transform and cubic spline interpolation was proposed for both velocities and displacements: the MEMS- and GPS-derived solutions are first separated by a rectangular filter in the spectral domain, then back-transformed and combined through cubic spline interpolation. Accuracies around 5 mm for slow and fast displacements and better than 2 mm/s for velocities were assessed. The obtained solution paves the way to a powerful and appealing use of low-cost single-frequency GNSS receivers and MEMS accelerometers for structural and ground monitoring applications. Some additional remarks and prospects for future investigations complete the paper.
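The described fusion step, a rectangular (brick-wall) filter in the spectral domain followed by recombination, can be sketched as below; the signals, cutoff frequency and error models are synthetic assumptions, and the cubic-spline resampling stage is omitted:

```python
import numpy as np

def brickwall_combine(x_gps, x_mems, dt, f_cut):
    """Rectangular spectral filter: keep GPS content below f_cut and
    MEMS content above it, then invert back to the time domain."""
    n = len(x_gps)
    f = np.fft.rfftfreq(n, dt)
    X = np.where(f <= f_cut, np.fft.rfft(x_gps), np.fft.rfft(x_mems))
    return np.fft.irfft(X, n)

# Synthetic displacement: slow drift (trusted in GPS) + fast vibration
# (trusted in MEMS); each sensor is corrupted where it is weakest.
dt, n = 0.01, 2048
t = np.arange(n) * dt
slow = np.sin(2*np.pi*0.2*t)
fast = 0.05*np.sin(2*np.pi*15.0*t)
truth = slow + fast
x_gps = slow + 0.05*np.sin(2*np.pi*18.0*t)    # high-frequency GPS noise
x_mems = fast + 0.3*np.sin(2*np.pi*0.07*t)    # low-frequency MEMS drift
fused = brickwall_combine(x_gps, x_mems, dt, f_cut=2.0)
```

The fused series recovers both the slow and the fast components better than either corrupted input alone.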
Radar imaging using electromagnetic wave carrying orbital angular momentum
NASA Astrophysics Data System (ADS)
Yuan, Tiezhu; Cheng, Yongqiang; Wang, Hongqiang; Qin, Yuliang; Fan, Bo
2017-03-01
The concept of radar imaging based on orbital angular momentum (OAM) modulation, which has the ability of azimuthal resolution without relative motion, has recently been proposed. We investigate this imaging technique further in greater detail. We first analyze the principle of the technique, accounting physically for its resolving ability. The phase and intensity distributions of the OAM-carrying fields produced by a phased uniform circular array antenna, which have significant effects on the imaging results, are investigated. The imaging model shows that the received signal has the form of an inverse discrete Fourier transform with the use of OAM and frequency diversities. The two-dimensional Fourier transform is employed to reconstruct the target images in the cases of large and small elevation angles. Due to the peculiar phase and intensity characteristics, small elevation angles are more suitable for practical application than large ones. The minimum elevation angle is then obtained given the array parameters. The imaging capability is analyzed by means of the point spread function. All results are verified through numerical simulations. The proposed staring imaging technique can achieve extremely high azimuthal resolution with the use of plentiful OAM modes.
[Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].
Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing
2003-12-01
Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, exact and robust registration method and have obtained satisfactory registration results.
NASA Astrophysics Data System (ADS)
Zhang, T.; Gordon, R. G.; Mishra, J. K.; Wang, C.
2017-12-01
The non-closure of the Cocos-Nazca-Pacific plate motion circuit by 15.0 ± 3.8 mm a-1 (95% confidence limits throughout this abstract) [DeMets et al. 2010] represents a daunting challenge to the central tenet of plate tectonics—that the plates are rigid. This misfit is difficult to explain from known processes of intraplate deformation, such as horizontal thermal contraction [Collette, 1974; Kumar and Gordon, 2009; Kreemer and Gordon, 2014; Mishra and Gordon, 2016] or movement of plates over a non-spherical Earth [McKenzie, 1972; Turcotte and Oxburgh, 1973]. Possibly there are one or more unrecognized plate boundaries in the circuit, but no such boundary has been found to date. To make progress on this problem, we present three new Cocos-Nazca transform fault azimuths from multibeam data now available through GeoMapApp's global multi-resolution topography [Ryan et al., 2009]. We determine a new Cocos-Nazca best-fitting angular velocity from the three new transform-fault azimuths combined with the spreading rates of DeMets et al. [2010]. The new direction of relative plate motion is 3.3° ± 1.8° clockwise of prior estimates and is 4.9° ± 2.7° clockwise of the azimuth of the Panama transform fault, demonstrating that the Panama transform fault does not parallel Nazca-Cocos plate motion. We infer that the plate east of the Panama transform fault is not the Nazca plate, but instead is a microplate that we term the Malpelo plate. We hypothesize that a diffuse plate boundary separates the Malpelo plate from the much larger Nazca plate. The Malpelo plate extends only as far north as ≈6°N where seismicity marks another boundary with a previously recognized microplate, the Coiba plate [Pennington, 1981; Adamek et al., 1988]. The Malpelo plate moves 5.9 mm a-1 relative to the Nazca plate along the Panama transform fault. When we sum the Cocos-Pacific and Pacific-Nazca best-fitting angular velocities of DeMets et al. 
[2010] with our new Nazca-Cocos best-fitting angular velocity, we find a new linear velocity of non-closure of 11.6 ± 3.8 mm a-1, i.e., the non-closure is reduced by 3.4 mm a-1. The non-closure still seems too large to be due entirely to intraplate deformation and suggests that one or more additional plate boundaries remain to be discovered.
NASA Astrophysics Data System (ADS)
Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David
2016-03-01
In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.
Combining the Hanning windowed interpolated FFT in both directions
NASA Astrophysics Data System (ADS)
Chen, Kui Fu; Li, Yan Feng
2008-06-01
The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the first and second highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on how far the sampling deviates from the coherent condition; at best, the variance is reduced by 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
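For context, the classical modulus-based two-line estimator that the proposed three-line method improves on can be sketched as follows; for a Hann window the fractional bin offset is delta = (2*y2 - y1)/(y1 + y2), where y1 and y2 are the two highest spectral lines (the paper's three-line complex-spectrum variant is not reproduced here):

```python
import numpy as np

def hann_two_line(signal, fs):
    """Modulus-based two-line interpolated FFT with a Hanning window."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1
    if spec[k + 1] >= spec[k - 1]:           # interpolate toward the larger line
        y1, y2, sgn = spec[k], spec[k + 1], 1.0
    else:
        y1, y2, sgn = spec[k], spec[k - 1], -1.0
    delta = sgn * (2.0*y2 - y1) / (y1 + y2)  # exact for an ideal Hann main lobe
    return (k + delta) * fs / n

fs, n, f0 = 1000.0, 4096, 123.37
t = np.arange(n) / fs
f_est = hann_two_line(np.sin(2*np.pi*f0*t), fs)
```

The estimate lands far inside one FFT bin (about 0.24 Hz here), which is the picket fence effect being corrected; weighting a third line, as the paper proposes, further reduces the variance under noise.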
Software for Photometric and Astrometric Reduction of Video Meteors
NASA Astrophysics Data System (ADS)
Atreya, Prakash; Christou, Apostolos
2007-12-01
SPARVM is a Software for Photometric and Astrometric Reduction of Video Meteors being developed at Armagh Observatory. It is written in Interactive Data Language (IDL) and is designed to run primarily under the Linux platform. For single-station data, the basic features of the software will be derivation of light curves and estimation of angular velocity and radiant position; for double-station data, calculation of 3D meteor coordinates, velocity and brightness, and estimation of the meteoroid's orbit including uncertainties. Currently, the software supports extraction of time and date from video frames, estimation of camera positions (azimuth, altitude), finding stellar sources in video frames, and transformation of coordinates from video frames to the horizontal coordinate system (azimuth, altitude) and the equatorial coordinate system (RA, Dec).
High resolution frequency analysis techniques with application to the redshift experiment
NASA Technical Reports Server (NTRS)
Decher, R.; Teuber, D.
1975-01-01
High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of 0.00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include the fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter techniques. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
Transformation-cost time-series method for analyzing irregularly sampled data
NASA Astrophysics Data System (ADS)
Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen
2015-06-01
Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
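A heavily simplified sketch of the TACTS idea, cutting an irregularly sampled series into windows and pricing the transformation of each segment into the next, is given below; the in-order point matching and the cost constants are illustrative stand-ins for the paper's operation set:

```python
import numpy as np

def segment_cost(t_a, x_a, t_b, x_b, c_time=1.0, c_amp=1.0, c_del=2.0):
    """Simplified transformation cost: in-order point matching pays for time
    shifts and amplitude changes; surplus points pay a deletion cost."""
    m = min(len(t_a), len(t_b))
    cost = (c_time * np.abs(np.asarray(t_b[:m]) - np.asarray(t_a[:m])).sum()
            + c_amp * np.abs(np.asarray(x_b[:m]) - np.asarray(x_a[:m])).sum())
    return cost + c_del * abs(len(t_a) - len(t_b))

# Cut an irregularly sampled series into 1-unit windows (times made relative
# to each window start), then price each window-to-window transformation.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))
x = np.sin(t)
segs = [(t[(t >= a) & (t < a + 1.0)] - a, x[(t >= a) & (t < a + 1.0)])
        for a in np.arange(0.0, 10.0, 1.0)]
tacts = [segment_cost(*segs[i], *segs[i + 1]) for i in range(len(segs) - 1)]
```

The resulting `tacts` sequence is regularly sampled (one value per window transition) even though the input times are not, which is what lets standard methods be applied afterwards.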
Programmable remapper for image processing
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)
1991-01-01
A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
Digital image transformation and rectification of spacecraft and radar images
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1985-01-01
The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.
Tie Points Extraction for SAR Images Based on Differential Constraints
NASA Astrophysics Data System (ADS)
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
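The NCC similarity measure used for matching, computed over a rectangular window, can be sketched as follows (the synthetic image, window size and exhaustive search are illustrative; the paper matches pyramid layers rather than scanning a full image):

```python
import numpy as np

def ncc(window, patch):
    """Normalized cross correlation of two equally sized patches."""
    a = window - window.mean()
    b = patch - patch.mean()
    return float((a*b).sum() / np.sqrt((a*a).sum() * (b*b).sum()))

# Synthetic image; the matching window is long in one axis, mimicking the
# azimuth-elongated rectangular window described above.
rng = np.random.default_rng(3)
img = rng.normal(size=(40, 60))
tmpl = img[10:18, 25:41]                     # 8 x 16 window
scores = np.array([[ncc(tmpl, img[r:r+8, c:c+16])
                    for c in range(60 - 16 + 1)] for r in range(40 - 8 + 1)])
best = np.unravel_index(int(np.argmax(scores)), scores.shape)
```

The score peaks (at 1.0) exactly where the window was cut from, and mean removal plus normalization make the measure insensitive to local brightness and contrast differences between the two SAR images.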
Discrete Fourier transforms of nonuniformly spaced data
NASA Technical Reports Server (NTRS)
Swan, P. R.
1982-01-01
Time series or spatial series of measurements taken with nonuniform spacings have failed to yield fully to analysis using the Discrete Fourier Transform (DFT). This is due to the fact that the formal DFT is the convolution of the transform of the signal with the transform of the nonuniform spacings. Two original methods are presented for deconvolving such transforms for signals containing significant noise. The first method solves a set of linear equations relating the observed data to values defined at uniform grid points, and then obtains the desired transform as the DFT of the uniform interpolates. The second method solves a set of linear equations relating the real and imaginary components of the formal DFT directly to those of the desired transform. The results of numerical experiments with noisy data are presented in order to demonstrate the capabilities and limitations of the methods.
A General Surface Representation Module Designed for Geodesy
1980-06-01
NASA Astrophysics Data System (ADS)
Cheng, Ju; Lu, Jian; Zhang, Hong-Chao; Lei, Feng; Sardar, Maryam; Bian, Xin-Tian; Zuo, Fen; Shen, Zhong-Hua; Ni, Xiao-Wu; Shi, Jin
2018-05-01
Not Available. Supported by the National Natural Science Foundation of China under Grant No 11604115, the Educational Commission of Jiangsu Province of China under Grant No 17KJA460004, and the Huaian Science and Technology Funds under Grant No HAC201701.
GEOMETRIC PROCESSING OF DIGITAL IMAGES OF THE PLANETS.
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformations of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases.
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
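The role of arbitrary frequency resolution can be illustrated with a direct evaluation of the finite Fourier transform on a fine frequency grid; this O(NM) loop is a transparent stand-in for the chirp z-transform, and the paper's cubic-interpolation accuracy refinement is omitted:

```python
import numpy as np

def finite_ft(x, dt, freqs):
    """Finite Fourier transform evaluated directly at arbitrary frequencies
    (O(N*M); a transparent stand-in for the chirp z-transform)."""
    n = np.arange(len(x))
    E = np.exp(-2j * np.pi * np.outer(freqs, n * dt))
    return dt * (E @ x)

dt, N = 0.01, 1000
t = np.arange(N) * dt
x = np.sin(2*np.pi*3.21*t)
freqs = np.arange(2.5, 4.0, 0.001)   # 1 mHz spacing vs. FFT bin width 1/(N*dt) = 0.1 Hz
X = finite_ft(x, dt, freqs)
f_peak = float(freqs[np.argmax(np.abs(X))])
```

On the coarse FFT grid the 3.21 Hz line would fall between the 3.2 and 3.3 Hz bins; the fine grid localizes it directly, which is the detail-capturing benefit the abstract describes.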
The Coordinate Transformation Method of High Resolution dem Data
NASA Astrophysics Data System (ADS)
Yan, Chaode; Guo, Wang; Li, Aimin
2018-04-01
Coordinate transformation methods for DEM data can be divided into two categories: one reconstructs the DEM from the original vector elevation data, and the other transforms DEM data blocks with transformation parameters. However, the former does not work in the absence of the original vector data, and the latter may cause errors at the joints between adjoining blocks of high resolution DEM data. In view of this problem, a method for high resolution DEM data coordinate transformation is proposed. The method converts the DEM data into discrete vector elevation points and adjusts the positions of the points by bilinear interpolation. Finally, a TIN is generated from the transformed points, and the new DEM data in the target coordinate system is reconstructed based on the TIN. An algorithm that finds and transforms blocks automatically is given in this paper. The method is tested in different terrains and proved to be feasible and valid.
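The bilinear position adjustment mentioned above can be sketched as a standard bilinear lookup on a regular grid (the tiny DEM is illustrative):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a regular grid at fractional indices (x, y)."""
    x0 = min(int(np.floor(x)), grid.shape[1] - 2)
    y0 = min(int(np.floor(y)), grid.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx)*(1 - dy)*grid[y0, x0] + dx*(1 - dy)*grid[y0, x0 + 1]
            + (1 - dx)*dy*grid[y0 + 1, x0] + dx*dy*grid[y0 + 1, x0 + 1])

dem = np.array([[10.0, 20.0],
                [30.0, 40.0]])
z = bilinear(dem, 0.5, 0.5)   # centre of the cell -> mean of the four corners
```

Each transformed point falls at a fractional position in the source grid, and this weighted average of the four surrounding elevations supplies its adjusted value before the TIN is rebuilt.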
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation invariance algorithm to compute interpolation functions is presented. With one fixed receptive field, this algorithm can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to the output units affected by the loss.
3-component time-dependent crustal deformation in Southern California from Sentinel-1 and GPS
NASA Astrophysics Data System (ADS)
Tymofyeyeva, E.; Fialko, Y. A.
2017-12-01
We combine data from the Sentinel-1 InSAR mission collected between 2014-2017 with continuous GPS measurements to calculate the three components of the interseismic surface velocity field in Southern California at the resolution of InSAR data (~100 m). We use overlapping InSAR tracks with two different look geometries (descending tracks 71, 173, and 144, and ascending tracks 64 and 166) to obtain the 3 orthogonal components of surface motion. Because of the under-determined nature of the problem, we use the local azimuth of the horizontal velocity vector as an additional constraint. The spatially variable azimuths of the horizontal velocity are obtained by interpolating data from the continuous GPS network. We estimate both secular velocities and displacement time series. The latter are obtained by combining InSAR time series from different lines of sight with time-dependent azimuths computed using continuous GPS time series at every InSAR epoch. We use the CANDIS method [Tymofyeyeva and Fialko, 2015], a technique based on iterative common point stacking, to correct the InSAR data for tropospheric and ionospheric artifacts when calculating secular velocities and time series, and to isolate low-amplitude deformation signals in our study region. The obtained horizontal (East and North) components of secular velocity exhibit long-wavelength patterns consistent with strain accumulation on major faults of the Pacific-North America plate boundary. The vertical component of velocity reveals a number of localized uplift and subsidence anomalies, most likely related to hydrologic effects and anthropogenic activity. In particular, in the Los Angeles basin we observe localized uplift of about 10-15 mm/yr near Anaheim, Long Beach, and Redondo Beach, as well as areas of rapid subsidence near Irvine and Santa Monica, which are likely caused by the injection of water in the oil fields, and the pumping and recharge cycles of the aquifers in the basin.
NASA Astrophysics Data System (ADS)
Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek
2015-04-01
Geological and seismic structure under the area of Poland is well studied by over one hundred thousand boreholes, over thirty deep seismic refraction and wide angle reflection profiles, and by vertical seismic profiling, magnetic, gravity, magnetotelluric and thermal methods. Compilation of these studies allowed the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Polkowski et al. 2014). The model also provides details about the geometry of the main layers of sediments (Tertiary and Quaternary, Cretaceous, Jurassic, Triassic, Permian, old Paleozoic), the consolidated/crystalline crust (upper, middle and lower) and the uppermost mantle. This model gives a unique opportunity for calculating synthetic receiver functions and comparing them with observed receiver functions calculated for permanent and temporary seismic stations. A modified ray-tracing method (Langston, 1977) can be used directly to calculate the response of a structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. Thus, the 3D P-wave velocity model has been interpolated to a 2.5D P-wave velocity model beneath each seismic station, and back-azimuthal sections of the components of the receiver function have been calculated. The Vp/Vs ratio is assumed to be 1.8, 1.67, 1.73, 1.77 and 1.8 in the sediments, the upper/middle/lower consolidated/crystalline crust and the uppermost mantle, respectively. Densities were calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Additionally, to test the visibility of lithosphere-asthenosphere boundary phases in the receiver function sections, the models have been extended to 250 km depth based on the P4 mantle model (Wilde-Piórko et al., 2010). National Science Centre Poland provided financial support for this work by NCN grant DEC-2011/02/A/ST10/00284 and by NCN grant UMO-2011/01/B/ST10/06653.
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, having a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data of two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. The nine statistical interpolation techniques selected are w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D fast Fourier transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), then the 6-hourly data is temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data. 
A penalty point system based on coefficient of variation root mean square error, normalized mean absolute error, and prediction skill is selected to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output to determine wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that overall piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
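Two of the nine techniques, w0 (left constant) and linear interpolation, can be compared on a synthetic diurnal wind series to show how the error metrics might separate them; the series and metric below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(240)                              # ten days, hourly
wind = 8 + 3*np.sin(2*np.pi*hours/24) + rng.normal(0, 0.5, hours.size)

coarse_h = hours[::6]                               # keep 0th, 6th, 12th, 18th hours
coarse_w = wind[::6]

left_const = np.repeat(coarse_w, 6)[:hours.size]    # w0 (left constant)
linear = np.interp(hours, coarse_h, coarse_w)       # linear interpolation

rmse = lambda a: float(np.sqrt(np.mean((a - wind)**2)))
rmse_w0, rmse_lin = rmse(left_const), rmse(linear)
```

On a smooth diurnal cycle the step-function reconstruction pays for the slope it ignores over each 6-hour interval, so linear interpolation scores a lower RMSE; a penalty system like the study's would rank the techniques from such per-metric comparisons.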
On the wall-normal velocity of the compressible boundary-layer equations
NASA Technical Reports Server (NTRS)
Pruett, C. David
1991-01-01
Numerical methods for the compressible boundary-layer equations are facilitated by transformation from the physical (x,y) plane to a computational (xi,eta) plane in which the evolution of the flow is 'slow' in the time-like xi direction. The commonly used Levy-Lees transformation results in a computationally well-behaved problem for a wide class of non-similar boundary-layer flows, but it complicates interpretation of the solution in physical space. Specifically, the transformation is inherently nonlinear, and the physical wall-normal velocity is transformed out of the problem and is not readily recovered. In light of recent research which shows mean-flow non-parallelism to significantly influence the stability of high-speed compressible flows, the contribution of the wall-normal velocity in the analysis of stability should not be routinely neglected. Conventional methods extract the wall-normal velocity in physical space from the continuity equation, using finite-difference techniques and interpolation procedures. The present spectrally-accurate method extracts the wall-normal velocity directly from the transformation itself, without interpolation, leaving the continuity equation free as a check on the quality of the solution. The present method for recovering wall-normal velocity, when used in conjunction with a highly-accurate spectral collocation method for solving the compressible boundary-layer equations, results in a discrete solution which is extraordinarily smooth and accurate, and which satisfies the continuity equation nearly to machine precision. These qualities make the method well suited to the computation of the non-parallel mean flows needed by spatial direct numerical simulations (DNS) and parabolized stability equation (PSE) approaches to the analysis of stability.
NASA Astrophysics Data System (ADS)
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
2015-04-01
For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
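The abstract does not give the remapping code; as a hedged illustration, the sub-pixel bilinear lookup that such a coordinate transformation typically relies on can be sketched as follows (the function name and 2x2 patch are hypothetical):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a grayscale image at fractional (sub-pixel) coordinates
    using bilinear interpolation. (x, y) are column/row positions."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

# Example: a 2x2 patch; the centre is the mean of the four corners.
patch = np.array([[0.0, 10.0],
                  [20.0, 30.0]])
print(bilinear_sample(patch, 0.5, 0.5))   # 15.0
```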
Spectral Topography Generation for Arbitrary Grids
NASA Astrophysics Data System (ADS)
Oh, T. J.
2015-12-01
A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filling by Jonathan de Ferranti) is used, and for the land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land/water mask dataset v5 is used. The original source data is coarsened to an intermediate global 2-minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation are performed for isotropic topography smoothness control. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2-minute lat-lon topography. Gibbs phenomenon in the water region can be removed by overwriting ocean-masked target coordinate grids with interpolated values from the intermediate 2-minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally-derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested on the KIAPS Integrated Model (KIM).
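A 1-D sketch of the wavenumber-truncation smoothing step described above; the grid size, signal, and cutoff are illustrative, not the tool's actual values:

```python
import numpy as np

# Transform to wave space, zero out wavenumbers above a cutoff,
# and transform back: isotropic smoothness control in 1-D.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
topo = np.cos(3 * x) + 0.3 * np.cos(60 * x)   # smooth ridge + rough detail

spec = np.fft.rfft(topo)
k = np.arange(spec.size)                      # wavenumber index
k_trunc = 20                                  # truncation wavenumber
spec[k > k_trunc] = 0.0                       # drop everything above cutoff
smooth = np.fft.irfft(spec, n)

# The k=60 roughness is removed; the k=3 ridge passes through unchanged.
print(np.max(np.abs(smooth - np.cos(3 * x))))
```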
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method, which introduces an auxiliary scalar field ψ, is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to the derivatives of ψ require neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, in smooth regions a fixed-stencil approximation without computing the nonlinear WENO weights is used, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solutions on the corresponding fine grid.
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
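A minimal sketch of one scheme class studied here: a semi-Lagrangian step with linear interpolation at the departure points, for constant-coefficient advection on a periodic grid. The resolution, CFL number, and step count are illustrative assumptions:

```python
import numpy as np

# Semi-Lagrangian solver for u_t + a u_x = 0 on a periodic grid,
# tracing characteristics back and interpolating linearly.
n, a = 256, 1.0
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.37 * dx                  # deliberately non-integer CFL number
steps = 200

u = np.sin(x)
for _ in range(steps):
    x_dep = (x - a * dt) % (2 * np.pi)           # departure points
    u = np.interp(x_dep, x, u, period=2 * np.pi)  # linear interpolation

exact = np.sin((x - a * dt * steps) % (2 * np.pi))
print(f"max error after {steps} steps: {np.max(np.abs(u - exact)):.2e}")
```

The error here is dominated by interpolation damping accumulated over the time steps, which is exactly the per-step growth behaviour the paper's worst-case estimates describe for interpolation-based semi-Lagrangian methods.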
Topographic relationships for design rainfalls over Australia
NASA Astrophysics Data System (ADS)
Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.
2016-02-01
Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. 
These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
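The study fits elevation-dependent trivariate smoothing splines across a continent with specialized software; as a hedged, much smaller sketch, a plain 2-D interpolating thin plate spline (no elevation covariate, no smoothing penalty, toy data) can be solved directly:

```python
import numpy as np

def _tps_kernel(d):
    # r^2 log r kernel, defined as 0 at r = 0.
    return d ** 2 * np.log(np.where(d > 0, d, 1.0))

def tps_fit(xy, z):
    """Fit an interpolating 2-D thin plate spline to scattered data."""
    n = xy.shape[0]
    K = _tps_kernel(np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), xy])             # affine trend part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef[:n], coef[n:]

def tps_eval(xy_new, xy, wgt, c):
    K = _tps_kernel(np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=-1))
    return K @ wgt + c[0] + xy_new @ c[1:]

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(30, 2))                # hypothetical gauges
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]             # hypothetical statistic
wgt, c = tps_fit(pts, vals)
print(np.max(np.abs(tps_eval(pts, pts, wgt, c) - vals)))  # ~0 at the data
```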
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.
User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge
Koltun, G.F.; Gray, John R.; McElhone, T.J.
1994-01-01
Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from 1. digitized suspended-sediment-concentration traces, 2. linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and 3. nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
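SEDCALC's second option, linear interpolation between log-transformed instantaneous concentrations stored at unequal time intervals, can be sketched as follows; the times and concentrations are made-up sample data, not USGS records:

```python
import numpy as np

# Times (hours) and suspended-sediment concentrations (mg/L), unequal spacing.
t_obs = np.array([0.0, 3.0, 10.0])
c_obs = np.array([10.0, 1000.0, 50.0])

def loglinear_interp(t, t_obs, c_obs):
    """Interpolate concentration linearly in log space, giving
    geometric (exponential) ramps between observations."""
    return 10 ** np.interp(t, t_obs, np.log10(c_obs))

# Halfway (in time) between 10 and 1000 mg/L gives their geometric mean.
print(loglinear_interp(1.5, t_obs, c_obs))   # 100.0
```

Interpolating in log space keeps the reconstructed concentrations positive and matches the roughly log-normal behaviour of sediment concentrations, which is why SEDCALC works with log-transformed data throughout.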
Mapping wildfire effects on Ca2+ and Mg2+ released from ash. A microplot analysis.
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Úbeda, Xavier; Martin, Deborah
2010-05-01
Wildland fires have important implications for ecosystem dynamics. Their effects depend on many biophysical components, mainly the burned species, the ecosystem affected, the amount and spatial distribution of the fuel, relative humidity, slope, aspect and residence time. These parameters are heterogeneous across the landscape, producing a complex mosaic of severities, and it is widely known that fire impacts can change rapidly even over short distances, producing high spatial variation at the microplot scale. After a fire, the most visible residue is ash, and its physical and chemical properties are of major importance because the majority of the nutrients available to plants reside there. It is therefore important to study ash characteristics in order to observe the type and amount of elements available to plants. This study focuses on the spatial variability of two nutrients essential to plant growth, Ca2+ and Mg2+, released from ash after a wildfire at the microplot scale. The high variability of fire impacts, even over small distances, creates many problems when mapping the effects of fire on the release of the studied elements. Hence, it is a major priority to identify the least biased interpolation method in order to predict the variable under study with high accuracy. The aim of this study is to map the effects of wildfire on these elements released from ash at the microplot scale, testing several interpolation methods. Sixteen interpolation techniques were tested: Inverse Distance Weighting (IDW), with weights of 1, 2, 3, 4 and 5; Local Polynomial, with powers of 1 (LP1) and 2 (LP2); Polynomial Regression (PR); and Radial Basis Functions, specifically Spline with Tension (SPT), Completely Regularized Spline (CRS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ), and Thin Plate Spline (TPS).
Geostatistical methods from the kriging family were also tested, mainly Ordinary Kriging (OK), Simple Kriging (SK) and Universal Kriging (UK). The interpolation techniques were assessed through the Mean Error (ME) and Root Mean Square Error (RMSE) obtained from the cross-validation procedure applied to all methods. The fire occurred in Portugal, near an urban area; inside the affected area we designed a grid with dimensions of 9 x 27 m and collected 40 samples. Before modelling the data, we tested their normality with the Shapiro-Wilk test. Since the distributions of Ca2+ and Mg2+ did not follow a Gaussian distribution, we transformed the data logarithmically (Ln). With this transformation the data respected normality, and the spatial distribution was modelled with the transformed data. On average, over the entire plot the ash slurries contained 4371.01 mg/l of Ca2+, with a high coefficient of variation (CV%) of 54.05%. Of all the tested methods, LP1 was the least biased and hence the most accurate for interpolating this element; the most biased was LP2. For Mg2+, considering the entire plot, the ash released in solution on average 1196.01 mg/l, with a CV% of 52.36%, similar to that identified for Ca2+. The best interpolator in this case was SK, and the most biased were LP1 and TPS. Comparing all methods for both elements, the quality of the interpolations was higher for Ca2+. These results allow us to conclude that, to achieve the best prediction, it is necessary to test a wide range of interpolation methods. The best accuracy will permit us to understand with more precision where the studied elements are more available and accessible for plant growth and ecosystem recovery. The spatial pattern of both nutrients is related to ash pH and burn severity, evaluated from ash colour and CaCO3 content. These aspects will also be discussed in the work.
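A hedged sketch of the cross-validation ranking idea, applied here only to IDW with several weights on synthetic data; the coordinates and values are placeholders, not the measurements from the Portuguese plot:

```python
import numpy as np

def idw_predict(xy_obs, z_obs, xy_new, power):
    """Inverse Distance Weighting prediction at new locations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ z_obs) / w.sum(axis=1)

def loo_me_rmse(xy, z, power):
    """Leave-one-out cross-validation: the ME and RMSE used for ranking."""
    preds = np.empty_like(z)
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        preds[i] = idw_predict(xy[mask], z[mask], xy[i:i + 1], power)[0]
    err = preds - z
    return err.mean(), np.sqrt((err ** 2).mean())

# Synthetic stand-in for the 40-sample 9 x 27 m ash grid.
rng = np.random.default_rng(1)
xy = rng.uniform(0, [9, 27], size=(40, 2))
z = 4000 + 800 * np.sin(xy[:, 0]) + 50 * rng.normal(size=40)

for p in (1, 2, 3):
    me, rmse = loo_me_rmse(xy, z, p)
    print(f"IDW power {p}: ME={me:8.2f}  RMSE={rmse:8.2f}")
```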
NASA Astrophysics Data System (ADS)
Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.
2016-05-01
Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer, rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation, and then measures the contribution of diffused energy from previous layers, based on the transmission of the current level, to produce a cumulative radiance that is reflected from a surface and measured at the observer's aperture. A unique set of asymmetry and backscattering phase function parameter calculations is then made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.
Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans
2014-08-01
Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the RF complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on Fourier transform as well as to a B1-maps interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster with a loss of accuracy limited potentially to about 5%.
Control and design heat flux bending in thermal devices with transformation optics.
Xu, Guoqiang; Zhang, Haochun; Jin, Yan; Li, Sen; Li, Yao
2017-04-17
We propose a fundamental approach to controlling heat transfer and heat flux density vectors at arbitrary positions on thermal materials by applying transformation optics. The expressions for heat flux bending are obtained, and the factors influencing them are investigated in both 2D and 3D cloaking schemes. Under certain conditions, more than one degree of freedom of heat flux bending exists, corresponding to the temperature gradients of the 3D domain. The heat flux path can be controlled in arbitrary regions based on the geometrical azimuths, radial positions, and thermal conductivity ratios of the selected materials.
Guided filter and principal component analysis hybrid method for hyperspectral pansharpening
NASA Astrophysics Data System (ADS)
Qu, Jiahui; Li, Yunsong; Dong, Wenqian
2018-01-01
Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution by integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates the spatial information of the HS image. Different from the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel by multiplying it by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
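The GF detail extraction is omitted here; the PCA forward/inverse machinery the method builds on can be sketched as follows, with a synthetic cube and a placeholder detail image and injection gain (both assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, bands = 16, 16, 8
hs = rng.random((h, w, bands))              # stand-in interpolated HS cube

# Forward PCA: flatten bands to vectors and decorrelate.
X = hs.reshape(-1, bands)
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
eigvec = eigvec[:, ::-1]                    # descending variance order
pcs = (X - mean) @ eigvec                   # PC1 is column 0

# Inject spatial detail (here random; in the paper, GF-extracted) into PC1,
# scaled by a tradeoff parameter g.
pan_detail = rng.random(h * w) * 0.1
g = 0.5
pcs_fused = pcs.copy()
pcs_fused[:, 0] += g * pan_detail           # only PC1 is modified

fused = (pcs_fused @ eigvec.T + mean).reshape(h, w, bands)

# Sanity check: with zero injection the inverse PCA is a perfect round trip.
roundtrip = (pcs @ eigvec.T + mean).reshape(h, w, bands)
print(np.max(np.abs(roundtrip - hs)))
```

Because the eigenvector matrix is orthonormal, the inverse transform is its transpose, so modifying only PC1 leaves the spectral content carried by the remaining components untouched.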
Formation of propagation invariant laser beams with anamorphic optical systems
NASA Astrophysics Data System (ADS)
Soskind, Y. G.
2015-03-01
Propagation invariant structured laser beams play an important role in several photonics applications. A majority of propagation invariant beams are usually produced in the form of laser modes emanating from stable laser cavities. This work shows that anamorphic optical systems can be effectively employed to transform input propagation invariant laser beams and produce a variety of alternative propagation invariant structured laser beam distributions with different shapes and phase structures. This work also presents several types of anamorphic lens systems suitable for transforming the input laser modes into a variety of structured propagation invariant beams. The transformations are applied to different laser mode types, including Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian field distributions. The influence of the relative azimuthal orientation between the input laser modes and the anamorphic optical systems on the resulting transformed propagation invariant beams is presented as well.
NASA Astrophysics Data System (ADS)
Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.
2017-11-01
Harmonics pose a great threat to the safe and economical operation of power grids. Therefore, it is critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for electrical power harmonic analysis. However, the picket-fence (barrier) effect produced by the algorithm itself and the spectral leakage caused by asynchronous sampling often affect the accuracy of harmonic analysis. This paper examines a new approach to harmonic analysis based on deriving modifier formulas for frequency, phase angle, and amplitude, utilizing the Nuttall-Kaiser window double-spectral-line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
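The Nuttall-Kaiser correction formulas are not reproduced in the abstract; the double-spectral-line principle can be illustrated with the simpler, well-known Hann-window two-line formula, where the two largest neighbouring spectral lines around a peak recover the fractional bin offset:

```python
import numpy as np

fs, n = 1000.0, 1024
f_true = 50.3 * fs / n                   # true frequency, between FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t)

# Windowed FFT magnitude; locate the highest spectral line.
X = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(X))

# Hann-window double-line interpolation for the fractional offset delta.
if X[k + 1] >= X[k - 1]:
    delta = (2 * X[k + 1] - X[k]) / (X[k] + X[k + 1])
else:
    delta = -(2 * X[k - 1] - X[k]) / (X[k] + X[k - 1])
f_est = (k + delta) * fs / n

print(f"true {f_true:.4f} Hz, estimated {f_est:.4f} Hz")
```

Without the correction, the frequency estimate would be quantized to the bin spacing fs/n (the picket-fence effect); the two-line formula removes almost all of that bias for an off-bin tone.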
Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging
NASA Astrophysics Data System (ADS)
Wang, Zong; Shi, Wenjiao
2017-03-01
Soil particle-size fractions (psf) are basic physical variables that frequently need to be predicted accurately for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including the additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of the Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the ratio of correctly predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest, among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK produced reasonable and smooth transitions when mapping soil psf according to the environmental factors. The study gives insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve interpolation performance.
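For reference, the centered log-ratio (clr) transform used by clr_OK, and its back-transform, can be sketched as follows; the sand/silt/clay fractions are hypothetical samples:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of compositions (rows sum to 100%)."""
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

def clr_inv(y, total=100.0):
    """Back-transform clr coordinates to a composition summing to `total`."""
    e = np.exp(y)
    return total * e / e.sum(axis=-1, keepdims=True)

# Sand/silt/clay fractions (%) for two hypothetical soil samples.
psf = np.array([[60.0, 30.0, 10.0],
                [20.0, 50.0, 30.0]])
y = clr(psf)
print(y.sum(axis=1))          # clr coordinates sum to zero for each sample
print(clr_inv(y))             # round trip recovers the fractions
```

Kriging the clr coordinates and back-transforming guarantees that predicted fractions stay positive and sum to 100%, which raw per-fraction kriging cannot do.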
Measurements of Wave Power in Wave Energy Converter Effectiveness Evaluation
NASA Astrophysics Data System (ADS)
Berins, J.; Berins, J.; Kalnacs, A.
2017-08-01
The article is devoted to a low-cost technical solution for measuring water-surface gravity-wave oscillations and to the theoretical justification of the calculated oscillation power. The solution combines technologies such as lasers, digital processing of web-camera images, interpolation of a function defined at irregular intervals, and the discrete Fourier transform for calculating the oscillation spectrum.
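A sketch of the processing chain described: surface elevation sampled at irregular instants (camera frames), interpolated to a uniform grid, then transformed to obtain the oscillation spectrum. The 5 Hz test wave, durations, and sample counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2.0
t_irr = np.sort(rng.uniform(0, T, 400))           # irregular sample times
eta = np.sin(2 * np.pi * 5.0 * t_irr)             # surface elevation

# Resample to a uniform grid, then apply the discrete Fourier transform.
n = 512
t_uni = np.linspace(0, T, n, endpoint=False)
eta_uni = np.interp(t_uni, t_irr, eta)

spec = np.abs(np.fft.rfft(eta_uni)) / n
freqs = np.fft.rfftfreq(n, d=T / n)
print(f"dominant frequency: {freqs[np.argmax(spec[1:]) + 1]:.2f} Hz")
```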
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from the background using different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the original weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength of a 3x3 window area is calculated, with the regional signal intensity ratio of the source images used as a matching measure. The extraction method and decision mode for the details are determined by the decision-making module, and the fusion quality is closely related to the threshold set in this module. Compared with the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluation results shows that this method improves the fusion quality; moreover, it is effective not only for individual images but also for large numbers of images.
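The quadratic interpolation step for refining the threshold can be sketched as a parabola fitted through three nodes; the toy objective and its minimum location are hypothetical:

```python
def quad_min(x1, x2, x3, f1, f2, f3):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3) --
    the standard quadratic-interpolation refinement step."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Endpoints and midpoint of a threshold search interval as the three nodes.
f = lambda t: (t - 0.37) ** 2 + 1.0       # toy objective, minimum at 0.37
a, b = 0.0, 1.0
m = 0.5 * (a + b)
t_best = quad_min(a, m, b, f(a), f(m), f(b))
print(t_best)   # ~0.37 (exact for a quadratic objective, up to rounding)
```

For a non-quadratic fusion-quality objective, the step would be iterated, re-bracketing around the current best threshold.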
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naunyka, V. N.; Shepelevich, V. V., E-mail: vasshep@inbox.ru
2011-05-15
The mutual transformation of light waves in the case of their simultaneous diffraction from a bulk reflection phase hologram, which was formed in a cubic photorefractive crystal of the 4-bar 3m symmetry class, has been studied. The indicator surfaces of the polarization-optimized values of the relative intensity of the object wave, which make it possible to determine the amplification of this wave for any crystal cut, are constructed. The linear polarization azimuths at which the energy exchange between the light waves reaches a maximum are found numerically for crystals of different cuts.
Zernike Basis to Cartesian Transformations
NASA Astrophysics Data System (ADS)
Mathar, R. J.
2009-12-01
The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
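The power-series form of the radial polynomials tabulated in the paper follows the standard closed formula, which can be sketched directly:

```python
from math import factorial

def zernike_radial(n, m, r):
    """Radial Zernike polynomial R_n^m(r) evaluated as a power series
    in the radial distance r, for n - |m| even and non-negative."""
    m = abs(m)
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * r ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

print(zernike_radial(2, 0, 0.5))   # R_2^0 = 2r^2 - 1, so -0.5 here
print(zernike_radial(4, 0, 1.0))   # R_n^m(1) = 1 for all valid (n, m)
```

Multiplying these radial polynomials by the angular bases (cos/sin in azimuth for the circular case) yields the full Zernike functions whose Cartesian transformations the paper derives.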
Modeling of earthquake ground motion in the frequency domain
NASA Astrophysics Data System (ADS)
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation model is assessed using data from the SMART-1 array in Taiwan. The interpolation model provides an effective method to estimate ground motion at a site using recordings from stations located up to several kilometers away. Reliable estimates of differential ground motion are restricted to relatively limited ranges of frequencies and inter-station spacings.
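The inverse-DFT simulation idea in topic (i) can be sketched in a few lines: draw Fourier phase differences, accumulate them into phase angles, attach them to an amplitude spectrum, and inverse-transform. The amplitude envelope and phase-difference distribution below are illustrative placeholders, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024                       # number of time samples
dt = 0.02                      # sampling interval (s)
freqs = np.fft.rfftfreq(n, d=dt)

# Illustrative Fourier amplitude envelope (placeholder for a fitted model).
amp = freqs * np.exp(-freqs / 2.0)

# Phase differences between adjacent frequency components, drawn at random;
# in the paper these are simulated conditional on the Fourier amplitude.
dphi = rng.normal(loc=-0.3, scale=1.0, size=freqs.size)
phase = np.cumsum(dphi)        # accumulate differences into phase angles

spectrum = amp * np.exp(1j * phase)
accel = np.fft.irfft(spectrum, n=n)   # simulated acceleration time history
```

The same machinery, with the amplitudes and phase angles interpolated separately between stations, underlies the spatial-interpolation model of topic (ii).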
Geometric processing of digital images of the planets
NASA Technical Reports Server (NTRS)
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
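As a sketch of the projection step, the forward Sinusoidal (Sanson-Flamsteed) equal-area transform for a spherical body is a two-line formula; the adaptive part of the algorithm then computes it exactly only where distortion is high and interpolates pixel locations elsewhere. The function name here is illustrative, not from the described software.

```python
import math

def sinusoidal_forward(lat_deg, lon_deg, lon0_deg=0.0, radius=1.0):
    """Forward Sinusoidal equal-area projection for a sphere:
    x = R * (lon - lon0) * cos(lat), y = R * lat (angles in radians)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    return radius * lon * math.cos(lat), radius * lat
```

Because cos(lat) varies slowly at low latitudes, pixel positions there can be interpolated between exactly transformed anchor pixels, which is the essence of the variable-computation scheme described above.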
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor); Smith, Jeffrey Scott (Inventor); Aronstein, David L. (Inventor)
2012-01-01
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for simulating propagation of an electromagnetic field, performing phase retrieval, or sampling a band-limited function. A system practicing the method generates transformed data using a discrete Fourier transform which samples a band-limited function f(x) without interpolating or modifying received data associated with the function f(x), wherein an interval between repeated copies in a periodic extension of the function f(x) obtained from the discrete Fourier transform is associated with a sampling ratio Q, defined as a ratio of a sampling frequency to a band-limited frequency, and wherein Q is assigned a value between 1 and 2 such that substantially no aliasing occurs in the transformed data, and retrieves a phase in the received data based on the transformed data, wherein the phase is used as feedback to an optical system.
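One way to picture a transform that samples a band-limited function on an output grid governed by a sampling ratio Q, without interpolating the input data, is a "matrix" DFT whose exponent carries Q explicitly. This is an illustrative construction under assumed conventions, not the patented implementation:

```python
import numpy as np

def matrix_dft(x, Q, M):
    """Evaluate a DFT of the N input samples on M output points whose
    spacing is controlled by the sampling ratio Q (illustrative sketch)."""
    N = len(x)
    n = np.arange(N) - N / 2
    m = np.arange(M) - M / 2
    F = np.exp(-2j * np.pi * np.outer(m, n) / (N * Q))
    return F @ x
```

With Q between 1 and 2, the repeated copies in the periodic extension of a band-limited input are separated enough that substantially no aliasing occurs, at the cost of a denser output grid.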
Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob
2010-02-01
Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. 
It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
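The replacement step — interpolating new values across the segmented metal trace directly in the sinogram — can be sketched as a per-projection linear interpolation (the simplest of the interpolation schemes the study compares; the MRF segmentation itself is not reproduced here):

```python
import numpy as np

def inpaint_sinogram(sino, metal_mask):
    """Replace metal-flagged detector samples in each projection (row)
    by linear interpolation from the neighbouring unflagged samples."""
    out = sino.astype(float).copy()
    cols = np.arange(sino.shape[1])
    for i in range(sino.shape[0]):
        bad = metal_mask[i]
        if bad.any() and (~bad).any():
            out[i, bad] = np.interp(cols[bad], cols[~bad], out[i, ~bad])
    return out
```

Working row-by-row in the sinogram, rather than reprojecting a reconstructed image, is what makes the approach efficient and resolution-preserving.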
Spontaneous and persistent currents in superconductive and mesoscopic structures (Review)
NASA Astrophysics Data System (ADS)
Kulik, I. O.
2004-07-01
We briefly review aspects of superconductive persistent currents in Josephson junctions of the S/I/S, S/O/S and S/N/S types, focusing on the origin of jumps in the current versus phase dependences, and discuss in more detail the persistent and the "spontaneous" currents in Aharonov-Bohm mesoscopic and nanoscopic (macromolecular) structures. A fixed-number-of-electrons mesoscopic or macromolecular conducting ring is shown to be unstable against structural transformation removing spatial symmetry (in particular, azimuthal periodicity) of its electron-lattice Hamiltonian. In the case when the transformation is blocked by strong coupling to an external azimuthally symmetric environment, the system becomes bistable in its electronic configuration at a certain number of electrons. Under such a condition, the persistent current has a nonzero value even at an (almost) zero applied Aharonov-Bohm flux and results in very high magnetic susceptibility dM/dH at small nonzero fields, followed by an oscillatory dependence at larger fields. We tentatively assume that previously observed oscillatory magnetization in cyclic metallo-organic molecules by Gatteschi et al. can be attributed to persistent currents. If this proves correct, it may present an opportunity for (and, more generally, macromolecular cyclic structures may suggest the possibility of) engineering quantum computational tools based on the Aharonov-Bohm effect in ballistic nanostructures and macromolecular cyclic aggregates.
Measurement of the UH-60A Hub Large Rotor Test Apparatus Control System Stiffness
NASA Technical Reports Server (NTRS)
Kufeld, Robert M.
2014-01-01
The purpose of this report is to provide details of the measurement of the control system stiffness of the UH-60A rotor hub mounted on the Large Rotor Test Apparatus (UH-60A/LRTA). The UH-60A/LRTA was used in the 40- by 80-Foot Wind Tunnel to complete the full-scale wind tunnel test portion of the NASA/Army UH-60A Airloads Program. This report describes the LRTA control system and highlights the differences between the LRTA and the UH-60A aircraft. The test hardware, test setup, and test procedures are also described. Sample results are shown, including the azimuthal variation of the measured control system stiffness for three different loadings and two different dynamic actuator settings. Finally, the azimuthal stiffness is converted to fixed-system values using multi-blade transformations for input to comprehensive rotorcraft prediction codes.
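The multi-blade (Coleman-type) coordinate transformation that converts rotating-frame, azimuth-dependent quantities into fixed-system collective and cyclic components can be sketched as below; the exact convention used for the report's stiffness conversion may differ.

```python
import numpy as np

def multiblade_transform(blade_values, azimuths_rad):
    """Collective, cosine-cyclic and sine-cyclic fixed-system components
    of per-blade values sampled at the given blade azimuth angles."""
    q = np.asarray(blade_values, dtype=float)
    psi = np.asarray(azimuths_rad, dtype=float)
    n = q.size
    collective = q.mean()
    cosine = (2.0 / n) * np.sum(q * np.cos(psi))
    sine = (2.0 / n) * np.sum(q * np.sin(psi))
    return collective, cosine, sine
```

For a four-blade rotor with equally spaced azimuths, the transform exactly recovers the collective and first-harmonic cyclic content of the rotating-frame signal.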
A Measuring System for Well Logging Attitude and a Method of Sensor Calibration
Ren, Yong; Wang, Yangdong; Wang, Mijian; Wu, Sheng; Wei, Biao
2014-01-01
This paper proposes an approach for measuring the azimuth angle and tilt angle of underground drilling tools with a MEMS three-axis accelerometer and a three-axis fluxgate sensor. A mathematical model of the well logging attitude angle is deduced by combining space coordinate transformations and algebraic equations. In addition, a system implementation plan for the inclinometer is given, which features low cost, small volume and a high level of integration. To address sensor and assembly errors, this paper analyses their sources, establishes two mathematical error models and calculates the related parameters to achieve sensor calibration. The results show that this scheme can obtain stable and high-precision azimuth and tilt angles of drilling tools, with the deviation of the former less than ±1.4° and the deviation of the latter less than ±0.1°. PMID:24859028
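The attitude computation from the two sensor triads can be sketched as follows: roll and pitch from the gravity vector, then a tilt-compensated azimuth from the magnetic vector. Axis and sign conventions here are assumptions and vary between tools; this is not the paper's exact model.

```python
import math

def attitude_from_sensors(ax, ay, az, mx, my, mz):
    """Azimuth (deg) and tilt (deg) from accelerometer (ax, ay, az) and
    fluxgate (mx, my, mz) triads. Conventions assumed: gravity reads +1 g
    on the z axis when the tool hangs vertical."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Tilt-compensate the magnetic vector into the horizontal plane.
    mxh = (mx * math.cos(pitch) + my * math.sin(roll) * math.sin(pitch)
           + mz * math.cos(roll) * math.sin(pitch))
    myh = my * math.cos(roll) - mz * math.sin(roll)
    azimuth = math.degrees(math.atan2(-myh, mxh) % (2.0 * math.pi))
    g = math.sqrt(ax * ax + ay * ay + az * az)
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    return azimuth, tilt
```

A level, north-pointing tool should return azimuth 0° and tilt 0° under these conventions.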
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Boychuk, T. M.; Mincer, O. P.; Bodnar, G. B.; Kushnerick, L. Ya.; Savich, V. O.
2013-12-01
The bases of the method of spatial-frequency filtering of the phase distributions of blood plasma pellicles are given here. A model of the optical-anisotropic properties of the protein networks of blood plasma pellicles, with regard to the linear and circular birefringence of albumin and globulin crystals, is proposed. Comparative studies were performed of the effectiveness of direct polarization mapping of the azimuth images of blood plasma pellicle layers versus spatial-frequency polarimetry of laser radiation transformed by branched and hole-like optically anisotropic networks of blood plasma pellicles. On the basis of complex statistical, correlation and fractal analysis of the filtered spatial-frequency polarization azimuth maps of the blood plasma pellicle structure, a set of criteria for the change in birefringence of the protein networks caused by prostate cancer was identified and substantiated.
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolated continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies introduced by the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
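The "optimized cubic" interpolation referred to above is commonly realized as the Keys cubic-convolution kernel; a minimal 1-D version, with the usual a = -0.5 parameter (a hedged stand-in for the review's exact derivation), is:

```python
import math

def keys_cubic(x, a=-0.5):
    """Keys cubic-convolution interpolation kernel (support |x| < 2)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp_cubic(samples, t):
    """Interpolate uniformly spaced samples at fractional position t,
    clamping indices at the edges."""
    i = math.floor(t)
    idx = lambda k: min(max(i + k, 0), len(samples) - 1)
    return sum(samples[idx(k)] * keys_cubic(t - (i + k)) for k in range(-1, 3))
```

The kernel is interpolating (it passes exactly through the samples at integer positions) and reproduces linear ramps in the interior, unlike pixel replication.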
Application of cokriging techniques for the estimation of hail size
NASA Astrophysics Data System (ADS)
Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier
2018-01-01
There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation by using an interpolation technique for reconstructed image points can be reduced. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large size hologram of NK x NK sample points is synthesized by K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.
The angular difference function and its application to image registration.
Keller, Yosi; Shkolnisky, Yoel; Averbuch, Amir
2005-06-01
The estimation of large motions without prior knowledge is an important problem in image registration. In this paper, we present the angular difference function (ADF) and demonstrate its applicability to rotation estimation. The ADF of two functions is defined as the integral of their spectral difference along the radial direction. It is efficiently computed using the pseudopolar Fourier transform, which computes the discrete Fourier transform of an image on a near spherical grid. Unlike other Fourier-based registration schemes, the suggested approach does not require any interpolation. Thus, it is more accurate and significantly faster.
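The core idea — comparing the angular distribution of spectral energy rather than interpolating in the spatial domain — can be illustrated with a crude Cartesian stand-in for the pseudopolar transform. The actual ADF integrates the spectral difference radially on a pseudopolar grid; binning |FFT| magnitude by angle, as below, is only a rough approximation:

```python
import numpy as np

def angular_profile(img, nbins=180):
    """Angular histogram of |FFT| magnitude (a crude Cartesian stand-in
    for radial integration on a pseudopolar grid)."""
    h, w = img.shape
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    F[h // 2, w // 2] = 0.0                      # drop the DC term
    y, x = np.indices((h, w))
    theta = np.arctan2(y - h // 2, x - w // 2) % np.pi
    bins = (theta / np.pi * nbins).astype(int) % nbins
    return np.bincount(bins.ravel(), weights=F.ravel(), minlength=nbins)

def estimate_rotation(img1, img2, nbins=180):
    """Rotation (mod 180 deg) via circular cross-correlation of profiles."""
    p1, p2 = angular_profile(img1, nbins), angular_profile(img2, nbins)
    c = np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p2))).real
    return int(np.argmax(c)) * 180.0 / nbins
```

Because the Fourier magnitude is translation-invariant, the angular comparison isolates rotation, which is the property the ADF exploits.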
Exocentric direction judgements in computer-generated displays and actual scenes
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.
1989-01-01
One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated + or - 22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.
NASA Astrophysics Data System (ADS)
Khan, Aamir; Shah, Rehan Ali; Shuaib, Muhammad; Ali, Amjad
2018-06-01
The effects of magnetic field dependent (MFD) thermosolutal convection and MFD viscosity on the fluid dynamics are investigated between squeezing discs rotating with different velocities. The unsteady constitutive expressions of mass conservation, the modified Navier-Stokes equations, Maxwell's equations and MFD thermosolutal convection are coupled as a system of ordinary differential equations. The corresponding solutions for the transformed radial and azimuthal momentum, as well as for the azimuthal and axial induced magnetic field equations, are determined; the MHD pressure and the torque which the fluid exerts on the upper disc are also derived and discussed in detail. In the case of smooth discs, the self-similar equations are solved using the Homotopy Analysis Method (HAM) with appropriate initial guesses and auxiliary parameters to produce an algorithm with accelerated and assured convergence. The validity and accuracy of the HAM results are verified by comparison with the numerical solver package bvp4c. It is shown that increasing the magnetic Reynolds number decreases the magnetic field distributions, the fluid temperature, and the axial and tangential velocities. The azimuthal and axial components of the magnetic field show opposite behavior as the MFD viscosity increases. Applications of the study include automotive magneto-rheological shock absorbers, novel aircraft landing gear systems, heating and cooling processes, biological sensor systems and biological prostheses.
Asten, M.W.; Stephenson, William J.; Hartzell, Stephen
2015-01-01
The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum becomes excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra to Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalised SPAC method (KRSPAC) where, instead of averaging sets of coherency versus frequency spectra and then fitting to a model SPAC spectrum, we interpolate each spectrum to coherency versus k.r, where k and r are wavenumber and station separation respectively, and r may be different for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J0(kr). The normalization changes with each iteration, since k is a function of frequency and phase velocity and hence is updated at each iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (Site STGA), where an asymmetric array having station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus, despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
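The kr-normalization at the heart of KRSPAC can be sketched numerically: model coherency spectra for two pairs with different separations collapse onto the single curve J0(kr) once each is re-expressed against k·r. Here a constant phase velocity is assumed for simplicity; in the actual method k is updated each iteration as the velocity model changes.

```python
import numpy as np

def bessel_j0(x):
    """J0 via its integral representation (midpoint rule), avoiding any
    external dependency: J0(x) = (1/pi) * int_0^pi cos(x sin t) dt."""
    t = (np.arange(2000) + 0.5) * np.pi / 2000
    return np.cos(np.outer(np.atleast_1d(x), np.sin(t))).mean(axis=-1)

freq = np.linspace(0.5, 10.0, 200)     # Hz
v = 500.0                              # assumed constant phase velocity, m/s

# Model SPAC coherency c(f) = J0(k r), k = 2*pi*f/v, for two separations r.
pairs = {100.0: bessel_j0(2 * np.pi * freq * 100.0 / v),
         180.0: bessel_j0(2 * np.pi * freq * 180.0 / v)}

# Re-express each spectrum against k*r and stack on a common axis.
kr_axis = np.linspace(1.2, 12.0, 300)
stacked = np.mean([np.interp(kr_axis, 2 * np.pi * freq * r / v, coh)
                   for r, coh in pairs.items()], axis=0)
```

After normalization the stacked curve matches J0(kr) regardless of the (here, irregular) station separations, which is what permits asymmetric arrays.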
The guidance methodology of a new automatic guided laser theodolite system
NASA Astrophysics Data System (ADS)
Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua
2008-12-01
Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations, for their part, require reflective targets and thus cannot provide noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, noncontact measurement with high precision and efficiency. It comprises two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation, so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated, and the laser motorized theodolites move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and the emphasis is then laid on the guidance methodology by which the laser points from the theodolites are moved towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. With the view field angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained.
The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolite moves accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, so that it accelerates the measuring process and implements approximate guidance in place of manual operations. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of the laser points.
Global boundary flattening transforms for acoustic propagation under rough sea surfaces.
Oba, Roger M
2010-07-01
This paper introduces a conformal transform of an acoustic domain under a one-dimensional, rough sea surface onto a domain with a flat top. This non-perturbative transform can include many hundreds of wavelengths of the surface variation. The resulting two-dimensional, flat-topped domain allows direct application of any existing, acoustic propagation model of the Helmholtz or wave equation using transformed sound speeds. Such a transform-model combination applies where the surface particle velocity is much slower than sound speed, such that the boundary motion can be neglected. Once the acoustic field is computed, the bijective (one-to-one and onto) mapping permits the field interpolation in terms of the original coordinates. The Bergstrom method for inverse Riemann maps determines the transform by iterated solution of an integral equation for a surface matching term. Rough sea surface forward scatter test cases provide verification of the method using a particular parabolic equation model of the Helmholtz equation.
Chiral symmetry breaking and the spin content of the ρ and ρ′ mesons
NASA Astrophysics Data System (ADS)
Glozman, L. Ya.; Lang, C. B.; Limmer, M.
2011-11-01
Using interpolators with different SU(2)_L × SU(2)_R transformation properties we study the chiral symmetry and spin contents of the ρ and ρ′ mesons in lattice simulations with dynamical quarks. A ratio of couplings of the q̄γ_iτq and q̄σ_{0i}τq interpolators to a given meson state at different resolution scales tells one about the degree of chiral symmetry breaking in the meson wave function at these scales. Using a Gaussian gauge-invariant smearing of the quark fields in the interpolators, we are able to extract the chiral content of mesons up to the infrared resolution of ∼1 fm. In the ground-state ρ meson the chiral symmetry is strongly broken, with comparable contributions of both the (0,1)+(1,0) and (1/2,1/2)_b chiral representations, the former being the leading contribution. In contrast, in the ρ′ meson the degree of chiral symmetry breaking is manifestly smaller and the leading representation is (1/2,1/2)_b. Using a unitary transformation from the chiral basis to the ^{2S+1}L_J basis, we are able to define and measure the angular momentum content of mesons in the rest frame. This definition is different from the traditional one, which uses parton distributions in the infinite momentum frame. The ρ meson is practically a ^3S_1 state with no obvious trace of a "spin crisis". The ρ′ meson has a sizeable contribution of the ^3D_1 wave, which implies that the ρ′ meson cannot be considered a pure radial excitation of the ρ meson.
NASA Astrophysics Data System (ADS)
Eakin, Caroline M.; Rychert, Catherine A.; Harmon, Nicholas
2018-02-01
Mantle anisotropy beneath mid-ocean ridges and oceanic transforms is key to our understanding of seafloor spreading and underlying dynamics of divergent plate boundaries. Observations are sparse, however, given the remoteness of the oceans and the difficulties of seismic instrumentation. To overcome this, we utilize the global distribution of seismicity along transform faults to measure shear wave splitting of over 550 direct S phases recorded at 56 carefully selected seismic stations worldwide. Applying this source-side splitting technique allows for characterization of the upper mantle seismic anisotropy, and therefore the pattern of mantle flow, directly beneath seismically active transform faults. The majority of the results (60%) return nulls (no splitting), while the non-null measurements display clear azimuthal dependency. This is best simply explained by anisotropy with a near vertical symmetry axis, consistent with mantle upwelling beneath oceanic transforms as suggested by numerical models. It appears therefore that the long-term stability of seafloor spreading may be associated with widespread mantle upwelling beneath the transforms creating warm and weak faults that localize strain to the plate boundary.
Single-Chip FPGA Azimuth Pre-Filter for SAR
NASA Technical Reports Server (NTRS)
Gudim, Mimi; Cheng, Tsan-Huei; Madsen, Soren; Johnson, Robert; Le, Charles T-C; Moghaddam, Mahta; Marina, Miguel
2005-01-01
A field-programmable gate array (FPGA) on a single lightweight, low-power integrated-circuit chip has been developed to implement an azimuth pre-filter (AzPF) for a synthetic-aperture radar (SAR) system. The AzPF is needed to enable more efficient use of data-transmission and data-processing resources: In broad terms, the AzPF reduces the volume of SAR data by effectively reducing the azimuth resolution, without loss of range resolution, during times when end users are willing to accept lower azimuth resolution as the price of rapid access to SAR imagery. The data-reduction factor is selectable at a decimation factor, M, of 2, 4, 8, 16, or 32 so that users can trade resolution against processing and transmission delays. In principle, azimuth filtering could be performed in the frequency domain by use of fast-Fourier-transform processors. However, in the AzPF, azimuth filtering is performed in the time domain by use of finite-impulse-response filters. The reason for choosing the time-domain approach over the frequency-domain approach is that the time-domain approach demands less memory and a lower memory-access rate. The AzPF operates on the raw digitized SAR data. The AzPF includes a digital in-phase/quadrature (I/Q) demodulator. In general, an I/Q demodulator effects a complex down-conversion of its input signal followed by low-pass filtering, which eliminates undesired sidebands. In the AzPF case, the I/Q demodulator takes offset video range echo data to the complex baseband domain, ensuring preservation of signal phase through the azimuth pre-filtering process. In general, in an SAR I/Q demodulator, the intermediate frequency (fI) is chosen to be a quarter of the range-sampling frequency and the pulse-repetition frequency (fPR) is chosen to be a multiple of fI. The AzPF also includes a polyphase spatial-domain pre-filter comprising four weighted integrate-and-dump filters with programmable decimation factors and overlapping phases. 
To prevent aliasing of signals, the bandwidth of the AzPF is made 80 percent of fPR/M. The choice of four as the number of overlapping phases is justified by prior research in which it was shown that a filter of length 4M can effect an acceptable transfer function. The figure depicts prototype hardware comprising the AzPF and ancillary electronic circuits. The hardware was found to satisfy performance requirements in real-time tests at a sampling rate of 100 MHz.
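A time-domain sketch of the pre-filter — a length-4M low-pass FIR along the azimuth axis with bandwidth 80 percent of fPR/M, followed by decimation by M — might look like the following. The actual AzPF uses four overlapping weighted integrate-and-dump branches rather than this generic windowed-sinc design:

```python
import numpy as np

def azimuth_prefilter(data, M):
    """Low-pass filter raw SAR data along azimuth (axis 0) and decimate
    by M. Filter length 4*M; passband width 0.8*fPR/M, i.e. a normalized
    half-bandwidth of 0.4/M when the azimuth sample rate is fPR."""
    taps = 4 * M
    n = np.arange(taps) - (taps - 1) / 2.0
    fc = 0.4 / M                           # normalized half-bandwidth
    h = np.sinc(2 * fc * n) * np.hamming(taps)
    h /= h.sum()                           # unit DC gain
    filtered = np.apply_along_axis(
        lambda col: np.convolve(col, h, mode="same"), 0, data)
    return filtered[::M]                   # azimuth decimation
```

The length-4M choice mirrors the cited finding that a filter of length 4M achieves an acceptable transfer function with four overlapping phases.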
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
NASA Astrophysics Data System (ADS)
Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas
2017-03-01
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and inexpensiveness. The transverse processes as skeletal landmarks are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate-spline interpolation of a landmark-based registration between the transverse processes that we localized both in the patient's ultrasound and in an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate-spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. For validation, we used ground truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization depicting the 3D deformation of the patient's spine when compared to ground truth CT.
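A minimal 2-D analogue of the thin-plate-spline transform field (the paper's field is 3-D, and the helper names here are illustrative) solves the standard TPS linear system for landmark correspondences and then evaluates the warp anywhere:

```python
import numpy as np

def _tps_kernel(d):
    # U(r) = r^2 log r, with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = d**2 * np.log(d)
    return np.where(d > 0, u, 0.0)

def tps_fit(src, dst):
    """Solve for thin-plate-spline weights mapping 2-D landmarks src -> dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    """Evaluate the fitted warp at arbitrary 2-D points."""
    src, pts = np.asarray(src, float), np.asarray(pts, float)
    U = _tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]
```

The spline interpolates the landmarks exactly while bending smoothly in between, which is why extra computational anchor points are an effective way to control the field away from the transverse processes.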
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
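The core interpolation step — per height level, linear in time between consecutive launches onto a 1-minute grid — is a one-liner per level; the numbers below are illustrative, not ARM data:

```python
import numpy as np

launch_times = np.array([0.0, 360.0, 720.0])   # minutes since 00:00 UTC
temp = np.array([[20.0, 10.0, -5.0],           # level 0 at each launch
                 [18.0,  9.0, -6.0]])          # level 1 at each launch

grid = np.arange(0.0, 721.0, 1.0)              # 1-minute time grid
interp = np.vstack([np.interp(grid, launch_times, temp[k])
                    for k in range(temp.shape[0])])
# interp[0, 180] is halfway between launches 0 and 1 -> 15.0
```

In the real VAP this loop runs over all 332 height levels and every state variable, with the MWR scaling applied to relative humidity afterwards.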
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
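For reference, the shift-invariant Kaiser-Bessel kernel at the heart of the conventional gridding method can be sketched as follows. The width J = 6 and shape parameter beta = 13.9 (roughly the commonly cited optimum for 2x oversampling) are assumptions here, not values taken from this paper:

```python
import numpy as np

# Kaiser-Bessel (KB) gridding kernel: I0(beta*sqrt(1-(2u/J)^2)) inside
# the support |u| < J/2, zero outside, normalized to 1 at u = 0.
# J and beta are illustrative choices, not Fessler's tabulated optima.
def kaiser_bessel(u, J=6, beta=13.9):
    """Evaluate the KB kernel at offsets u (in grid units)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    inside = np.abs(u) < J / 2.0
    arg = np.zeros_like(u)
    arg[inside] = np.sqrt(1.0 - (2.0 * u[inside] / J) ** 2)
    return np.where(inside, np.i0(beta * arg) / np.i0(beta), 0.0)
```

In gridding reconstruction each non-Cartesian sample is spread onto the J nearest Cartesian grid points with these weights before an inverse FFT and density/apodization correction.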
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Jensen; Toto, T.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
Satellite on-board real-time SAR processor prototype
NASA Astrophysics Data System (ADS)
Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François
2017-11-01
A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically in real time; indeed, the first SAR images were processed optically. The optical Fourier processor architecture provides inherent parallel computing capabilities, allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one dimension for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance, i.e., without a processing bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented.
Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine-transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: First, rectangular feature templates are constructed, centered on Harris corners extracted from the mask, and motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then, the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts of the images were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
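The final step, bilinear intensity interpolation under an affine transform, can be sketched generically as below. This is a standard backward warp with invented parameter names, not the paper's full registration pipeline:

```python
import numpy as np

# Backward affine warp with bilinear intensity interpolation: each output
# pixel is mapped back through the 2x2 affine matrix A and translation t,
# and its intensity is blended from the four neighbouring source pixels.
def affine_bilinear_warp(img, A, t):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = A[0, 0] * xs + A[0, 1] * ys + t[0]
    src_y = A[1, 0] * xs + A[1, 1] * ys + t[1]
    x0 = np.clip(np.floor(src_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(src_y).astype(int), 0, h - 2)
    fx = np.clip(src_x - x0, 0.0, 1.0)
    fy = np.clip(src_y - y0, 0.0, 1.0)
    # bilinear blend of the four neighbouring pixels
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

img = np.arange(16.0).reshape(4, 4)
same = affine_bilinear_warp(img, np.eye(2), (0.0, 0.0))   # identity warp
half = affine_bilinear_warp(img, np.eye(2), (0.5, 0.0))   # half-pixel shift
```

The identity transform reproduces the image exactly; a half-pixel shift averages horizontal neighbours, which is the sub-pixel behaviour the registration relies on.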
Fast restoration approach for motion blurred image based on deconvolution under the blurring paths
NASA Astrophysics Data System (ADS)
Shi, Yu; Song, Jie; Hua, Xia
2015-12-01
For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed at about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that removes motion blur by rotating the blurred image along its blurring path. The computational time is thus reduced sharply by using the one-dimensional fast Fourier transform within a one-dimensional Richardson-Lucy method. In order to obtain accurate transformation results, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective in reducing motion blur along the blurring paths.
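The core update being accelerated is the Richardson-Lucy iteration, which reduces to 1-D FFT-based convolutions along each blurring path. A hedged sketch, using circular convolution and invented parameter names:

```python
import numpy as np

# 1-D Richardson-Lucy deconvolution with FFT-based (circular) convolution.
# psf is the 1-D blur kernel, same length as the signal, centred at index 0
# and normalized to sum to 1. Correlation with the flipped PSF is done via
# the conjugate spectrum.
def richardson_lucy_1d(blurred, psf, iterations=30):
    psf_ft = np.fft.fft(psf)
    estimate = np.full_like(blurred, blurred.mean())   # flat positive start
    for _ in range(iterations):
        reblurred = np.real(np.fft.ifft(np.fft.fft(estimate) * psf_ft))
        ratio = blurred / np.maximum(reblurred, 1e-12)
        correction = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_ft)))
        estimate = estimate * correction               # multiplicative update
    return estimate
```

With a normalized PSF the iteration conserves total flux, and with a delta PSF it recovers the input after one step, two quick sanity checks on the implementation.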
High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform
Chan, Kenny K. H.; Tang, Shuo
2010-01-01
The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB, concurrently with a 30-fold decrease in processing time, compared to the fast Fourier transform with cubic spline interpolation method. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex vivo squid eye. PMID:21258551
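The comparison method referred to above resamples the spectrum from uniform wavelength to uniform wavenumber before the FFT. A simplified sketch of that pipeline, with linear interpolation standing in for the cubic spline and all signal parameters invented:

```python
import numpy as np

# Conventional SD-OCT processing: the spectrometer samples uniformly in
# wavelength, so the spectrum is resampled onto a uniform wavenumber grid
# (k = 2*pi/lambda) before the FFT that yields the depth profile.
def spectrum_to_depth_profile(wavelengths, spectrum):
    k = 2 * np.pi / wavelengths                     # nonuniform in k
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    # np.interp needs ascending abscissae; k decreases as wavelength grows
    spec_k = np.interp(k_uniform, k[::-1], spectrum[::-1])
    return np.abs(np.fft.fft(spec_k))               # depth-resolved magnitude

# A single reflector produces a cosine fringe in k; its depth sets the
# FFT bin where the peak appears.
wl = np.linspace(800e-9, 900e-9, 1024)
k_span = 2 * np.pi / wl.min() - 2 * np.pi / wl.max()
z0 = 50 * 2 * np.pi / k_span                        # place peak near bin 50
profile = spectrum_to_depth_profile(wl, np.cos(z0 * 2 * np.pi / wl))
```

The NFFT approach of the paper avoids this resampling step entirely by evaluating the transform on the nonuniform k samples directly.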
High Grazing Angle and High Resolution Sea Clutter: Correlation and Polarisation Analyses
2007-03-01
the azimuthal correlation. The correlation between the HH and VV sea clutter data is low. A CA-CFAR (cell-average constant false-alarm rate)...to calculate the power spectra of correlation profiles. The frequency interval of the traditional discrete Fourier transform is 1/(NT1) Hz, where N and...sea spikes, the Entropy-Alpha decomposition of sea spikes is shown in Figure 30. The process first locates spikes using a cell-average constant false
NASA Technical Reports Server (NTRS)
Elson, Lee S.; Froidevaux, Lucien
1993-01-01
Fourier analysis has been applied to data obtained from limb viewing instruments on the Upper Atmosphere Research Satellite. A coordinate system rotation facilitates the efficient computation of Fourier transforms in the temporal and longitudinal domains. Fields such as ozone (O3), chlorine monoxide (ClO), temperature, and water vapor have been transformed by this process. The transforms have been inverted to provide maps of these quantities at selected times, providing a method of accurate time interpolation. Maps obtained by this process show evidence of both horizontal and vertical transport of important trace species such as O3 and ClO. An examination of the polar regions indicates that large-scale planetary variations are likely to play a significant role in transporting midstratospheric O3 into the polar regions. There is also evidence that downward transport occurs, providing a means of moving O3 into the polar vortex at lower altitudes. The transforms themselves show the structure and propagation characteristics of wave variations.
"Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li+-benzene
NASA Astrophysics Data System (ADS)
D'Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.
2015-08-01
Quantum and anharmonic effects are investigated in (H2)2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li+-benzene complex increases the ZPE of the system by 5.6 kJ mol-1 to 17.6 kJ mol-1. This ZPE is 42% of the total electronic binding energy of (H2)2-Li+-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li+-benzene is 7.7 kJ mol-1, compared to 12.4 kJ mol-1 for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2-Li+-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li+-benzene PESs are developed. These use a modified Shepard interpolation for the Li+-benzene and H2-Li+-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol-1. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol-1 error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
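For illustration, a natural cubic spline through tabulated pair energies, the kind of interpolant used here for the H2-H2 fragment, can be built from scratch in a few lines. The knot data below are invented, not the ab initio points:

```python
import numpy as np

# Natural cubic spline through knots (x, y): solve the standard
# tridiagonal system for the second derivatives m_i (with m_0 = m_n = 0),
# then evaluate the piecewise cubic on each interval.
def natural_cubic_spline(x, y):
    """Return a callable spline through (x, y)."""
    n = x.size
    h = np.diff(x)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # natural boundary conditions
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)          # second derivatives at the knots

    def evaluate(xq):
        i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
        t = xq - x[i]
        return (y[i]
                + t * ((y[i + 1] - y[i]) / h[i] - h[i] * (2 * m[i] + m[i + 1]) / 6)
                + t * t * (m[i] / 2 + t * (m[i + 1] - m[i]) / (6 * h[i])))
    return evaluate
```

In practice one would tabulate the pair interaction on a grid of intermolecular distances and evaluate the spline inside the PES routine; the spline reproduces linear data exactly and interpolates all knots.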
"Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li(+)-benzene.
D'Arcy, Jordan H; Kolmann, Stephen J; Jordan, Meredith J T
2015-08-21
Quantum and anharmonic effects are investigated in (H2)2-Li(+)-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li(+)-benzene complex increases the ZPE of the system by 5.6 kJ mol(-1) to 17.6 kJ mol(-1). This ZPE is 42% of the total electronic binding energy of (H2)2-Li(+)-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li(+)-benzene is 7.7 kJ mol(-1), compared to 12.4 kJ mol(-1) for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li(+) ion and are more confined in the θ coordinate than in H2-Li(+)-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li(+)-benzene PESs are developed. These use a modified Shepard interpolation for the Li(+)-benzene and H2-Li(+)-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li(+) terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol(-1). Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol(-1) error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)
2010-06-01
the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1,+1]...which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0...the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) a collision-free criterion is developed and
Image registration method for medical image sequences
Gee, Timothy F.; Goddard, James S.
2013-03-26
Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
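The double linear interpolation that makes the fitted responses continuous in source energy and emission angle is ordinary bilinear interpolation between the four surrounding grid points. A sketch with an invented response table (the actual scheme may interpolate the fitted formula parameters rather than raw values):

```python
import numpy as np

# Bilinear ("double linear") interpolation of a tabulated response
# R(E, phi) over a grid of source energies and emission angles.
def bilinear_response(E, phi, E_grid, phi_grid, table):
    i = np.clip(np.searchsorted(E_grid, E) - 1, 0, E_grid.size - 2)
    j = np.clip(np.searchsorted(phi_grid, phi) - 1, 0, phi_grid.size - 2)
    fE = (E - E_grid[i]) / (E_grid[i + 1] - E_grid[i])
    fp = (phi - phi_grid[j]) / (phi_grid[j + 1] - phi_grid[j])
    return ((1 - fE) * (1 - fp) * table[i, j]
            + fE * (1 - fp) * table[i + 1, j]
            + (1 - fE) * fp * table[i, j + 1]
            + fE * fp * table[i + 1, j + 1])

E_grid = np.array([1.0, 2.0, 3.0])        # MeV (illustrative)
phi_grid = np.array([10.0, 20.0])         # degrees (illustrative)
table = np.outer(E_grid, phi_grid)        # invented response values
```

Bilinear interpolation is exact for any response that is a product of linear factors in E and phi, which makes the scheme easy to verify.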
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
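The gridding method approximates evaluation of an image's discrete Fourier transform at arbitrary nonuniform frequency locations (e.g., along polar rings for rotational alignment). For reference, the exact but slow direct evaluation that gridding approximates is simply:

```python
import numpy as np

# Direct 2-D nonuniform DFT: evaluate the transform of img at arbitrary
# frequency points, O(image size) per point. Gridding-based NUFFTs
# approximate exactly these sums at a fraction of the cost.
def direct_nudft2(img, freqs):
    """freqs: (M, 2) array of (fx, fy) in cycles per pixel."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.empty(freqs.shape[0], dtype=complex)
    for m, (fx, fy) in enumerate(freqs):
        out[m] = np.sum(img * np.exp(-2j * np.pi * (fx * xs + fy * ys)))
    return out
```

At integer grid frequencies this agrees with `np.fft.fft2`, which provides a direct correctness check.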
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform
Yang, Zhengfan; Penczek, Pawel A.
2008-01-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The effects of seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data for orbit 150, row 28 were selected, with illumination parameters varying from 43 deg to 64 deg in azimuth and from 30 deg to 36 deg in solar elevation, respectively. The IMAGE-100 system was used for digital processing of the LANDSAT data. Original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range, and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.
NASA Astrophysics Data System (ADS)
Samlan, C. T.; Naik, Dinesh N.; Viswanathan, Nirmal K.
2016-09-01
Discovered in 1813, the conoscopic interference pattern observed due to light propagating through a crystal, kept between crossed polarizers, shows isochromates and isogyres, respectively containing information about the dynamic and geometric phase acquired by the beam. We propose and demonstrate a closed-fringe Fourier analysis method to disentangle the isogyres from the isochromates, leading us to the azimuthally varying geometric phase and its manifestation as isogyres. This azimuthally varying geometric phase is shown to be the underlying mechanism for the spin-to-orbital angular momentum conversion observed in a diverging optical field propagating through a z-cut uniaxial crystal. We extend the formalism to study the optical activity mediated uniaxial-to-biaxial transformation due to a weak transverse electric field applied across the crystal. Closely associated with the phase and polarization singularities of the optical field, the formalism enables us to understand crystal optics in a new way, paving the way to anticipate several emerging phenomena.
Samlan, C T; Naik, Dinesh N; Viswanathan, Nirmal K
2016-09-14
Discovered in 1813, the conoscopic interference pattern observed due to light propagating through a crystal, kept between crossed polarizers, shows isochromates and isogyres, respectively containing information about the dynamic and geometric phase acquired by the beam. We propose and demonstrate a closed-fringe Fourier analysis method to disentangle the isogyres from the isochromates, leading us to the azimuthally varying geometric phase and its manifestation as isogyres. This azimuthally varying geometric phase is shown to be the underlying mechanism for the spin-to-orbital angular momentum conversion observed in a diverging optical field propagating through a z-cut uniaxial crystal. We extend the formalism to study the optical activity mediated uniaxial-to-biaxial transformation due to a weak transverse electric field applied across the crystal. Closely associated with the phase and polarization singularities of the optical field, the formalism enables us to understand crystal optics in a new way, paving the way to anticipate several emerging phenomena.
NASA Astrophysics Data System (ADS)
Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang
2017-07-01
Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and convolve it with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy in a shorter computing time.
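The convolution-plus-superposition idea can be sketched on a uniform time grid; the paper's logarithmic grid and hamlogsinc reconstruction are omitted, and all waveforms below are invented:

```python
import numpy as np

# Full-waveform TEM response as (basis impulse response) convolved with the
# transmitter current waveform, plus the late-time tails of earlier bipolar
# half-cycles superposed with alternating sign. period_samples is the
# half-cycle spacing in samples; 0 disables the superposition.
def full_waveform_response(basis, waveform, dt, period_samples=0, n_prev=3):
    resp = np.convolve(basis, waveform)[: basis.size] * dt
    total = resp.copy()
    for c in range(1, n_prev + 1):
        shift = c * period_samples
        if period_samples == 0 or shift >= resp.size:
            break
        tail = np.zeros_like(resp)
        tail[: resp.size - shift] = resp[shift:]   # earlier cycle's tail
        total += (-1) ** c * tail                  # bipolar sign alternation
    return total

basis = np.exp(-np.linspace(0.0, 5.0, 100))        # invented decay response
out = full_waveform_response(basis, np.array([100.0]), 0.01)          # delta drive
out2 = full_waveform_response(basis, np.array([100.0]), 0.01, period_samples=50)
```

With a unit-area delta-like waveform the convolution returns the basis response unchanged, and switching on the superposition subtracts the previous half-cycle's tail, as expected for bipolar excitation.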
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = N_p N_n / (N_p + N_n)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
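For concreteness, computing the P-factor from valence nucleon counts takes only a few lines. The magic numbers used to derive N_p and N_n are the standard ones, and this helper is our illustration, not the authors' procedure:

```python
# P-factor P = Np*Nn/(Np + Nn), with Np and Nn the numbers of valence
# protons and neutrons counted from the nearest closed shell. The magic
# numbers below are the standard ones (184 as the predicted next closure).
MAGIC = [2, 8, 20, 28, 50, 82, 126, 184]

def valence(n):
    """Number of valence nucleons: distance to the nearest magic number."""
    return min(abs(n - m) for m in MAGIC)

def p_factor(Z, N):
    Np, Nn = valence(Z), valence(N)
    return Np * Nn / (Np + Nn) if (Np + Nn) else 0.0
```

For example, an actinide with Z = 92 and N = 142 has Np = 10, Nn = 16, giving P = 160/26, roughly 6.2.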
Development of adaptive observation strategy using retrospective optimal interpolation
NASA Astrophysics Data System (ADS)
Noh, N.; Kim, S.; Song, H.; Lim, G.
2011-12-01
Retrospective optimal interpolation (ROI) is a method used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational cost of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates of severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated. This implies that the modified eigenvectors show the error distribution of the control variables updated by assimilating observations, so we can estimate the effects of specific observations. To verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.
Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition
NASA Technical Reports Server (NTRS)
Kenwright, David; Lane, David
1995-01-01
An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
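Point location in a tetrahedron is analytic because the barycentric coordinates come from a single 3x3 linear solve, and the same coordinates serve as linear interpolation weights for the velocity. A minimal sketch:

```python
import numpy as np

# Barycentric coordinates of point p in the tetrahedron (v0, v1, v2, v3):
# solve [v1-v0 | v2-v0 | v3-v0] b = p - v0, then the weights are
# (1 - sum(b), b1, b2, b3). The point is inside iff all four weights
# are non-negative; the weights also blend the vertex velocities.
def barycentric(p, v0, v1, v2, v3):
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    b = np.linalg.solve(T, p - v0)
    return np.array([1.0 - b.sum(), *b])

def inside(p, verts, eps=1e-12):
    return bool(np.all(barycentric(p, *verts) >= -eps))

verts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
```

This is why tetrahedral decomposition avoids the Newton-Raphson iteration needed for point location in curvilinear hexahedral cells.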
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heartbeats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heartbeats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
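The O(N)-per-sample update at the heart of such a transform is easy to sketch: each new nonuniform sample (t_n, x_n) adds its phase contribution to all N frequency bins, with no resampling to a uniform grid. This plain iterative nonuniform DFT omits the published RFT's recursive windowing details:

```python
import numpy as np

# Iterative nonuniform DFT: maintain N frequency bins and fold in each
# new sample (t, x) with one O(N) update, regardless of the spacing of
# the sample times.
class IterativeNUDFT:
    def __init__(self, freqs_hz):
        self.f = np.asarray(freqs_hz, dtype=float)
        self.F = np.zeros(self.f.size, dtype=complex)

    def update(self, t, x):
        self.F += x * np.exp(-2j * np.pi * self.f * t)  # O(N) per sample
        return self.F

# Feeding uniform samples of a 5 Hz cosine recovers the usual DFT value.
nudft = IterativeNUDFT([5.0])
for n in range(32):
    F = nudft.update(n / 32.0, np.cos(2 * np.pi * 5 * n / 32.0))
```

For irregular heartbeat times the same `update` call applies unchanged, which is the property that makes the approach attractive for heart rate variability spectra.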
Azimuth Check: An Analysis of Military Transformation in the Republic of Korea-is it Sufficient
2009-12-01
States (US) forces organized, trained, and equipped to deal with a complex and ambiguous collapse of the Democratic People's Republic of ... Yong-sup Han, "Analyzing South Korea's Defense Reform 2020," The Korean Journal of Defense Analysis 18, no. 1 (Spring 2006): 111, 115. ... In 2006, US Secretary of Defense Donald Rumsfeld and ROK Defense Minister Yoon Kwang-Ung jointly announced transition of wartime operational control
Using an electronic compass to determine telemetry azimuths
Cox, R.R.; Scalf, J.D.; Jamison, B.E.; Lutz, R.S.
2002-01-01
Researchers typically collect azimuths from known locations to estimate locations of radiomarked animals. Mobile, vehicle-mounted telemetry receiving systems frequently are used to gather azimuth data. Use of mobile systems typically involves estimating the vehicle's orientation to grid north (vehicle azimuth), recording an azimuth to the transmitter relative to the vehicle azimuth from a fixed rosette around the antenna mast (relative azimuth), and subsequently calculating an azimuth to the transmitter (animal azimuth). We incorporated electronic compasses into standard null-peak antenna systems by mounting the compass sensors atop the antenna masts and evaluated the precision of this configuration. This system increased efficiency by eliminating vehicle orientation and calculations to determine animal azimuths and produced estimates of precision (azimuth SD=2.6 deg., SE=0.16 deg.) similar to systems that required orienting the mobile system to grid north. Using an electronic compass increased efficiency without sacrificing precision and should produce more accurate estimates of locations when marked animals are moving or when vehicle orientation is problematic.
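The calculation that the electronic compass eliminates is simple modular addition of the vehicle and relative azimuths; a one-line sketch (function name is illustrative):

```python
def animal_azimuth(vehicle_azimuth_deg, relative_azimuth_deg):
    """Animal azimuth = vehicle azimuth + relative azimuth, wrapped to [0, 360)."""
    return (vehicle_azimuth_deg + relative_azimuth_deg) % 360.0
```

With the compass sensor atop the antenna mast, the reading is already the animal azimuth, so this step (and its error sources) disappears.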
User’s Manual for the Modular Analysis-Package Libraries ANAPAC and TRANL
1977-09-01
Keywords: Computer software; Fourier transforms; Computer software library; Interpolation software; Digitized data. ... disregarded to give the user a simplified plot. (b) The last digit of ISPACE determines the type of line to be drawn, provided KODE is not negative. If the last digit of ISPACE is: 0, a solid line is drawn; 1, a dashed line (- - -); 2, a dotted line (....); 3, a dash-dot line is drawn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1988-07-01
A new semiempirical method that significantly simplifies atomic mass systematics and which provides a method for making mass predictions by linear interpolation is discussed in the context of the nuclear valence space. In certain regions complicated patterns of mass systematics in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences.
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
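The effect of a normal-distribution transformation on skewed contaminant data can be illustrated with the Box-Cox case, using SciPy's maximum-likelihood estimate of the transform parameter. The data here are synthetic, not the site data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # strongly right-skewed, all positive

# Box-Cox with the lambda parameter chosen by maximum likelihood.
transformed, lam = stats.boxcox(raw)

skew_before = stats.skew(raw)
skew_after = stats.skew(transformed)
```

For hot-spot-dominated data the transformed values are far closer to normal, which is the precondition for the kriging step; Box-Cox requires strictly positive inputs.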
Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington
Johnson, Kenneth H.
2016-09-27
This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes which are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (from 2009 through part of 2012) but average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants—hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14, all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were converted to log-normal distributions and these transformed values were averaged for each well location. The log-normally linearized well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area but limited to avoid unrepresentative extrapolation, using the minimum curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were interpreted by interpolating the log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at an appropriate established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage. 
These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) at the top and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells’ saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented. “Upper limit” plume delineations were calculated for each contaminant using the same procedure as the “average” plume extent except with values at each well that are set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; “lower limit” plumes are calculated at a 5-percent confidence limit around the geometric mean. 
These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time varying concentrations and analytical errors. The plume delineations developed in this study are similar to the previous plume descriptions developed by U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understandings of contaminant impact or remediation.
Wang, Yan; Li, Jingwen; Sun, Bing; Yang, Jian
2016-01-01
Azimuth resolution of airborne stripmap synthetic aperture radar (SAR) is restricted by the azimuth antenna size. Conventionally, a higher azimuth resolution is achieved by employing alternate modes that steer the beam in azimuth to enlarge the synthetic antenna aperture. However, if a data set of a certain region, consisting of multiple tracks of airborne stripmap SAR data, is available, the azimuth resolution of a specific small region of interest (ROI) can be conveniently improved by the novel azimuth super-resolution method introduced in this paper. The proposed azimuth super-resolution method synthesizes the azimuth bandwidth of the data selected from multiple discontinuous tracks and provides a magnifier-like function with which the ROI can be zoomed in with a higher azimuth resolution than that of the original stripmap images. A detailed derivation of the azimuth super-resolution method, including the steps of two-dimensional dechirping, residual video phase (RVP) removal, data stitching and data correction, is provided. The restrictions of the proposed method are also discussed. Lastly, the presented approach is evaluated via both single- and multi-target computer simulations. PMID:27304959
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are introduced first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage on the whole, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judged by the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
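The accuracy/cost trade-off between simple and higher-order kernels can be illustrated in one dimension: against a smooth test function, a cubic spline reduces the maximum interpolation error well below that of linear interpolation, at extra computational cost. A small sketch, not the paper's experiments:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2 * np.pi, 12)    # coarse samples of a smooth signal
y = np.sin(x)
xf = np.linspace(0.0, 2 * np.pi, 400)  # fine evaluation grid

linear = np.interp(xf, x, y)           # linear (tent) kernel
cubic = CubicSpline(x, y)(xf)          # cubic spline (not-a-knot boundaries)

err_linear = np.max(np.abs(linear - np.sin(xf)))
err_cubic = np.max(np.abs(cubic - np.sin(xf)))
```

The cubic error is orders of magnitude smaller here, mirroring the paper's finding that cubic B-spline kernels interpolate best but cost the most time.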
The Canadian Precipitation Analysis (CaPA): Evaluation of the statistical interpolation scheme
NASA Astrophysics Data System (ADS)
Evans, Andrea; Rasmussen, Peter; Fortin, Vincent
2013-04-01
CaPA (Canadian Precipitation Analysis) is a data assimilation system which employs statistical interpolation to combine observed precipitation with gridded precipitation fields produced by Environment Canada's Global Environmental Multiscale (GEM) climate model into a final gridded precipitation analysis. Precipitation is important in many fields and applications, including agricultural water management projects, flood control programs, and hydroelectric power generation planning. Precipitation is a key input to hydrological models, and there is a desire to have access to the best available information about precipitation in time and space. The principal goal of CaPA is to produce this type of information. In order to perform the necessary statistical interpolation, CaPA requires the estimation of a semi-variogram. This semi-variogram is used to describe the spatial correlations between precipitation innovations, defined as the observed precipitation amounts minus the GEM forecasted amounts predicted at the observation locations. Currently, CaPA uses a single isotropic variogram across the entire analysis domain. The present project investigates the implications of this choice by first conducting a basic variographic analysis of precipitation innovation data across the Canadian prairies, with specific interest in identifying and quantifying potential anisotropy within the domain. This focus is further expanded by identifying the effect of storm type on the variogram. The ultimate goal of the variographic analysis is to develop improved semi-variograms for CaPA that better capture the spatial complexities of precipitation over the Canadian prairies. CaPA presently applies a Box-Cox data transformation to both the observations and the GEM data, prior to the calculation of the innovations. The data transformation is necessary to satisfy the normal distribution assumption, but introduces a significant bias. 
The second part of the investigation aims at devising a bias correction scheme based on a moving-window averaging technique. For both the variogram and bias correction components of this investigation, a series of trial runs are conducted to evaluate the impact of these changes on the resulting CaPA precipitation analyses.
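The variographic analysis described above rests on the empirical semivariogram of the precipitation innovations; a minimal sketch of the classical Matheron estimator (illustrative only, not the CaPA code):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Classical estimator: gamma(h) = 0.5 * mean[(z_i - z_j)^2] over all
    station pairs whose separation distance falls in each lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    i, j = np.triu_indices(n, k=1)                 # all unordered pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        m = (d >= bin_edges[b]) & (d < bin_edges[b + 1])
        if m.any():
            gamma[b] = sq[m].mean()
    return gamma
```

An anisotropy check, as proposed for the prairies, would additionally bin the pairs by direction before averaging.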
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen mammographic features more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced through orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
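The coefficient-domain enhancement idea can be caricatured with a one-level Haar transform: amplify the detail (edge) coefficients, then invert. This toy sketch uses a fixed gain where the paper uses an adaptive non-linear operator and a steerable dyadic transform:

```python
import numpy as np

def haar_forward(x):
    """One-level Haar analysis of an even-length 1-D signal."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / 2.0  # approximation (smooth) coefficients
    d = (x[0::2] - x[1::2]) / 2.0  # detail (edge) coefficients
    return a, d

def haar_inverse(a, d):
    """Exact one-level Haar synthesis."""
    x = np.empty(2 * len(a))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

def enhance(x, gain=2.0):
    # Fixed linear gain on details as a stand-in for the paper's
    # adaptive non-linear coefficient operator.
    a, d = haar_forward(x)
    return haar_inverse(a, gain * d)
```

With `gain=1.0` the round trip is exact, which is the perfect-reconstruction property the multiresolution enhancement relies on.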
Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan
2017-08-07
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. 
This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
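Two of the tested allocation rules can be sketched as one-line proportional splits; combined population and areal weighting replaces the area fraction with a population fraction built from finer census units inside the overlap. Function names and the numbers in the usage test are illustrative:

```python
def areal_weighted(count, county_area, overlap_area):
    """Simple areal weighting: allocate a county count to a zone in
    proportion to the overlapping area."""
    return count * overlap_area / county_area

def population_weighted(count, county_pop, overlap_pop):
    """Combined population/areal weighting: allocate in proportion to the
    population (summed from census tracts) lying inside the overlap."""
    return count * overlap_pop / county_pop
```

When population is concentrated in the part of the county inside the zone, the two rules diverge sharply, which is why the population-based split tracks observed mortality better.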
Imaging synthetic aperture radar
Burns, Bryan L.; Cordaro, J. Thomas
1997-01-01
A linear-FM SAR imaging radar method and apparatus to produce a real-time image by first arranging the returned signals into a plurality of subaperture arrays, the columns of each subaperture array having samples of dechirped baseband pulses, and further including a processing of each subaperture array to obtain coarse-resolution in azimuth, then fine-resolution in range, and lastly, to combine the processed subapertures to obtain the final fine-resolution in azimuth. Greater efficiency is achieved because both the transmitted signal and a local oscillator signal mixed with the returned signal can be varied on a pulse-to-pulse basis as a function of radar motion. Moreover, a novel circuit can adjust the sampling location and the A/D sample rate of the combined dechirped baseband signal which greatly reduces processing time and hardware. The processing steps include implementing a window function, stabilizing either a central reference point and/or all other points of a subaperture with respect to doppler frequency and/or range as a function of radar motion, and sorting and compressing the signals using standard Fourier transforms. The stabilization of each processing part is accomplished with vector multiplication using waveforms generated as a function of radar motion wherein these waveforms may be synthesized in integrated circuits. Stabilization of range migration as a function of doppler frequency by simple vector multiplication is a particularly useful feature of the invention; as is stabilization of azimuth migration by correcting for spatially varying phase errors prior to the application of an autofocus process.
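The dechirp-then-FFT idea at the heart of such linear-FM processing can be demonstrated in a few lines: multiplying the return by the conjugate of the reference chirp turns each delay τ into a constant-frequency tone at f = -Kτ (plus a residual video phase that does not affect the magnitude spectrum), so range compression reduces to a Fourier transform. The parameter values below are illustrative, not taken from the patent:

```python
import numpy as np

fs = 1000.0                  # sample rate, Hz (illustrative)
N = 1000
t = np.arange(N) / fs
K = 1.0e5                    # chirp rate, Hz/s (illustrative)
tau = 0.002                  # target delay, s

ref = np.exp(1j * np.pi * K * t**2)            # reference linear-FM chirp
echo = np.exp(1j * np.pi * K * (t - tau)**2)   # delayed point-target return

# Dechirp: sample-wise mix with the conjugate reference gives a pure tone
# at f = -K*tau, plus the constant residual video phase exp(j*pi*K*tau^2).
dechirped = np.conj(ref) * echo

spec = np.abs(np.fft.fft(dechirped))
f = np.fft.fftfreq(N, 1.0 / fs)
f_peak = f[np.argmax(spec)]                    # recovered tone frequency
```

Reading the delay off the FFT peak is the one-target version of the range-compression step the subaperture processing builds on.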
Marto, J A; White, F M; Seldomridge, S; Marshall, A G
1995-11-01
Matrix-assisted laser desorption/ionization (MALDI) Fourier transform ion cyclotron resonance mass spectrometry provides for structural analysis of the principal biological phospholipids: glycerophosphatidylcholine, -ethanolamine, -serine, and -inositol. Both positive and negative molecular or quasimolecular ions are generated in high abundance. Isolated molecular ions may be collisionally activated in the source side of a dual trap mass analyzer, yielding fragments serving to identify the polar head group (positive ion mode) and fatty acid side chains (negative ion mode). Azimuthal quadrupolar excitation following collisionally activated dissociation refocuses product ions close to the solenoid axis; subsequent transfer of product ions to the analyzer ion trap allows for high-resolution mass analysis. Cryo-cooling of the sample probe with liquid nitrogen greatly reduces matrix adduction encountered in the negative ion mode.
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Feasibility study on a strain based deflection monitoring system for wind turbine blades
NASA Astrophysics Data System (ADS)
Lee, Kyunghyun; Aihara, Aya; Puntsagdash, Ganbayar; Kawaguchi, Takayuki; Sakamoto, Hiraku; Okuma, Masaaki
2017-01-01
The bending stiffness of wind turbine blades has decreased due to the trend of wind turbine upsizing. Consequently, the risk of blade breakage by hitting the tower has increased. In order to prevent such incidents, this study proposes a deflection monitoring system that can be installed on the blades of already-operating wind turbines. The monitoring system is composed of an estimation algorithm to detect blade deflection and a wireless sensor network as the hardware. As the estimation method for blade deflection, a strain-based estimation algorithm and an objective function for optimal sensor arrangement are proposed. The strain-based estimation algorithm uses a linear correlation between strains and deflections, which can be expressed in the form of a transformation matrix. The objective function includes terms for the strain sensitivity and the condition number of the transformation matrix between strain and deflection. In order to calculate the objective function, a simplified experimental model of the blade is constructed by interpolating the mode shapes of a blade from modal testing. The interpolation method is practical for application to operating wind turbines' blades since it is not necessary to establish a finite element model of a blade. In addition, a wireless sensor network based on open-source hardware is developed. It is installed on a 300 W scale wind turbine and vibration of the blade in operation is investigated.
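The strain-to-deflection mapping and the conditioning term of the sensor-placement objective can be sketched with synthetic calibration data: fit the transformation matrix by least squares, then inspect its condition number. A hypothetical sketch under assumed dimensions (6 strain gauges, 3 deflection points), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
T_true = rng.normal(size=(3, 6))   # deflections (3) as a linear map of strains (6)

# Synthetic, noise-free calibration data: columns are measurement snapshots.
S = rng.normal(size=(6, 100))      # strain measurements
D = T_true @ S                     # corresponding deflections

# Least-squares fit of the transformation matrix: solve S^T X = D^T, T = X^T.
X, *_ = np.linalg.lstsq(S.T, D.T, rcond=None)
T_est = X.T

# Condition number of the fitted map; the sensor-placement objective
# penalizes arrangements that make this large (ill-conditioned inversion).
cond = np.linalg.cond(T_est)
```

With noise-free data and more snapshots than sensors, the fit recovers the map exactly; real strain noise is what makes the conditioning term matter.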
NASA Astrophysics Data System (ADS)
Marqués, Diego; Nuñez, Carmen A.
2015-10-01
We construct an O( d, d) invariant universal formulation of the first-order α'-corrections of the string effective actions involving the dilaton, metric and two-form fields. Two free parameters interpolate between four-derivative terms that are even and odd with respect to a Z 2-parity transformation that changes the sign of the two-form field. The Z 2-symmetric model reproduces the closed bosonic string, and the heterotic string effective action is obtained through a Z 2-parity-breaking choice of parameters. The theory is an extension of the generalized frame formulation of Double Field Theory, in which the gauge transformations are deformed by a first-order generalized Green-Schwarz transformation. This deformation defines a duality covariant gauge principle that requires and fixes the four-derivative terms. We discuss the O( d, d) structure of the theory and the (non-)covariance of the required field redefinitions.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrival). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed even for two time series that are not sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component of the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
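The evenly spaced FFT does not apply here, but the DFT of unevenly sampled data can be evaluated directly and, as the abstract notes, inverted to recover the original data. A minimal sketch of the principle (a direct O(N²) evaluation, not the paper's FORTRAN code):

```python
import numpy as np

def nonuniform_dft(t, x, freqs):
    """Direct DFT of unevenly spaced samples x(t) at arbitrary frequencies."""
    W = np.exp(-2j * np.pi * np.outer(freqs, t))
    return W @ x

def nonuniform_idft(t, X, freqs):
    """Invert the transform by solving the square linear system, recovering
    the sample values at the original (uneven) times."""
    W = np.exp(-2j * np.pi * np.outer(freqs, t))
    return np.linalg.solve(W, X)
```

With N distinct sample times in [0, 1) and the N integer frequencies 0..N-1, the transform matrix is a Vandermonde matrix on the unit circle and the inversion is exact in principle (its conditioning degrades when samples cluster).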
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.
Quantum and anharmonic effects are investigated in (H2)2–Li+–benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2–Li+–benzene complex increases the ZPE of the system by 5.6 kJ/mol to 17.6 kJ/mol. This ZPE is 42% of the total electronic binding energy of (H2)2–Li+–benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2–Li+–benzene is 7.7 kJ/mol, compared to 12.4 kJ/mol for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2–Li+–benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2–Li+–benzene PESs are developed. These use a modified Shepard interpolation for the Li+–benzene and H2–Li+–benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2–H2 interaction. 
Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ/mol. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the “full” and fragment PESs. This suggests that the 1.5 kJ/mol error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2–H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
Press, William H.
2006-01-01
Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155
Autonomous navigation system. [gyroscopic pendulum for air navigation]
NASA Technical Reports Server (NTRS)
Merhav, S. J. (Inventor)
1981-01-01
An inertial navigation system utilizing a servo-controlled two degree of freedom pendulum to obtain specific force components in the locally level coordinate system is described. The pendulum includes a leveling gyroscope and an azimuth gyroscope supported on a two gimbal system. The specific force components in the locally level coordinate system are converted to components in the geographical coordinate system by means of a single Euler transformation. The standard navigation equations are solved to determine longitudinal and lateral velocities. Finally, vehicle position is determined by a further integration.
A new method for generating a hollow Gaussian beam
NASA Astrophysics Data System (ADS)
Wei, Cun; Lu, Xingyuan; Wu, Gaofeng; Wang, Fei; Cai, Yangjian
2014-04-01
The hollow Gaussian beam (HGB) was introduced 10 years ago (Cai et al. in Opt Lett 28:1084, 2003). In this paper, we introduce a new method for generating an HGB by transforming a Laguerre-Gaussian beam with radial index 0 and azimuthal index l into an HGB with mode n = l/2. Furthermore, we report the experimental generation of an HGB based on the proposed method, and we carry out an experimental study of the focusing properties of the generated HGB. Our experimental results agree well with the theoretical predictions.
Improved argument-FFT frequency offset estimation for QPSK coherent optical systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
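The coarse-FFT-plus-fine-interpolation idea can be sketched generically. This is a minimal illustration of refining an FFT peak estimate by interpolated DFT, not the paper's argument-FFT algorithm; the parabolic fit on the log-magnitude spectrum and the Hann window are assumptions.

```python
import numpy as np

def estimate_freq(x, fs):
    """Coarse FFT peak search refined by parabolic interpolation of the
    log-magnitude spectrum around the peak bin. Assumes the peak is not
    at an edge bin of the searched half-spectrum."""
    N = len(x)
    X = np.abs(np.fft.fft(x * np.hanning(N)))
    k = int(np.argmax(X[1 : N // 2])) + 1       # coarse peak (skip DC bin)
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)   # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / N
```

The refinement recovers a frequency that falls between FFT bins to a small fraction of the bin spacing, which is why a short FFT suffices for the fine stage.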
Fresh look at the Abelian and non-Abelian Landau-Khalatnikov-Fradkin transformations
NASA Astrophysics Data System (ADS)
De Meerleer, T.; Dudal, D.; Sorella, S. P.; Dall'Olio, P.; Bashir, A.
2018-04-01
The Landau-Khalatnikov-Fradkin transformations (LKFTs) allow one to interpolate n-point functions between different gauges. We first offer an alternative derivation of these LKFTs for the gauge and fermion fields in the Abelian (QED) case when working in the class of linear covariant gauges. Our derivation is based on the introduction of a gauge-invariant transversal gauge field, which allows a natural generalization of the LKFTs to the non-Abelian (QCD) case. To our knowledge, within this rigorous formalism, this is the first construction of the LKFTs beyond QED. The renormalizability of our setup is guaranteed to all orders. We also offer a direct path-integral derivation in the non-Abelian case, finding full consistency.
Uncertainty relation for the discrete Fourier transform.
Massar, Serge; Spindel, Philippe
2008-05-16
We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^(iφ) VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum-energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
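The commutation relation above is realized by the clock and shift (generalized Pauli) matrices, which can be checked numerically in a few lines; the specific matrix conventions below are one standard choice.

```python
import numpy as np

def clock_shift(d):
    """Clock and shift matrices in dimension d, obeying U V = e^(i*phi) V U
    with phi = 2*pi/d -- the relation considered in the abstract above.
    The shift matrix is the discrete-Fourier conjugate of the clock matrix."""
    omega = np.exp(2j * np.pi / d)
    U = np.diag(omega ** np.arange(d))          # clock: diagonal phases
    V = np.roll(np.eye(d), 1, axis=0)           # shift: cyclic permutation
    return U, V, omega
```

For d = 2 these reduce to the Pauli matrices σ_z and σ_x with phase e^(iπ) = -1, the two-dimensional endpoint of the interpolation the abstract describes.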
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Hagyard, M. J.
1990-01-01
Off-center vector magnetograms which use all three components of the measured field provide the maximum information content from the photospheric field and can provide the most consistent potential field independent of the viewing angle by defining the normal component of the field. The required transformations of the magnetic field vector and the geometric mapping of the observed field in the image plane into the heliographic plane have been described. Here we discuss the total transformation of specific vector magnetograms to detail the problems and procedures that one should be aware of in analyzing observational magnetograms. The effect of the 180-deg ambiguity of the observed transverse field is considered as well as the effect of curvature of the photosphere. Specific results for active regions AR 2684 (September 23, 1980) and AR 4474 (April 26, 1984) from the Marshall Space Flight Center Vector magnetograph are described which point to the need for the heliographic projection in determining the field structure of an active region.
Campbell, Joel F; Lin, Bing; Nehrir, Amin R; Harrison, F Wallace; Obland, Michael D
2014-12-15
An interpolation method is described for range measurements in high-precision altimetry with repeating intensity-modulated continuous-wave (IM-CW) lidar waveforms using binary phase shift keying (BPSK), where the range profile is determined by means of a cross-correlation between the digital form of the transmitted signal and the digitized return signal collected by the lidar receiver. This method reorders the array elements in the frequency domain to convert a repeating synthetic pulse signal into a single highly interpolated pulse. This is then refined using Richardson-Lucy deconvolution to greatly enhance the resolution of the pulse. We show that the sampling resolution and pulse width can be enhanced by about two orders of magnitude using the signal processing algorithms presented, thus breaking the fundamental resolution limit for BPSK modulation of a particular bandwidth and bit rate. We demonstrate the usefulness of this technique for determining cloud and tree-canopy thicknesses far beyond this fundamental limit in a lidar not designed for this purpose.
A modified dual-level algorithm for large-scale three-dimensional Laplace and Helmholtz equation
NASA Astrophysics Data System (ADS)
Li, Junpu; Chen, Wen; Fu, Zhuojia
2018-01-01
A modified dual-level algorithm is proposed in this article. With the help of the dual-level structure, the fully populated interpolation matrix on the fine level is transformed into a locally supported sparse matrix, which resolves the severe ill-conditioning and excessive storage requirements resulting from a fully populated interpolation matrix. The kernel-independent fast multipole method is adopted to expedite the solution of the linear equations on the coarse level. Numerical experiments with up to 2 million fine-level nodes have been carried out successfully. It is noted that the proposed algorithm merely needs to place 2-3 coarse-level nodes per wavelength in each direction to obtain a reasonable solution, which is almost down to the minimum requirement allowed by Shannon's sampling theorem. In the real human head model example, it is observed that the proposed algorithm can simulate computationally very challenging exterior high-frequency harmonic acoustic wave propagation up to 20,000 Hz.
Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)
NASA Astrophysics Data System (ADS)
Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo
2016-02-01
Superresolution optical fluctuation imaging (SOFI) is a superresolution fluorescence microscopy technique which enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge here is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used to tackle this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
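The core of Fourier-transform interpolation is zero-padding the centered spectrum, which exactly interpolates a band-limited periodic signal onto a finer grid. A minimal 1-D sketch (images pad both axes; for simplicity the even-length Nyquist bin is not split here):

```python
import numpy as np

def fourier_interpolate(a, factor):
    """Upsample a 1-D periodic signal by an integer factor by zero-padding
    its centered FFT -- the kind of Fourier interpolation used to shrink
    the effective pixel size. Original samples are reproduced exactly
    at stride `factor` in the output."""
    n, M = len(a), factor * len(a)
    A = np.fft.fftshift(np.fft.fft(a))
    left = M // 2 - n // 2                      # keep zero frequency centered
    A = np.pad(A, (left, M - n - left))         # zeros at high frequencies
    return np.fft.ifft(np.fft.ifftshift(A)).real * factor
```

Because the padding adds only zero-amplitude high frequencies, the scheme is exact for the frequencies present in the data rather than an approximation like polynomial resampling.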
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows one to take into account landmark localization errors. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
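The relationship between interpolating and approximating thin-plate splines can be sketched in the isotropic scalar case: a smoothing parameter on the diagonal of the kernel matrix relaxes exact interpolation. This is a minimal sketch of the standard TPS system, not the paper's full anisotropic-error formulation.

```python
import numpy as np

def tps_fit(pts, vals, lam=0.0):
    """2-D thin-plate spline f(x) = sum_i w_i U(|x - p_i|) + a0 + a1*x + a2*y
    with kernel U(r) = r^2 log r. lam = 0 gives the interpolating spline;
    lam > 0 gives an approximating (smoothing) spline."""
    def U(r):
        r = np.where(r > 0, r, 1.0)             # U(0) = 0 since 1^2 log 1 = 0
        return r * r * np.log(r)

    n = len(pts)
    K = U(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])       # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + lam * np.eye(n)             # smoothing enters on the diagonal
    A[:n, n:] = P
    A[n:, :n] = P.T                             # side conditions P^T w = 0
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    w, a = sol[:n], sol[n:]

    def f(x):
        x = np.asarray(x, float)
        return U(np.linalg.norm(x - pts, axis=-1)) @ w + a[0] + a[1] * x[0] + a[2] * x[1]
    return f
```

With lam = 0 the spline passes through every landmark value; increasing lam trades that exactness for smoothness, which is the mechanism that absorbs landmark localization errors.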
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
Interpolation of Water Quality Along Stream Networks from Synoptic Data
NASA Astrophysics Data System (ADS)
Lyon, S. W.; Seibert, J.; Lembo, A. J.; Walter, M. T.; Gburek, W. J.; Thongs, D.; Schneiderman, E.; Steenhuis, T. S.
2005-12-01
Effective catchment management requires water quality monitoring that identifies major pollutant sources and transport and transformation processes. While traditional monitoring schemes involve regular sampling at fixed locations in the stream, there is interest in synoptic or `snapshot' sampling to quantify water quality throughout a catchment. This type of sampling enables insights into biogeochemical behavior throughout a stream network at low-flow conditions. Since baseflow concentrations are temporally persistent, they are indicative of the health of the ecosystems. A major problem with snapshot sampling is the lack of analytical techniques to represent the spatially distributed data in a manner that is 1) easily understood, 2) representative of the stream network, and 3) capable of being used to develop land management scenarios. This study presents a kriging application using the landscape composition of the contributing area along a stream network to define a new distance metric. This allows locations that are more `similar' to stay spatially close together while less similar locations `move' further apart. We analyze a snapshot sampling campaign consisting of 125 manually collected grab samples during a summer recession flow period in the Townbrook Research Watershed. The watershed is located in the Catskill region of New York State and represents the mixed forest-agriculture land uses of the region. Our initial analysis indicated that stream nutrient (nitrogen and phosphorus) and chemical (major cations and anions) concentrations are controlled by the composition of landscape characteristics (landuse classes and soil types) surrounding the stream. Based on these relationships, an intuitively defined distance metric is developed by combining the traditional distance between observations and the relative difference in composition of contributing area.
This metric is used to interpolate between the sampling locations with traditional geostatistical techniques (semivariograms and ordinary kriging). The resulting interpolations provide continuous stream nutrient and chemical concentrations with reduced kriging RMSE (i.e., the interpolation fits the actual data better) relative to interpolation performed without path restriction to the stream channel (i.e., the current default for most geostatistical packages) or performed with an in-channel, Euclidean distance metric (i.e., `as the fish swims' distance). In addition to being quantifiably better, the new metric also produces maps of stream concentrations that match expected continuous stream concentrations based on expert knowledge of the watershed. This analysis and its resulting stream concentration maps provide a representation of spatially distributed synoptic data that can be used to quantify water quality for more effective catchment management that focuses on pollutant sources and transport and transformation processes.
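The idea of a composition-aware distance can be sketched simply. This is a simplified stand-in for the paper's metric: the multiplicative combination rule, the alpha weight, and the use of inverse-distance weighting in place of ordinary kriging are all assumptions made for illustration.

```python
import numpy as np

def similarity_distance(xy, comp, alpha=1.0):
    """Pairwise distances that grow with differences in contributing-area
    composition, so 'similar' sites stay close while dissimilar sites
    effectively move apart."""
    spatial = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    dissim = np.linalg.norm(comp[:, None] - comp[None, :], axis=-1)
    return spatial * (1.0 + alpha * dissim)

def idw(dists, vals, p=2.0):
    """Inverse-distance weighting in the modified metric, used here as a
    lightweight stand-in for ordinary kriging."""
    w = 1.0 / np.maximum(dists, 1e-12) ** p
    return (w @ vals) / w.sum()
```

Sites with the same landscape composition keep their plain spatial separation; sites that differ in composition are pushed apart, so they receive less weight in the interpolation even when they are physically nearby.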
Characterizing bars in low surface brightness disc galaxies
NASA Astrophysics Data System (ADS)
Peters, Wesley; Kuzio de Naray, Rachel
2018-05-01
In this paper, we use B-band, I-band, and 3.6 μm azimuthal light profiles of four low surface brightness galaxies (LSBs; UGC 628, F568-1, F568-3, F563-V2) to characterize three bar parameters: length, strength, and corotation radius. We employ three techniques to measure the radius of the bars, including a new method using the azimuthal light profiles. We find comparable bar radii between the I-band and 3.6 μm for all four galaxies when using our azimuthal light profile method, and that our bar lengths are comparable to those in high surface brightness galaxies (HSBs). In addition, we find the bar strengths for our galaxies to be smaller than those for HSBs. Finally, we use Fourier transforms of the B-band, I-band, and 3.6 μm images to characterize the bars as either `fast' or `slow' by measuring the corotation radius via phase profiles. When using the B- and I-band phase crossings, we find three of our galaxies have faster than expected relative bar pattern speeds for galaxies expected to be embedded in centrally dense cold dark matter haloes. When using the B-band and 3.6 μm phase crossings, we find more ambiguous results, although the relative bar pattern speeds are still faster than expected. Since we find a very slow bar in F563-V2, we are confident that we are able to differentiate between fast and slow bars. Finally, we find no relation between bar strength and relative bar pattern speed when comparing our LSBs to HSBs.
Spatiotemporal Interpolation Methods for Solar Event Trajectories
NASA Astrophysics Data System (ADS)
Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe
2018-05-01
This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
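The simplest of the four techniques, MBR-Interpolation, can be sketched as linear interpolation of bounding-box coordinates between two reported time-geometry pairs. The axis-aligned (xmin, ymin, xmax, ymax) layout is an assumption for illustration; the actual module also handles complex and filament polygons.

```python
import numpy as np

def interpolate_mbr(mbr_a, mbr_b, t):
    """Linearly interpolate two minimum bounding rectangles
    (xmin, ymin, xmax, ymax) at fraction t in [0, 1] between their
    report times, producing the additional geometry for an enriched
    trajectory."""
    a, b = np.asarray(mbr_a, float), np.asarray(mbr_b, float)
    return (1.0 - t) * a + t * b
```

Interpolating the corner coordinates directly keeps the result a valid rectangle, since each min stays below its corresponding max when both inputs are valid.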
A bivariate rational interpolation with a bi-quadratic denominator
NASA Astrophysics Data System (ADS)
Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu
2006-10-01
In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in a broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within the parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelastic reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model, with more than a 12× reduction in the number of states relative to the original model, is able to accurately predict the system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelastic reduced-order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies the consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelastic controller synthesis and novel vehicle design.
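The model-interpolation step can be sketched for a 1-D parameter slice: once congruence transformation has made the state bases consistent, the system matrices at an off-grid flight condition follow from elementwise linear interpolation between the two neighboring grid models. The dict-of-matrices layout below is an assumption for illustration.

```python
import numpy as np

def interp_rom(grid, roms, q):
    """Linearly interpolate reduced-order model matrices between the two
    neighboring points of a sorted 1-D flight-parameter grid. Only valid
    when the models share a consistent state basis, as congruence
    transformation ensures."""
    i = int(np.clip(np.searchsorted(grid, q) - 1, 0, len(grid) - 2))
    t = (q - grid[i]) / (grid[i + 1] - grid[i])
    return {k: (1.0 - t) * roms[i][k] + t * roms[i + 1][k] for k in roms[i]}
```

Without consistent states this elementwise blend would mix unrelated coordinate frames and produce meaningless dynamics, which is exactly why the congruence-transformation step precedes interpolation.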
An architecture for consolidating multidimensional time-series data onto a common coordinate grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shippert, Tim; Gaustad, Krista
Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
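The "series of one-dimensional transformations" strategy can be sketched for a 2-D field: regrid along one dimension at a time. Here `np.interp` is a stand-in for whatever per-dimension transform method a real datastream would configure.

```python
import numpy as np

def regrid_2d(data, src_x, src_y, dst_x, dst_y):
    """Map data sampled at (src_y, src_x) onto (dst_y, dst_x) by two
    successive 1-D linear interpolations, one per dimension."""
    # pass 1: interpolate along x for every row
    tmp = np.array([np.interp(dst_x, src_x, row) for row in data])
    # pass 2: interpolate along y for every (already regridded) column
    return np.array([np.interp(dst_y, src_y, col) for col in tmp.T]).T
```

Because each pass only touches one dimension, dimensions with different character (e.g., time vs. height) can each get their own transform method, which is harder to arrange with a single multidimensional interpolator.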
Coherent field propagation between tilted planes.
Stock, Johannes; Worku, Norman Girma; Gross, Herbert
2017-10-01
Propagating electromagnetic light fields between nonparallel planes is of special importance, e.g., within the design of novel computer-generated holograms or the simulation of optical systems. In contrast to the extensively discussed evaluation between parallel planes, the diffraction-based propagation of light onto a tilted plane is more burdensome, since discrete fast Fourier transforms cannot be applied directly. In this work, we propose a quasi-fast algorithm (O(N³ log N)) that deals with this problem. Based on a proper decomposition into three rotations, the vectorial field distribution is calculated on a tilted plane using the spectrum of plane waves. The algorithm works on equidistant grids, so neither nonuniform Fourier transforms nor an explicit complex interpolation is necessary. The proposed algorithm is discussed in detail and applied to several examples of practical interest.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework.
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
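One of the applications above, video deinterlacing, illustrates the 1-D decomposition concretely: the missing scan lines of a field are filled by interpolation down each column. The sketch below uses plain linear interpolation as the 1-D step; DMCGI would replace it with the registration-based 1-D control grid interpolator along the same decomposition.

```python
import numpy as np

def deinterlace_field(field, height):
    """Reconstruct a full frame from a single even-line video field by 1-D
    linear interpolation down each column (a baseline 1-D step, not the
    paper's registration-based interpolator)."""
    src = np.arange(0, height, 2)               # rows present in the field
    dst = np.arange(height)
    return np.array([np.interp(dst, src, col) for col in field.T]).T
```

Each column is an independent 1-D problem, so the work parallelizes trivially and the per-sample cost stays low, which is the efficiency argument the abstract makes for consumer hardware.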
Attitude-error compensation for airborne down-looking synthetic-aperture imaging lidar
NASA Astrophysics Data System (ADS)
Li, Guang-yuan; Sun, Jian-feng; Zhou, Yu; Lu, Zhi-yong; Zhang, Guo; Cai, Guang-yu; Liu, Li-ren
2017-11-01
Target-coordinate transformation in the lidar spot of the down-looking synthetic-aperture imaging lidar (SAIL) was performed, and the attitude errors were deduced in the process of imaging, according to the principle of the airborne down-looking SAIL. The influence of the attitude errors on the imaging quality was analyzed theoretically. A compensation method for the attitude errors was proposed and theoretically verified. An airborne down-looking SAIL experiment was performed and yielded the same results. A point-by-point error-compensation method for solving the azimuthal-direction space-dependent attitude errors was also proposed.
NASA Technical Reports Server (NTRS)
Rummel, R.; Sjoeberg, L.; Rapp, R. H.
1978-01-01
A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.
NASA Astrophysics Data System (ADS)
Tapimo, Romuald; Tagne Kamdem, Hervé Thierry; Yemele, David
2018-03-01
A discrete spherical harmonics method is developed for the radiative transfer problem in an inhomogeneous polarized planar atmosphere illuminated at the top by collimated sunlight while the bottom reflects the radiation. The method expands both the Stokes vector and the phase matrix in a finite series of generalized spherical functions, and the resulting vector radiative transfer equation is expressed in a set of polar directions. Hence, the polarized characteristics of the radiance within the atmosphere at any polar direction and azimuthal angle can be determined without linearization and/or interpolations. The spatial dependence of the problem is handled using the spectral Chebyshev method. The emergent and transmitted radiative intensity and the degree of polarization are predicted for both Rayleigh and Mie scattering. The discrete spherical harmonics method predictions for an optically thin atmosphere using 36 streams are found to be in good agreement with benchmark literature results. The maximum deviation between the proposed method and literature results for polar directions |μ| ≥ 0.1 is less than 0.5% and 0.9% for Rayleigh and Mie scattering, respectively. These deviations for directions close to zero are about 3% and 10% for Rayleigh and Mie scattering, respectively.
NASA Astrophysics Data System (ADS)
Micheletty, P. D.; Day, G. N.; Quebbeman, J.; Carney, S.; Park, G. H.
2016-12-01
The Upper Colorado River Basin above Lake Powell is a major source of water supply for 25 million people and provides irrigation water for 3.5 million acres. Approximately 85% of the annual runoff is produced from snowmelt. Water supply forecasts of the April-July runoff produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC), are critical to basin water management. This project leverages advanced distributed models, datasets, and snow data assimilation techniques to improve operational water supply forecasts made by CBRFC in the Upper Colorado River Basin. The current work will specifically focus on improving water supply forecasts through the implementation of a snow data assimilation process coupled with the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM). Three types of observations will be used in the snow data assimilation system: satellite Snow Covered Area (MODSCAG), satellite Dust Radiative Forcing in Snow (MODDRFS), and SNOTEL Snow Water Equivalent (SWE). SNOTEL SWE provides the main source of high elevation snowpack information during the snow season, however, these point measurement sites are carefully selected to provide consistent indices of snowpack, and may not be representative of the surrounding watershed. We address this problem by transforming the SWE observations to standardized deviates and interpolating the standardized deviates using a spatial regression model. The interpolation process will also take advantage of the MODIS Snow Covered Area and Grainsize (MODSCAG) product to inform the model on the spatial distribution of snow. The interpolated standardized deviates are back-transformed and used in an Ensemble Kalman Filter (EnKF) to update the model simulated SWE. The MODIS Dust Radiative Forcing in Snow (MODDRFS) product will be used more directly through temporary adjustments to model snowmelt parameters, which should improve melt estimates in areas affected by dust on snow. 
In order to assess the value of different data sources, reforecasts will be produced for a historical period and performance measures will be computed to assess forecast skill. The existing CBRFC Ensemble Streamflow Prediction (ESP) reforecasts will provide a baseline for comparison to determine the added-value of the data assimilation process.
Seismic data interpolation and denoising by learning a tensor tight frame
NASA Astrophysics Data System (ADS)
Liu, Lina; Plonka, Gerlind; Ma, Jianwei
2017-10-01
Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
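The sparsity-promoting first step of the scheme, hard thresholding of frame coefficients, can be sketched generically (the dictionary-update step and the Kronecker structure are omitted here).

```python
import numpy as np

def hard_threshold(coeffs, tau):
    """Keep transform coefficients whose magnitude exceeds tau, zero the
    rest -- the coefficient-update step of a sparsity-promoting scheme."""
    return np.where(np.abs(coeffs) > tau, coeffs, 0.0)
```

With an orthonormal dictionary D, one denoising pass is `D.T @ hard_threshold(D @ x, tau)`: weak coefficients, dominated by noise, are discarded while strong signal-carrying coefficients are kept unchanged.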
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
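The hyperbolic-tangent stretching mentioned above can be sketched in one common form; TDIGG's exact stretching formula may differ from this assumption.

```python
import numpy as np

def tanh_stretch(n, delta):
    """Non-uniform grid distribution of n points on [0, 1] using a
    hyperbolic-tangent stretching function: points cluster near x = 0
    as the stretching parameter delta grows."""
    xi = np.linspace(0.0, 1.0, n)               # uniform computational coordinate
    return 1.0 + np.tanh(delta * (xi - 1.0)) / np.tanh(delta)
```

Such clustering is typically used to resolve boundary layers: grid spacing is fine near the wall (x = 0) and grows smoothly toward the far boundary.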
Azimuthal phase retardation microscope for visualizing actin filaments of biological cells
NASA Astrophysics Data System (ADS)
Shin, In Hee; Shin, Sang-Mo
2011-09-01
We developed a new theory-based azimuthal phase retardation microscope to visualize distributions of actin filaments in biological cells without labeling them with exogenous dyes, fluorescent markers, or stains. The microscope visualizes actin filament distributions by measuring the intensity variation of each pixel of a charge-coupled device camera while rotating a single linear polarizer. The azimuthal phase retardation δ between two fixed principal axes was obtained by calculating the rotation angles of the polarizer at the intensity minima from the acquired intensity data. We acquired azimuthal phase retardation distributions of human breast cancer cells (MDA-MB-231) with our microscope and compared them with a fluorescence image of actin filaments from a commercial fluorescence microscope. We also observed the movement of human umbilical cord blood-derived mesenchymal stem cells by measuring azimuthal phase retardation distributions.
How shear increments affect the flow production branching ratio in CSDX
NASA Astrophysics Data System (ADS)
Li, J. C.; Diamond, P. H.
2018-06-01
The coupling of turbulence-driven azimuthal and axial flows in a linear device absent magnetic shear (the Controlled Shear Decorrelation Experiment) is investigated. In particular, we examine the apportionment of Reynolds power between azimuthal and axial flows, and how the azimuthal flow shear affects axial flow generation and saturation by drift wave turbulence. We study the response of the energy branching ratio, i.e., the ratio of axial to azimuthal Reynolds power, P_z^R/P_y^R, to incremental changes of azimuthal and axial flow shears. We show that increasing azimuthal flow shear decreases the energy branching ratio. When axial flow shear increases, this ratio first increases but then decreases to zero. The axial flow shear saturates below the threshold for parallel shear flow instability. The effects of azimuthal flow shear on the generation and saturation of intrinsic axial flows are analyzed. Azimuthal flow shear slows down the modulational growth of the seed axial flow shear, and thus reduces intrinsic axial flow production. Azimuthal flow shear reduces both the residual Reynolds stress (of the axial flow, i.e., Π_xz^Res) and the turbulent viscosity (χ_z^DW) by the same factor |⟨v_y⟩′|⁻² Δx⁻² L_n⁻² ρ_s² c_s², where Δx is the distance to the reference point where ⟨v_y⟩ = 0 in the plasma frame. Therefore, the stationary-state axial flow shear is not affected by azimuthal flow shear to leading order, since ⟨v_z⟩′ ~ Π_xz^Res/χ_z^DW.
NASA Astrophysics Data System (ADS)
Thiombane, Matar; De Vivo, Benedetto; Albanese, Stefano; Martín-Fernández, Josep-Antoni; Lima, Annamaria; Doherty, Angela
2017-04-01
The Sarno River Basin (south-west Italy), nestled between the Somma-Vesuvius volcanic complex and the limestone formations of the Campania-Apennine Chain, is one of the most polluted river basins in Europe due to a high rate of industrialization and intensive agriculture. Water from the Sarno River, which is heavily contaminated by the discharge of human and industrial waste, is partially used for irrigation on the surrounding agricultural fields. We apply compositional data analysis to 319 samples collected during two field campaigns along the river course, and throughout the basin, to determine the level and potential origin (anthropogenic and/or geogenic) of potentially toxic elements (PTEs). The concentrations of 53 elements were determined by ICP-MS and subsequently log-transformed. Using a clr-biplot and principal factor analysis, the variability and the correlations between a subset of extracted variables (26 elements) were identified. Using both normalized raw data and clr-transformed coordinates, factor association interpolated maps were generated to better visualize the distribution and potential sources of the PTEs in the Sarno Basin. The underlying geological substrata appear to be associated with raised levels of Na, K, P, Rb, Ba, V, Co, B, Zr, and Li, due to the presence of pyroclastic rocks from Mt. Somma-Vesuvius. Similarly, elevated Pb, Zn, Cd, and Hg concentrations are most likely related to both geological and anthropogenic sources: the underlying volcanic rocks and contamination from fossil fuel combustion associated with urban centers. Interpolated factor score maps and the clr-biplot indicate a clear correlation between Ni and Cr in samples taken along the Sarno River, and between Ca and Mg near the Solofra district. After considering nearby anthropogenic sources, the Ni and Cr are attributed to the Solofra tannery industry, while Ca and Mg correlate with the underlying limestone-rich soils of the area. 
This study shows the applicability of compositional data analysis transformations, which preserve the relationships and dependencies between elements that can be lost when univariate and classical multivariate analyses are applied to raw data. Keywords: Sarno basin, PTEs, compositional data analysis, centred log-ratio transformation (clr), biplot, factor analysis, ArcGIS
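The centred log-ratio (clr) transformation named in the keywords can be sketched as follows; this is a minimal illustration on a toy composition, not the study's full geochemical workflow:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition: log of each part
    divided by the geometric mean of all parts."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

comp = np.array([80.0, 15.0, 5.0])   # e.g. three element concentrations
z = clr(comp)
```

A useful property: clr coordinates sum to zero, and rescaling the whole composition (e.g. changing units) leaves them unchanged, which is why clr-transformed data suit biplots and factor analysis.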
Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong
2004-06-01
An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method caused by the oscillation generated in the transmutation process, two techniques--wavelet transform smoothing and the cubic spline interpolation for reducing data points--were adopted, and a new criterion was also developed. By using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both the simulated and experimental overlapping chromatograms is successfully obtained.
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions
NASA Technical Reports Server (NTRS)
Mcguire, Stephen C.
1987-01-01
The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
To reduce the estimation error of the sensor frequency response function (FRF) obtained with the commonly used window-based spectral estimation method, error models for the interpolation and transient errors are derived in the form of non-parametric models. Window effects on these errors are then analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further suppressed by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Accordingly, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N⁻²) of the Hanning window method to O(N⁻⁴) while increasing the uncertainty only slightly (about 0.4 dB). One direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and short data requirements; the calculation on actual balance FRF data is consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
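The stated DFT-bin structure of the new window can be illustrated with a cosine-sum window whose spectrum is supported only on bins 0, ±1, and ±3; the coefficients below are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

N = 64
n = np.arange(N)
# Illustrative coefficients only -- the paper's dual-cosine coefficients
# are not given in the abstract.  Chosen so the front-end value w[0] = 0.
c0, c1, c3 = 0.5, 0.4, 0.1
w = c0 - c1 * np.cos(2 * np.pi * n / N) - c3 * np.cos(6 * np.pi * n / N)

W = np.fft.fft(w)
support = {k for k in range(N) if abs(W[k]) > 1e-9}
# non-zero DFT bins fall only at 0, +/-1, and +/-3 (mod N)
```

Any window of the form c0 + c1·cos(2πn/N) + c3·cos(6πn/N) has exactly this five-bin spectral support; the paper's contribution is the particular coefficient choice that trades transient against interpolation error.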
Fan beam image reconstruction with generalized Fourier slice theorem.
Zhao, Shuangren; Yang, Kang; Yang, Kevin
2014-01-01
For parallel beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference; this paper reintroduces the GFST method for fan beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is then applied to the image in the Fourier plane, leading to a reconstructed image in the spatial domain. The reconstructed image is compared between the GFST method and the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets: the GFST method interpolates the projection data, whereas the FBP method interpolates the filtered projection data; (2) the filtering is done in different places: the GFST method filters in the Fourier domain, whereas the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, whereas filtering in the Fourier domain yields a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that in the short scan situation an exact solution can be obtained with the GFST method but not with the FBP method. The calculation of both the GFST and the FBP methods are at O(N
NASA Astrophysics Data System (ADS)
Han, Zhaohui; Friesen, Scott; Hacker, Fred; Zygmanski, Piotr
2018-01-01
Direct use of the total scatter factor (S_tot) for independent monitor unit (MU) calculations can be a good alternative to the traditional separate treatment of head/collimator scatter (S_c) and phantom scatter (S_p), especially for stereotactic small fields under the simultaneous collimation of secondary jaws and tertiary multileaf collimators (MLC). We have carried out measurements of S_tot in water for field sizes down to 0.5 × 0.5 cm² on a Varian TrueBeam STx medical linear accelerator (linac) equipped with high-definition MLCs. Both the jaw field size (c) and the MLC field size (s) significantly impact the linac output factors, especially when c ≫ s and s is small (e.g. s < 5 cm). The combined influence of MLC and jaws gives rise to a two-argument dependence of the total scatter factor, S_tot(c, s), which is difficult to decouple functionally. The (c, s) dependence can be conceived as a set of s-dependent functions ('branches') defined on the domain [s_min, s_max = c] for a given jaw size c. We have also developed a heuristic model of S_tot to assist the clinical implementation of the measured S_tot data for small field dosimetry. The model has two components: (i) an empirical fit formula for the s-dependent branches and (ii) an interpolation scheme between the branches. The interpolation scheme preserves the characteristic shape of the measured branches and effectively transforms the measured trapezoidal domain in the (c, s) plane to a rectangular domain, facilitating two-dimensional interpolation to determine S_tot for arbitrary (c, s) combinations. Both the empirical fit and the interpolation showed good agreement with experimental validation data.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
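The kernel-based estimator described above (weighted means of products of centred observations, with Gaussian weights on the mismatch between the pair's time separation and the estimation lag) can be sketched as:

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Kernel-based Pearson correlation for two irregularly sampled series.
    Each pair of centred, standardized observations contributes a product
    weighted by a Gaussian in (time separation - lag).  The bandwidth h
    is an assumption; the method ties it to the mean sampling interval."""
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]             # all pairwise time separations
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)   # Gaussian weights
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)

t = np.sort(np.random.default_rng(0).uniform(0, 100, 200))
x = np.sin(0.3 * t)
r = gaussian_kernel_corr(t, x, t, x)   # a smooth series vs. itself
```

For a smooth signal compared with itself the estimate is close to (but, because nearby-in-time pairs also contribute, slightly below) 1; no interpolation onto a regular grid is required.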
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Meng-Zheng; School of Physics and Electronic Information, Huaibei Normal University, Huaibei 235000; Ye, Liu, E-mail: yeliu@ahu.edu.cn
An efficient scheme is proposed to implement phase-covariant quantum cloning using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and the occupation probability distribution in the Fock basis enable us to follow the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of numerical simulation, we find that the present cloning machine is not an isotropic model, because its output fidelity depends on the polar and azimuthal angles of the initial input state on the Bloch sphere. The fidelity for the actual output clone of the present scheme is slightly smaller than the theoretical one, and the simulation results are consistent with the theoretical ones. This further corroborates that our circuit-QED-based scheme can efficiently implement the PCC transformation.
Deep Wavelet Scattering for Quantum Energy Regression
NASA Astrophysics Data System (ADS)
Hirn, Matthew
Physical functionals are usually computed as solutions of variational problems or from solutions of partial differential equations, which may require huge computations for complex systems. Quantum chemistry calculation of ground state molecular energies is such an example. Indeed, if x is a quantum molecular state, then the ground state energy E_0(x) is the minimum eigenvalue solution of the time-independent Schrödinger equation, which is computationally intensive for large systems. Machine learning algorithms do not simulate the physical system but estimate solutions by interpolating values provided by a training set of known examples {(x_i, E_0(x_i))}_{i ≤ n}. However, precise interpolations may require a number of examples that is exponential in the system dimension, and are thus intractable. This curse of dimensionality may be circumvented by computing interpolations in smaller approximation spaces, which take advantage of physical invariants. Linear regression of E_0 over a dictionary Φ = {φ_k}_k computes an approximation Ẽ_0 as Ẽ_0(x) = Σ_k w_k φ_k(x), where the weights {w_k}_k are selected to minimize the error between E_0 and Ẽ_0 on the training set. The key to such a regression approach then lies in the design of the dictionary Φ. It must be intricate enough to capture the essential variability of E_0(x) over the molecular states x of interest, while simple enough that evaluating Φ(x) is significantly less intensive than a direct quantum mechanical computation (or approximation) of E_0(x). In this talk we present a novel dictionary Φ for the regression of quantum mechanical energies based on the scattering transform of an intermediate, approximate electron density representation ρ_x of the state x. The scattering transform has the architecture of a deep convolutional network, composed of an alternating sequence of linear filters and nonlinear maps. 
Whereas in many deep learning tasks the linear filters are learned from the training data, here the physical properties of E_0 (invariance to isometric transformations of the state x, stability to deformations of x) are leveraged to design a collection of linear filters ρ_x * ψ_λ for an appropriate wavelet ψ. These linear filters are composed with the nonlinear modulus operator, and the process is iterated so that stable, invariant features are extracted at each layer: φ_k(x) = ‖ ||ρ_x * ψ_{λ1}| * ψ_{λ2}| * ⋯ * ψ_{λm} ‖, k = (λ1, …, λm), m = 1, 2, …. The scattering transform thus encodes not only interactions at multiple scales (in the first layer, m = 1), but also features arising from a cascade of interactions across scales (in subsequent layers, m ≥ 2). Numerical experiments give state-of-the-art accuracy over databases of organic molecules, while theoretical results guarantee performance for the component of the ground state energy resulting from Coulombic interactions. Supported by the ERC InvariantClass 320959 Grant.
Poster - Thur Eve - 77: Coordinate transformation from DICOM to DOSXYZnrc.
Zhan, L; Jiang, R; Osei, E K
2012-07-01
DICOM format is the de facto standard for communications between therapeutic and diagnostic modalities. A plan generated by a treatment planning system (TPS) is often exported to DICOM format. BEAMnrc/DOSXYZnrc is a widely used Monte Carlo (MC) package for beam and dose simulations in radiotherapy. It has its own definition of beam orientation, which is not in compliance with the one defined in the DICOM standard. Dose simulations using TPS-generated plans require transformation of beam orientations to the DOSXYZnrc coordinate system (c.s.) after extracting the necessary parameters from DICOM RP files, and the transformation is non-trivial. There have been two studies of the coordinate transformations, and the transformation equation sets derived have been helpful to BEAMnrc/DOSXYZnrc users. However, both equation sets are mathematically complex and not easy to program. In this study, we derive a new set of transformation equations, which are more compact, more understandable, and easier to implement computationally. The derivation of the polar angle θ and the azimuthal angle φ is similar to the existing studies, applying a series of rotations to a vector in the DICOM patient c.s. The derivation of the beam rotation φ_col for DOSXYZnrc, however, is different: it is obtained by directly combining the actual collimator rotation with the projection of the couch rotation onto the collimator rotation plane. Verification of the transformation has been performed using clinical plans created with Eclipse. The comparison between Eclipse and MC results shows exact geometrical agreement for field placements, together with good agreement in dose distributions. © 2012 American Association of Physicists in Medicine.
Beam coordinate transformations from DICOM to DOSXYZnrc
NASA Astrophysics Data System (ADS)
Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.
2012-12-01
Digital imaging and communications in medicine (DICOM) format is the de facto standard for communications between therapeutic and diagnostic modalities. A plan generated by a treatment planning system (TPS) is often exported in DICOM format. BEAMnrc/DOSXYZnrc is a widely used Monte Carlo (MC) package for modelling the Linac head and simulating dose delivery in radiotherapy. It has its own definition of beam orientation, which is not in compliance with the one defined in the DICOM standard. MC dose calculations using information from TPS generated plans require transformation of beam orientations to the DOSXYZnrc coordinate system (c.s.) and the transformation is non-trivial. There have been two studies on the coordinate transformations. The transformation equation sets derived have been helpful to BEAMnrc/DOSXYZnrc users. However, the transformation equation sets are complex mathematically and not easy to program. In this study, we derive a new set of transformation equations, which are more compact, easily understandable, and easier for computational implementation. The derivation of the polar angle θ and the azimuthal angle φ used by DOSXYZnrc is similar to the existing studies by applying a series of rotations to a vector in DICOM patient c.s. The derivation of the beam rotation ϕcol for DOSXYZnrc, however, is different. It is obtained by a direct combination of the actual collimator rotation with the projection of the couch rotation to the collimator rotating plane. Verification of the transformation has been performed using clinical plans. The comparisons between TPS and MC results show very good geometrical agreement for field placements, together with good agreement in dose distributions.
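The rotation-composition idea behind deriving θ and φ can be illustrated as below. The axis and sign conventions, and the restriction to gantry and couch rotations for a head-first supine patient, are simplifying assumptions for illustration; this is not the paper's actual equation set:

```python
import numpy as np

def beam_angles(gantry_deg, couch_deg):
    """Rotate the beam direction vector in an assumed DICOM-style patient
    c.s. and read off DOSXYZnrc-style angles.  Conventions assumed here:
    at gantry 0, couch 0 the beam travels anterior->posterior (+y);
    couch rotation is about the patient's anterior-posterior (y) axis."""
    g, c = np.radians(gantry_deg), np.radians(couch_deg)
    d = np.array([-np.sin(g), np.cos(g), 0.0])       # gantry rotation
    rot_y = np.array([[np.cos(c), 0.0, np.sin(c)],   # couch rotation about y
                      [0.0, 1.0, 0.0],
                      [-np.sin(c), 0.0, np.cos(c)]])
    d = rot_y @ d
    theta = np.degrees(np.arccos(d[2]))              # polar angle from +z
    phi = np.degrees(np.arctan2(d[1], d[0]))         # azimuth in the x-y plane
    return theta, phi
```

Under these conventions a vertical beam (gantry 0, couch 0) gives θ = 90°, φ = 90°; the collimator angle φ_col would be composed separately, as the abstract describes.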
Visualizing 3-D microscopic specimens
NASA Astrophysics Data System (ADS)
Forsgren, Per-Ola; Majlof, Lars L.
1992-06-01
The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared with other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, and maximum intensity, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three-dimensional imaging devices give recordings in which one dimension has different resolution and sampling than the other two, which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance, and we present results of some ways to simplify the calculations.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
High order Nyström method for elastodynamic scattering
NASA Astrophysics Data System (ADS)
Chen, Kun; Gurrala, Praveen; Song, Jiming; Roberts, Ron
2016-02-01
Elastic waves in solids find important applications in ultrasonic non-destructive evaluation. The scattering of elastic waves has been treated with many approaches, such as the finite element method, the boundary element method, and the Kirchhoff approximation. In this work, we propose a novel, accurate, and efficient high-order Nyström method to solve the boundary integral equations for elastodynamic scattering problems. The approach employs a high-order geometry description for each element and high-order interpolation for the fields inside it. Compared with the boundary element method, this approach places the interpolation nodes at Gaussian quadrature points, which renders matrix elements for far-field interactions free from integration and greatly simplifies the treatment of singularities and near singularities. The approach employs a novel, efficient near-singularity treatment that enables the solver to handle extreme geometries such as very thin penny-shaped cracks. Numerical results are presented to validate the approach. By using the frequency domain response and performing the inverse Fourier transform, we also report the time domain response of flaw scattering.
Exploring a new SU(4) symmetry of meson interpolators
NASA Astrophysics Data System (ADS)
Glozman, L. Ya.; Pak, M.
2015-07-01
In recent lattice calculations it has been discovered that, upon truncation of the quasi-zero modes of the Dirac operator, mesons obey a symmetry larger than the SU(2)_L × SU(2)_R × U(1)_A symmetry of the QCD Lagrangian. This symmetry has been suggested to be SU(4) ⊃ SU(2)_L × SU(2)_R × U(1)_A, which mixes not only the u- and d-quarks of a given chirality, but also the left- and right-handed components. Here it is demonstrated that bilinear q̄q interpolating fields of a given spin J ≥ 1 transform into each other according to irreducible representations of SU(4) or, in general, SU(2N_F). This fact, together with the coincidence of the correlation functions, establishes SU(4) as a symmetry of the J ≥ 1 mesons upon quasi-zero mode reduction. It is shown that this symmetry is a symmetry of the confining instantaneous charge-charge interaction in QCD. Different subgroups of SU(4) as well as the SU(4) algebra are explored.
NASA Astrophysics Data System (ADS)
Abd El-Wahed, Ahmed G.; Anan, Tarek I.
2016-12-01
A detailed structural and sedimentological interpretation was performed for the South Mansoura-1 well. The Formation Micro Imager (FMI) log was recorded and interpreted over the interval 9100-8009 ft, which belongs to the Sidi Salem and Qawasim Formations. Based on the azimuth trend of manually picked dips (bed boundaries), the interval can be divided into 4 structural dip zones: Zone 1 (9100-8800 ft), variable azimuth direction with the major trends mainly to the SW and NE; Zone 2 (8800-8570 ft), bedding dip azimuth mainly to the NW; Zone 3 (8570-8250 ft), bedding dip azimuth mainly to the NE; and Zone 4 (8250-8009 ft), bedding dip azimuth mainly to the NW. Lamination identified over the interval shows a dominant dip azimuth trend toward the north-north-west. The interbedded shale units are highly laminated and show little evidence of bioturbation. The sand exhibits abundant cross bedding with dominant dip azimuth trends toward the NNE and NE and more locally to the E. Sixteen truncations identified over the interval show a variable azimuth trend, with the major trend mainly to the north-north-west.
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions, and this coupling causes defocusing in both dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, based on the GCS algorithm, which successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is proven that great improvements in focus depth and imaging accuracy are obtained with the GCS-BAS algorithm. PMID:28555057
An architecture for consolidating multidimensional time-series data onto a common coordinate grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shippert, Tim; Gaustad, Krista
Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
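The "series of one-dimensional transformations" idea can be illustrated with a minimal numpy sketch: regrid one axis at a time with 1-D linear interpolation, chaining the per-axis passes. The function names and the choice of `np.interp` are illustrative assumptions, not the ARM framework's actual implementation.

```python
import numpy as np

def regrid_1d(values, src, dst, axis):
    """Interpolate `values` from coordinates `src` to `dst` along one axis."""
    values = np.moveaxis(values, axis, -1)
    out = np.apply_along_axis(lambda row: np.interp(dst, src, row), -1, values)
    return np.moveaxis(out, -1, axis)

def regrid(values, src_coords, dst_coords):
    """Merge data onto a common grid via a chain of 1-D transformations."""
    for axis, (src, dst) in enumerate(zip(src_coords, dst_coords)):
        values = regrid_1d(values, src, dst, axis)
    return values
```

Because each pass is purely one-dimensional, the same operator can be applied uniformly to datastreams whose dimensions differ, which is the consistency property the paper emphasizes.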
NASA Technical Reports Server (NTRS)
Ambrosia, Vincent G.; Myers, Jeffrey S.; Ekstrand, Robert E.; Fitzgerald, Michael T.
1991-01-01
A simple method for enhancing the spatial and spectral resolution of disparate data sets is presented. Two data sets, digitized aerial photography at a nominal spatial resolution of 3.7 meters and TMS digital data at 24.6 meters, were coregistered through a bilinear interpolation to solve the problem of blocky pixel groups resulting from rectification expansion. The two data sets were then subjected to intensity-saturation-hue (ISH) transformations in order to 'blend' the high-spatial-resolution (3.7 m) digitized RC-10 photography with the high spectral (12-band) and lower spatial (24.6 m) resolution TMS digital data. The resultant merged products make it possible to perform large-scale mapping, ease photointerpretation, and can be derived for any of the 12 available TMS spectral bands.
An architecture for consolidating multidimensional time-series data onto a common coordinate grid
Shippert, Tim; Gaustad, Krista
2016-12-16
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
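The trellis machinery behind the proposed sequence estimation can be sketched generically: each missing pixel is a trellis stage, each candidate interpolation function is a state, and the Viterbi algorithm finds the minimum-cost state sequence. The cost matrix and the fixed switching penalty below are illustrative placeholders for the paper's parameter-free probabilistic model.

```python
import numpy as np

def viterbi(cost, switch_penalty):
    """cost[t, s]: cost of using interpolation function s at missing pixel t.
    Returns the min-cost sequence of states, penalizing state switches."""
    T, S = cost.shape
    dp = cost[0].astype(float).copy()          # best cost ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers for traceback
    for t in range(1, T):
        trans = dp[:, None] + switch_penalty * (1 - np.eye(S))
        back[t] = np.argmin(trans, axis=0)
        dp = trans[back[t], np.arange(S)] + cost[t]
    path = [int(np.argmin(dp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a small switching penalty the estimator tracks whichever directional interpolator fits locally; a large penalty forces a single interpolator for the whole sequence, mimicking a hard global decision.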
NASA Astrophysics Data System (ADS)
Pompe, L.; Clausen, B. L.; Morton, D. M.
2014-12-01
The Cretaceous northern Peninsular Ranges batholith (PRB) exemplifies emplacement in a combined oceanic arc / continental margin arc setting. Two approaches that can aid in understanding its statistical and spatial geochemical variation are principal component analysis (PCA) and GIS interpolation mapping. The data analysis primarily used 287 samples from the large granitoid geochemical data set systematically collected by Baird and Welday. Of these, 80 points fell in the western Santa Ana block, 108 in the transitional Perris block, and 99 in the eastern San Jacinto block. In the statistical analysis, multivariate outliers were identified using Mahalanobis distance and excluded. A centered log ratio transformation was used to facilitate working with geochemical concentration values that range over many orders of magnitude. The data were then analyzed using PCA in IBM SPSS 21, reducing 40 geochemical variables to 4 components which are approximately related to the compatible, HFS, HRE, and LIL elements. The 4 components were interpreted as follows: (1) compatible [and negatively correlated incompatible] elements indicate extent of differentiation as typified by SiO2, (2) HFS elements indicate crustal contamination as typified by Sri and Nb/Yb ratios, (3) HRE elements indicate source depth as typified by Sr/Y and Gd/Yb ratios, and (4) LIL elements indicate alkalinity as typified by the K2O/SiO2 ratio. Spatial interpolation maps of the 4 components were created with Esri ArcGIS for Desktop 10.2 by interpolating between the sample points using kriging and inverse distance weighting. Across-arc trends on the interpolation maps indicate a general increase from west to east for each of the 4 components, but with local exceptions as follows. The 15 km offset on the San Jacinto Fault may be affecting the contours. South of San Jacinto is a west-east band of low Nb/Yb, Gd/Yb, and Sr/Y ratios.
The highest Sr/Y ratios in the north central area that decrease further east may be due to the far eastern granitoids being transported above a shear zone. Along the western edge of the PRB, high SiO2 and K2O/SiO2 are interpreted to result from sampling shallow levels in the batholith (2-3 kb), as compared to deeper levels in the central (5-6 kb) and eastern (4.5 kb) areas.
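The preprocessing pipeline described above (centered log ratio transform followed by PCA) can be sketched in a few lines of numpy. This is a generic illustration under assumed array shapes, not a reproduction of the SPSS analysis.

```python
import numpy as np

def clr(X):
    """Centered log-ratio transform for strictly positive compositional data:
    subtract the per-sample mean of the logs so each row sums to zero."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def pca(X, k):
    """PCA via SVD of the column-centered data; returns scores and loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]
```

The clr step matters because raw concentrations spanning orders of magnitude would otherwise let a few high-variance elements dominate the components.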
Dinehart, R.L.; Burau, J.R.
2005-01-01
A strategy of repeated surveys by acoustic Doppler current profiler (ADCP) was applied in a tidal river to map velocity vectors and suspended-sediment indicators. The Sacramento River at the junction with the Delta Cross Channel at Walnut Grove, California, was surveyed over several tidal cycles in the Fall of 2000 and 2001 with a vessel-mounted ADCP. Velocity profiles were recorded along flow-defining survey paths, with surveys repeated every 27 min through a diurnal tidal cycle. Velocity vectors along each survey path were interpolated to a three-dimensional Cartesian grid that conformed to local bathymetry. A separate array of vectors was interpolated onto a grid from each survey. By displaying interpolated vector grids sequentially with computer animation, flow dynamics of the reach could be studied in three-dimensions as flow responded to the tidal cycle. Velocity streamtraces in the grid showed the upwelling of flow from the bottom of the Sacramento River channel into the Delta Cross Channel. The sequential display of vector grids showed that water in the canal briefly returned into the Sacramento River after peak flood tides, which had not been known previously. In addition to velocity vectors, ADCP data were processed to derive channel bathymetry and a spatial indicator for suspended-sediment concentration. Individual beam distances to bed, recorded by the ADCP, were transformed to yield bathymetry accurate enough to resolve small bedforms within the study reach. While recording velocity, ADCPs also record the intensity of acoustic backscatter from particles suspended in the flow. Sequential surveys of backscatter intensity were interpolated to grids and animated to indicate the spatial movement of suspended sediment through the study reach. 
Calculation of backscatter flux through cross-sectional grids provided a first step for computation of suspended-sediment discharge, the second step being a calibrated relation between backscatter intensity and sediment concentration. Spatial analyses of ADCP data showed that a strategy of repeated surveys and flow-field interpolation has the potential to simplify computation of flow and sediment discharge through complex waterways. The use of trade, product, industry, or firm names in this report is for descriptive purposes only and does not constitute endorsement of products by the US Government. © 2005 Elsevier B.V. All rights reserved.
Stayton, C Tristan
2009-05-01
Finite element (FE) models are popular tools that allow biologists to analyze the biomechanical behavior of complex anatomical structures. However, the expense and time required to create models from specimens has prevented comparative studies from involving large numbers of species. A new method is presented for transforming existing FE models using geometric morphometric methods. Homologous landmark coordinates are digitized on the FE model and on a target specimen into which the FE model is being transformed. These coordinates are used to create a thin-plate spline function and coefficients, which are then applied to every node in the FE model. This function smoothly interpolates the location of points between landmarks, transforming the geometry of the original model to match the target. This new FE model is then used as input in FE analyses. This procedure is demonstrated with turtle shells: a Glyptemys muhlenbergii model is transformed into Clemmys guttata and Actinemys marmorata models. Models are loaded and the resulting stresses are compared. The validity of the models is tested by crushing actual turtle shells in a materials testing machine and comparing those results to predictions from FE models. General guidelines, cautions, and possibilities for this procedure are also presented.
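The landmark-driven warp described here is the standard 2-D thin-plate spline: solve a bordered linear system for warping weights and an affine part, then evaluate the resulting smooth interpolant at every FE node. The sketch below is a minimal 2-D version under assumed function names; the paper's models are 3-D, but the construction is the same with the 3-D kernel.

```python
import numpy as np

def tps_kernel(r):
    """2-D thin-plate spline radial kernel U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        k = r**2 * np.log(r)
    return np.where(r > 0, k, 0.0)

def tps_fit(src, dst):
    """Solve for TPS coefficients mapping landmarks `src` onto `dst`."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K; A[:n, n:] = P; A[n:, :n] = P.T
    b = np.zeros((n + 3, 2)); b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    """Warp arbitrary points (e.g. all FE nodes) with fitted coefficients."""
    K = tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[:len(src)] + P @ params[len(src):]
```

A useful sanity check is that a purely affine landmark correspondence is reproduced exactly (zero bending energy), so rigid or scaled specimens transform without spurious warping.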
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and it preserves monotonicity in 1-d, it has some known problems.
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use of cubic interpolation versus linear (slight effect); and (3) nominal difference between interpolation degrees of 2, 4, and 8 times (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
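The "filter, then interpolate" recommendation can be made concrete with a 1-D sketch: a moving-average low-pass filter and a linear upsampler, to be composed in the recommended order. Both helper names and the simple kernel are illustrative assumptions, not the study's exact processing chain.

```python
import numpy as np

def smooth(x, k=3):
    """Simple moving-average low-pass filter (edge-padded, odd window k)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def upsample(x, factor):
    """Linear interpolation to `factor` times the original sampling density."""
    t = np.arange(len(x))
    tf = np.linspace(0, len(x) - 1, factor * (len(x) - 1) + 1)
    return np.interp(tf, t, x)

# Recommended order per the study's findings: upsample(smooth(x), 2),
# i.e. filter first, then interpolate.
```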
Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan
2013-10-01
To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for clinical routine applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated by using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementing the Chirp transform algorithm-DrFT on the graphics processing unit can efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
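The chirp transform algorithm referenced here is Bluestein's identity nk = (n² + k² − (k−n)²)/2, which re-expresses a DFT of any length as a chirp pre-multiplication, an FFT-based convolution with a chirp, and a chirp post-multiplication. A minimal CPU sketch (the paper's contribution is the GPU implementation, which this does not attempt to reproduce):

```python
import numpy as np

def chirp_dft(x):
    """Bluestein's chirp transform: exact DFT of arbitrary length N,
    computed via fast convolution with power-of-two FFTs."""
    N = len(x)
    n = np.arange(N)
    w = np.exp(-1j * np.pi * n**2 / N)            # chirp e^{-i pi n^2 / N}
    a = x * w                                     # pre-multiply by chirp
    L = 1 << int(np.ceil(np.log2(2 * N - 1)))     # FFT length >= 2N-1
    b = np.zeros(L, dtype=complex)                # conjugate chirp, wrapped
    cc = np.exp(1j * np.pi * n**2 / N)
    b[:N] = cc
    b[L - N + 1:] = cc[1:][::-1]                  # negative lags wrap to the end
    conv = np.fft.ifft(np.fft.fft(np.pad(a, (0, L - N))) * np.fft.fft(b))
    return w * conv[:N]                           # post-multiply by chirp
```

Because every step is an elementwise product or a power-of-two FFT, the method maps naturally onto GPU kernels, which is what motivates its use for radial/PROPELLER line reconstruction.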
Theoretical modelling of residual and transformational stresses in SMA composites
NASA Astrophysics Data System (ADS)
Berman, J. B.; White, S. R.
1996-12-01
SMA composites are a class of smart materials in which shape memory alloy (SMA) actuators are embedded in a polymer matrix composite. The difference in thermal expansion between the SMA and the host material leads to residual stresses during processing. Similarly, the SMA transformations from martensite to austenite, or the reverse, also generate stresses. These stresses acting in combination can lead to SMA/epoxy interfacial debonding or microcracking of the composite phase. In this study the residual and transformational stresses are investigated for a nitinol wire embedded in a graphite/epoxy composite. A three-phase micromechanical model is developed. The nitinol wire is assumed to behave as a thermoelastic material. Nitinol austenitic and martensitic transformations are modelled using linear piecewise interpolation of experimental data. The interphase is modelled as a thermoelastic polymer. A transversely isotropic thermoelastic composite is used for the outer phase. Stress-free conditions are assumed immediately before cool down from the cure temperature. The effect of nitinol, coating and composite properties on residual and transformational stresses are evaluated. Fiber architectures favoring the axial direction decrease the magnitude of all residual stresses. A decrease in stresses at the composite/coating interface is also predicted through the use of thick, compliant coatings. Reducing the recovery strain and moving the transformation to higher temperatures were found to be most effective in reducing residual stresses.
Azimuthal anisotropy of the Pacific region
NASA Astrophysics Data System (ADS)
Maggi, Alessia; Debayle, Eric; Priestley, Keith; Barruol, Guilhem
2006-10-01
Azimuthal anisotropy is the dependence of local seismic properties on the azimuth of propagation. We present the azimuthally anisotropic component of a 3D SV velocity model for the Pacific Ocean, derived from the waveform modeling of over 56,000 multi-mode Rayleigh waves followed by a simultaneous inversion for isotropic and azimuthally anisotropic vSV structure. The isotropic vSV model is discussed in a previous paper (A. Maggi, E. Debayle, K. Priestley, G. Barruol, Multi-mode surface waveform tomography of the Pacific Ocean: a close look at the lithospheric cooling signature, Geophys. J. Int. 166 (3) (2006). doi:10.1111/j.1365-246x.2006.03037.x). The azimuthal anisotropy we find is consistent with the lattice preferred orientation (LPO) model: the hypothesis of anisotropy generation in the Earth's mantle by preferential alignment of anisotropic crystals in response to the shear strains induced by mantle flow. At lithospheric depths we find good agreement between fast azimuthal anisotropy orientations and ridge spreading directions recorded by sea-floor magnetic anomalies. At asthenospheric depths we find a strong correlation between fast azimuthal anisotropy orientations and the directions of current plate motions. We observe perturbations in the pattern of seismic anisotropy close to Pacific hot-spots that are consistent with the predictions of numerical models of LPO generation in plume-disturbed plate motion-driven mantle flow. These observations suggest that perturbations in the patterns of azimuthal anisotropy may provide indirect evidence for plume-like upwelling in the mantle.
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
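The bilinear head interpolation proposed for the embedded-model perimeter can be sketched as follows: locate the enclosing cell of large-model centers and blend the four surrounding head values. Coordinate arrays and the function name are illustrative assumptions about how the grids are stored.

```python
import numpy as np

def bilinear_head(xc, yc, head, x, y):
    """Bilinearly interpolate head from large-model cell centers (xc, yc)
    to a point (x, y) on the small-model perimeter."""
    i = np.clip(np.searchsorted(xc, x) - 1, 0, len(xc) - 2)
    j = np.clip(np.searchsorted(yc, y) - 1, 0, len(yc) - 2)
    tx = (x - xc[i]) / (xc[i + 1] - xc[i])
    ty = (y - yc[j]) / (yc[j + 1] - yc[j])
    return ((1 - tx) * (1 - ty) * head[i, j] + tx * (1 - ty) * head[i + 1, j]
            + (1 - tx) * ty * head[i, j + 1] + tx * ty * head[i + 1, j + 1])
```

Because bilinear interpolation reproduces any planar head surface exactly, a uniform regional gradient in the large model passes through to the embedded boundary without distortion.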
Time reversal for localization of sources of infrasound signals in a windy stratified atmosphere.
Lonzaga, Joel B
2016-06-01
Time reversal is used for localizing sources of recorded infrasound signals propagating in a windy, stratified atmosphere. Due to the convective effect of the background flow, the back-azimuths of the recorded signals can be substantially different from the source back-azimuth, posing a significant difficulty in source localization. The back-propagated signals are characterized by negative group velocities from which the source back-azimuth and source-to-receiver (STR) distance can be estimated using the apparent back-azimuths and trace velocities of the signals. The method is applied to several distinct infrasound arrivals recorded by two arrays in the Netherlands. The infrasound signals were generated by the Buncefield oil depot explosion in the U.K. in December 2005. Analyses show that the method can be used to substantially enhance estimates of the source back-azimuth and the STR distance. In one of the arrays, for instance, the deviations between the measured back-azimuths of the signals and the known source back-azimuth are quite large (-1° to -7°), whereas the deviations between the predicted and known source back-azimuths are small with an absolute mean value of <1°. Furthermore, the predicted STR distance is off only by <5% of the known STR distance.
Study on an azimuthal line cusp ion source for the KSTAR neutral beam injector.
Jeong, Seung Ho; Chang, Doo-Hee; In, Sang Ryul; Lee, Kwang Won; Oh, Byung-Hoon; Yoon, Byung-Joo; Song, Woo Sob; Kim, Jinchoon; Kim, Tae Seong
2008-02-01
In this study it is found that the cusp magnetic field configuration of an anode bucket influences the primary electron behavior. An electron orbit code (ELEORBIT code) showed that an azimuthal line cusp (cusp lines run azimuthally with respect to the beam extraction direction) provides a longer primary electron confinement time than an axial line cusp configuration. Experimentally higher plasma densities were obtained under the same arc power when the azimuthal cusp chamber was used. The newly designed azimuthal cusp bucket has been investigated in an effort to increase the plasma density in its plasma generator per arc power.
Acoustic Efficiency of Azimuthal Modes in Jet Noise Using Chevron Nozzles
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Bridges, James
2006-01-01
The link between azimuthal modes in jet turbulence and in the acoustic sound field has been examined in cold, round jets. Chevron nozzles, however, impart an azimuthal structure on the jet with a shape dependent on the number, length and penetration angle of the chevrons. Two particular chevron nozzles, with 3 and 4 primary chevrons respectively, and a round baseline nozzle are compared at both cold and hot jet conditions to determine how chevrons impact the modal structure of the flow and how that change relates to the sound field. The results show that, although the chevrons have a large impact on the azimuthal shape of the mean axial velocity, the impact of chevrons on the azimuthal structure of the fluctuating axial velocity is small at the cold jet condition and smaller still at the hot jet condition. This is supported by results in the azimuthal structure of the sound field, which also show little difference between the two chevron nozzles and the baseline nozzle in the distribution of energy across the measured azimuthal modes.
Subsonic and Supersonic Jet Noise Calculations Using PSE and DNS
NASA Technical Reports Server (NTRS)
Balakumar, P.; Owis, Farouk
1999-01-01
Noise radiated from a supersonic jet is computed using the Parabolized Stability Equations (PSE) method. The evolution of the instability waves inside the jet is computed using the PSE method and the noise radiated to the far field from these waves is calculated by solving the wave equation using the Fourier transform method. We performed the computations for a cold supersonic jet of Mach number 2.1 which is excited by disturbances with Strouhal numbers St = 0.2 and 0.4 and the azimuthal wavenumber m = 1. Good agreement in the sound pressure level is observed between the computed and the measured (Troutt and McLaughlin 1980) results.
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varying registration parameters are cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), interpolation method (linear, windowed-sinc and B-spline) and sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with cost function: NC, interpolation: B-spline and sampling percentage: 100% can serve as the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
Gu, Bing; Xu, Danfeng; Pan, Yang; Cui, Yiping
2014-07-01
Based on the vectorial Rayleigh-Sommerfeld integrals, the analytical expressions for azimuthal-variant vector fields diffracted by an annular aperture are presented. This helps us to investigate the propagation behaviors and the focusing properties of apertured azimuthal-variant vector fields under nonparaxial and paraxial approximations. The diffraction by a circular aperture, a circular disk, or propagation in free space can be treated as special cases of this general result. Simulation results show that the transverse intensity, longitudinal intensity, and far-field divergence angle of nonparaxially apertured azimuthal-variant vector fields depend strongly on the azimuthal index, the outer truncation parameter and the inner truncation parameter of the annular aperture, as well as the ratio of the waist width to the wavelength. Moreover, the multiple-ring-structured intensity pattern of the focused azimuthal-variant vector field, which originates from the diffraction effect caused by an annular aperture, is experimentally demonstrated.
Research progress and hotspot analysis of spatial interpolation
NASA Astrophysics Data System (ADS)
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literature related to spatial interpolation published between 1982 and 2017 and included in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development and rapid development. Cross effects exist among 11 clustering groups, and the research mainly converges on spatial interpolation theory, on practical applications and case studies of spatial interpolation, and on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and research-system framework, is strongly interdisciplinary, and is widely applied in various fields.
NASA Astrophysics Data System (ADS)
Basak, Anup; Levitas, Valery I.
2018-04-01
A thermodynamically consistent, novel multiphase phase field approach for stress- and temperature-induced martensitic phase transformations at finite strains and with interfacial stresses has been developed. The model considers a single order parameter to describe the austenite↔martensite transformations, and another N order parameters describing N variants and constrained to a plane in an N-dimensional order parameter space. In the free energy model, coexistence of three or more phases at a single material point (multiphase junction) and deviation of each variant-variant transformation path from a straight line have been penalized. Some shortcomings of the existing models are resolved. Three different kinematic models (KMs) for the transformation deformation gradient tensors are assumed: (i) In KM-I the transformation deformation gradient tensor is a linear function of the Bain tensors for the variants. (ii) In KM-II the natural logarithm of the transformation deformation gradient is taken as a linear combination of the natural logarithms of the Bain tensors multiplied by the interpolation functions. (iii) In KM-III it is derived using the twinning equation from the crystallographic theory. The instability criteria for all the phase transformations have been derived for all the kinematic models, and their comparative study is presented. A large strain finite element procedure has been developed and used for studying the evolution of some complex microstructures in nanoscale samples under various loading conditions. Also, the stresses within variant-variant boundaries, the sample size effect, the effect of penalizing the triple junctions, and twinned microstructures have been studied. The present approach can be extended for studying grain growth, solidification, para↔ferroelectric transformations, and diffusive phase transformations.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation method based on Gaussian radial basis function (GRBF) has high precision, the long calculation time still limits its application in field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on computing unified device architecture (CUDA) is proposed in this paper. According to single instruction multiple threads (SIMT) executive model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, natural suture algorithm is utilized in overlapping regions while adopting data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. Keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the operative efficiency of image GRBF interpolation based on CUDA platform was obviously improved compared with CPU calculation. The present method is of a considerable reference value in the application field of image interpolation.
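The core GRBF interpolation that the CUDA work accelerates can be written as a dense kernel solve plus a kernel evaluation. The sketch below is a plain numpy reference version under assumed names and a fixed kernel width; each distance/evaluation step is exactly the kind of embarrassingly parallel computation the paper maps onto GPU threads.

```python
import numpy as np

def grbf_interpolate(centers, values, queries, sigma=1.0):
    """Gaussian radial basis function interpolation: fit weights so the
    interpolant passes through (centers, values), then evaluate at queries."""
    d2 = np.sum((centers[:, None] - centers[None, :])**2, axis=-1)
    A = np.exp(-d2 / (2 * sigma**2))          # symmetric positive-definite
    w = np.linalg.solve(A, values)            # interpolation weights
    dq = np.sum((queries[:, None] - centers[None, :])**2, axis=-1)
    return np.exp(-dq / (2 * sigma**2)) @ w
```

The distance matrices grow quadratically with the number of sample points, which is why the paper resorts to block decomposition (tiles/sub-volumes) and coalesced memory access on the GPU.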
Cai, Yangjian; Lin, Qiang; Eyyuboğlu, Halil T; Baykal, Yahya
2008-05-26
Analytical formulas are derived for the average irradiance and the degree of polarization of a radially or azimuthally polarized doughnut beam (PDB) propagating in a turbulent atmosphere by adopting a beam coherence-polarization matrix. It is found that the radial or azimuthal polarization structure of a radially or azimuthally PDB will be destroyed (i.e., a radially or azimuthally PDB is depolarized and becomes a partially polarized beam) and the doughnut beam spot becomes a circularly Gaussian beam spot during propagation in a turbulent atmosphere. The propagation properties are closely related to the parameters of the beam and the structure constant of the atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified, using the nearest neighbor, bilinear interpolation and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
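Of the three resampling schemes compared in this record, bilinear interpolation is the simplest to state: it weights the four surrounding grid samples by their fractional distances. A minimal sketch, assuming a unit-spaced grid and an invented helper name:

```python
def bilinear(grid, x, y):
    # grid[i][j] sampled at integer (i, j); valid for 0 <= x <= rows-1, 0 <= y <= cols-1
    i, j = int(x), int(y)
    i = min(i, len(grid) - 2)       # clamp so the 2x2 neighborhood stays in bounds
    j = min(j, len(grid[0]) - 2)
    dx, dy = x - i, y - j
    return (grid[i][j] * (1 - dx) * (1 - dy)
            + grid[i + 1][j] * dx * (1 - dy)
            + grid[i][j + 1] * (1 - dx) * dy
            + grid[i + 1][j + 1] * dx * dy)
```

Nearest-neighbor resampling keeps original pixel values (good for classification), while bilinear and bicubic smooth the image, which is one reason registration can affect compression and classification differently.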
Color quality management in advanced flat panel display engines
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.
2003-01-01
During recent years, color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match: the RGB image stored on an Internet server of a retailer did not show the desired colors on a consumer display or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted at low-cost consumer flat panels. Using a new and genuine RGB color space transformation, which combines a gamma-correction look-up table, tetrahedrization, and linear interpolation, we satisfy market demands.
Holographic video at 40 frames per second for 4-million object points.
Tsang, Peter; Cheung, W-K; Poon, T-C; Zhou, C
2011-08-01
We propose a fast method for generating digital Fresnel holograms based on an interpolated wavefront-recording plane (IWRP) approach. Our method can be divided into two stages. First, a small, virtual IWRP is derived in a computation-free manner. Second, the IWRP is expanded into a Fresnel hologram with a pair of fast Fourier transform processes, which are realized with the graphics processing unit (GPU). We demonstrate state-of-the-art experimental results, capable of generating a 2048 × 2048 Fresnel hologram of around 4 × 10^6 object points at a rate of over 40 frames per second.
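The hologram-expansion stage rests on Fourier-domain free-space propagation. The sketch below illustrates that idea in 1-D with the angular-spectrum method, using a naive O(N²) DFT as a stand-in for the GPU FFT; the function names, sampling pitch, and wavelength are illustrative assumptions, not the authors' implementation:

```python
import cmath, math

def dft(x, inverse=False):
    # naive O(N^2) discrete Fourier transform (stand-in for an FFT/GPU kernel)
    n = len(x)
    sign = 1j if inverse else -1j
    out = [sum(x[t] * cmath.exp(sign * 2 * math.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def angular_spectrum_propagate(u0, wavelength, dz, dx):
    # propagate a sampled 1-D field u0 by a distance dz (angular-spectrum method)
    n = len(u0)
    spec = dft(u0)
    out = []
    for k in range(n):
        fx = (k if k < n // 2 else k - n) / (n * dx)   # spatial frequency of bin k
        kz = 2 * math.pi / wavelength * cmath.sqrt(1.0 - (wavelength * fx) ** 2)
        out.append(spec[k] * cmath.exp(1j * kz * dz))  # phase-only transfer function
    return dft(out, inverse=True)
```

For propagating (non-evanescent) components the transfer function is phase-only, so the propagated field conserves energy, which is a handy sanity check for any FFT-based implementation.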
Wet-season spatial variability of N2O emissions from a tea field in subtropical central China
NASA Astrophysics Data System (ADS)
Fu, X.; Liu, X.; Li, Y.; Shen, J.; Wang, Y.; Zou, G.; Li, H.; Song, L.; Wu, J.
2015-01-01
Tea fields emit large amounts of nitrous oxide (N2O) to the atmosphere. Obtaining accurate estimations of N2O emissions from tea-planted soils is challenging due to strong spatial variability. We examined the spatial variability of N2O emissions from a red-soil tea field in Hunan province, China, on 22 April 2012 (in a wet season) using 147 static mini chambers arranged on an approximately regular grid in a 4.0 ha tea field. The N2O fluxes for a 30 min snapshot (10:00-10:30 a.m.) ranged from -1.73 to 1659.11 g N ha-1 d-1 and were positively skewed with an average flux of 102.24 g N ha-1 d-1. The N2O flux data were transformed to a normal distribution by using a logit function. The geostatistical analyses of our data indicated that the logit-transformed N2O fluxes (FLUX30t) exhibited strong spatial autocorrelation, which was characterized by an exponential semivariogram model with an effective range of 25.2 m. As observed in the wet season, the logit-transformed soil ammonium-N (NH4Nt), soil nitrate-N (NO3Nt), soil organic carbon (SOCt) and total soil nitrogen (TSNt) were all found to be significantly correlated with FLUX30t (r = 0.57-0.71, p < 0.001). Three spatial interpolation methods (ordinary kriging, regression kriging and cokriging) were applied to estimate the spatial distribution of N2O emissions over the study area. Cokriging with NH4Nt and NO3Nt as covariables (r = 0.74 and RMSE = 1.18) outperformed ordinary kriging (r = 0.18 and RMSE = 1.74), regression kriging with the sample position as a predictor (r = 0.49 and RMSE = 1.55) and cokriging with SOCt as a covariable (r = 0.58 and RMSE = 1.44). The predictions of the three kriging interpolation methods for the total N2O emissions of the 4.0 ha tea field ranged from 148.2 to 208.1 g N d-1, based on the 30 min snapshots obtained during the wet season.
Our findings suggested that to accurately estimate the total N2O emissions over a region, the environmental variables (e.g., soil properties) and the current land use pattern (e.g., tea row transects in the present study) must be included in spatial interpolation. Additionally, compared with other kriging approaches, the cokriging prediction approach showed great advantages in being easily deployed, and more importantly providing accurate regional estimation of N2O emissions from tea-planted soils.
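Ordinary kriging, the baseline method in this record, is an exact interpolator: its weights solve a constrained linear system built from a covariance model. A minimal sketch follows; the exponential covariance, the range value, and the point layout are illustrative stand-ins, not the study's fitted parameters:

```python
import math

def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def exp_cov(h, sill=1.0, rng=25.2):
    # covariance paired with an exponential semivariogram: C(h) = sill * exp(-h / range)
    return sill * math.exp(-h / rng)

def ordinary_kriging(points, values, x0, cov=exp_cov):
    # solve the ordinary kriging system [[C, 1], [1, 0]] [w, mu] = [c0, 1]
    n = len(points)
    A = [[cov(math.dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(math.dist(p, x0)) for p in points] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, values))
```

Cokriging extends the same system with cross-covariances between the target variable and covariables such as NH4Nt and NO3Nt, which is where the accuracy gain reported above comes from.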
Wet-season spatial variability in N2O emissions from a tea field in subtropical central China
NASA Astrophysics Data System (ADS)
Fu, X.; Liu, X.; Li, Y.; Shen, J.; Wang, Y.; Zou, G.; Li, H.; Song, L.; Wu, J.
2015-06-01
Tea fields emit large amounts of nitrous oxide (N2O) to the atmosphere. Obtaining accurate estimations of N2O emissions from tea-planted soils is challenging due to strong spatial variability. We examined the spatial variability in N2O emissions from a red-soil tea field in Hunan Province, China, on 22 April 2012 (in a wet season) using 147 static mini chambers arranged on an approximately regular grid in a 4.0 ha tea field. The N2O fluxes for a 30 min snapshot (10:00-10:30 a.m.) ranged from -1.73 to 1659.11 g N ha-1 d-1 and were positively skewed with an average flux of 102.24 g N ha-1 d-1. The N2O flux data were transformed to a normal distribution by using a logit function. The geostatistical analyses of our data indicated that the logit-transformed N2O fluxes (FLUX30t) exhibited strong spatial autocorrelation, which was characterized by an exponential semivariogram model with an effective range of 25.2 m. As observed in the wet season, the logit-transformed soil ammonium-N (NH4Nt), soil nitrate-N (NO3Nt), soil organic carbon (SOCt) and total soil nitrogen (TSNt) were all found to be significantly correlated with FLUX30t (r = 0.57-0.71, p < 0.001). Three spatial interpolation methods (ordinary kriging, regression kriging and cokriging) were applied to estimate the spatial distribution of N2O emissions over the study area. Cokriging with NH4Nt and NO3Nt as covariables (r = 0.74 and RMSE = 1.18) outperformed ordinary kriging (r = 0.18 and RMSE = 1.74), regression kriging with the sample position as a predictor (r = 0.49 and RMSE = 1.55) and cokriging with SOCt as a covariable (r = 0.58 and RMSE = 1.44). The predictions of the three kriging interpolation methods for the total N2O emissions of the 4.0 ha tea field ranged from 148.2 to 208.1 g N d-1, based on the 30 min snapshots obtained during the wet season.
Our findings suggested that to accurately estimate the total N2O emissions over a region, the environmental variables (e.g., soil properties) and the current land use pattern (e.g., tea row transects in the present study) must be included in spatial interpolation. Additionally, compared with other kriging approaches, the cokriging prediction approach showed great advantages in being easily deployed and, more importantly, providing accurate regional estimation of N2O emissions from tea-planted soils.
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, or reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
NASA Astrophysics Data System (ADS)
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using the designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and fast Fourier transform, this scheme requires no multiplications and only half the additions, by combining the DFT with the Rife algorithm and Fourier-coefficient interpolation. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements of intermediate frequency narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and reduced calculation time.
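The coarse-plus-fine idea (locate the DFT peak bin, then refine with Rife's two-bin amplitude-ratio interpolation) can be sketched as follows for a noise-free complex tone. This is the generic textbook form of the Rife estimator, not the paper's recursive microprocessor implementation:

```python
import cmath, math

def dft_mag(x):
    # magnitudes of the naive O(N^2) DFT
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n)]

def rife_estimate(x, fs):
    # coarse: peak DFT bin k; fine: Rife ratio of the peak and its larger neighbor
    n = len(x)
    mag = dft_mag(x)
    k = max(range(n), key=lambda i: mag[i])
    left, right = mag[(k - 1) % n], mag[(k + 1) % n]
    if right >= left:
        delta = right / (mag[k] + right)    # true frequency lies above bin k
    else:
        delta = -left / (mag[k] + left)     # true frequency lies below bin k
    return ((k + delta) % n) * fs / n
```

For a single tone the fractional-bin correction recovers the frequency far below the raw bin spacing fs/N, which is the point of combining the Rife ratio with DFT coefficients.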
Angular Momentum Content of the ρ Meson in Lattice QCD
NASA Astrophysics Data System (ADS)
Glozman, Leonid Ya.; Lang, C. B.; Limmer, Markus
2009-09-01
The variational method allows one to study the mixing of interpolators with different chiral transformation properties in the nonperturbatively determined physical state. It is then possible to define and calculate in a gauge-invariant manner the chiral as well as the partial wave content of the quark-antiquark component of a meson in the infrared, where mass is generated. Using a unitary transformation from the chiral basis to the ^{2S+1}L_J basis one may extract a partial wave content of a meson. We present results for the ground state of the ρ meson using quenched simulations as well as simulations with nf=2 dynamical quarks, all for lattice spacings close to 0.15 fm. We point out that these results indicate a simple ^3S_1-wave composition of the ρ meson in the infrared, like in the SU(6) flavor-spin quark model.
The OpenCalphad thermodynamic software interface.
Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G
2016-12-01
Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times thermodynamic data are frequently pre-calculated into "lookup tables" to speed up calculations. This creates additional uncertainties as data must be interpolated or extrapolated and conditions may differ from those assumed for creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the Open-Calphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics and introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility.
The OpenCalphad thermodynamic software interface
Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G
2017-01-01
Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times thermodynamic data are frequently pre-calculated into “lookup tables” to speed up calculations. This creates additional uncertainties as data must be interpolated or extrapolated and conditions may differ from those assumed for creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the Open-Calphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics and introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility. PMID:28260838
Angular momentum content of the rho meson in lattice QCD.
Glozman, Leonid Ya; Lang, C B; Limmer, Markus
2009-09-18
The variational method allows one to study the mixing of interpolators with different chiral transformation properties in the nonperturbatively determined physical state. It is then possible to define and calculate in a gauge-invariant manner the chiral as well as the partial wave content of the quark-antiquark component of a meson in the infrared, where mass is generated. Using a unitary transformation from the chiral basis to the ^{2S+1}L_J basis one may extract a partial wave content of a meson. We present results for the ground state of the rho meson using quenched simulations as well as simulations with n_f = 2 dynamical quarks, all for lattice spacings close to 0.15 fm. We point out that these results indicate a simple ^3S_1-wave composition of the rho meson in the infrared, like in the SU(6) flavor-spin quark model.
Solar physics applications of computer graphics and image processing
NASA Technical Reports Server (NTRS)
Altschuler, M. D.
1985-01-01
Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. 
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
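Of the interpolators compared in this record, inverse distance weighting is the simplest to state: each sample contributes with weight 1/d^p, and the `power` parameter p (1, 2, or 3 above) controls how local the estimate is. A minimal sketch with an invented helper name:

```python
import math

def idw(points, values, x, power=2):
    # inverse distance weighting; returns the sample value exactly at a sample site
    num = den = 0.0
    for p, v in zip(points, values):
        d = math.dist(p, x)
        if d == 0.0:
            return v                     # coincident point: exact interpolation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

Because the estimate is a convex combination of the samples, IDW predictions always stay within the observed range, which is one reason it can underestimate the extreme concentrations around mines.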
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, B.C.J.; Sha, W.T.; Doria, M.L.
1980-11-01
The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.
An efficient and accurate molecular alignment and docking technique using ab initio quality scoring
Füsti-Molnár, László; Merz, Kenneth M.
2008-01-01
An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation
Song, Genxin; Zhang, Jing; Wang, Ke
2014-01-01
In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and a reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
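The core selection step, ranking candidate auxiliary variables by their correlation with the target attribute, can be sketched as below. The Pearson formula is standard; the variable names and data are invented for illustration:

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def select_bav(target, candidates):
    # pick the candidate most correlated (in absolute value) with the target attribute
    return max(candidates, key=lambda name: abs(pearson(target, candidates[name])))
```

In practice one would also check that the chosen covariable is densely sampled, since cokriging gains the most when the auxiliary variable is observed at many more locations than the target.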
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-07-14
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-01-01
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974
Monotonicity preserving splines using rational cubic Timmer interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
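Monotonicity preservation is easiest to see in the classical Fritsch-Carlson construction: choose Hermite tangents from the data secants, then limit them so each cubic piece stays monotone. This is a related standard scheme, shown here only to illustrate the constraint idea, not the rational cubic Timmer form of the paper:

```python
from bisect import bisect_right

def monotone_cubic(xs, ys):
    # Fritsch-Carlson style tangents: zero at local extrema, limited to 3x the secant
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    delta = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]
    d = [0.0] * n
    d[0], d[-1] = delta[0], delta[-1]
    for i in range(1, n - 1):
        d[i] = 0.0 if delta[i - 1] * delta[i] <= 0 else (delta[i - 1] + delta[i]) / 2
    for i in range(n - 1):            # limiter: keep each piece monotone
        if delta[i] == 0.0:
            d[i] = d[i + 1] = 0.0
        else:
            for j in (i, i + 1):
                if abs(d[j]) > 3 * abs(delta[i]):
                    d[j] = 3 * delta[i]
    return d

def eval_hermite(xs, ys, d, x):
    # evaluate the piecewise cubic Hermite interpolant at x
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    t = (x - xs[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * ys[i] + h10 * h * d[i] + h01 * ys[i + 1] + h11 * h * d[i + 1]
```

The rational Timmer form plays the role of the limiter here: its shape parameters are constrained so that the derivative of each rational piece cannot change sign between data points.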
NASA Astrophysics Data System (ADS)
Yuan, K.; Beghein, C.
2018-04-01
Seismic anisotropy is a powerful tool to constrain mantle deformation, but its existence in the deep upper mantle and topmost lower mantle is still uncertain. Recent results from higher mode Rayleigh waves have, however, revealed the presence of 1 per cent azimuthal anisotropy between 300 and 800 km depth, and changes in azimuthal anisotropy across the mantle transition zone boundaries. This has important consequences for our understanding of mantle convection patterns and deformation of deep mantle material. Here, we propose a Bayesian method to model depth variations in azimuthal anisotropy and to obtain quantitative uncertainties on the fast seismic direction and anisotropy amplitude from phase velocity dispersion maps. We applied this new method to existing global fundamental and higher mode Rayleigh wave phase velocity maps to assess the likelihood of azimuthal anisotropy in the deep upper mantle and to determine whether previously detected changes in anisotropy at the transition zone boundaries are robustly constrained by those data. Our results confirm that deep upper-mantle azimuthal anisotropy is favoured and well constrained by the higher mode data employed. The fast seismic directions are in agreement with our previously published model. The data favour a model characterized, on average, by changes in azimuthal anisotropy at the top and bottom of the transition zone. However, this change in fast axes is not a global feature as there are regions of the model where the azimuthal anisotropy direction is unlikely to change across depths in the deep upper mantle. We were, however, unable to detect any clear pattern or connection with surface tectonics. Future studies will be needed to further improve the lateral resolution of this type of model at transition zone depths.
Use of the azimuthal resistivity technique for determination of regional azimuth of transmissivity
Carlson, D.
2010-01-01
Many bedrock units contain joint sets that commonly act as preferred paths for the movement of water, electrical charge, and possible contaminants associated with production or transit of crude oil or refined products. To facilitate the development of remediation programs, a need exists to reliably determine regional-scale properties of these joint sets: azimuth of transmissivity ellipse, dominant set, and trend(s). The surface azimuthal electrical resistivity survey method used for local in situ studies can be a noninvasive, reliable, efficient, and relatively cost-effective method for regional studies. The azimuthal resistivity survey method combines the use of standard resistivity equipment with a Wenner array rotated about a fixed center point, at selected degree intervals, which yields an apparent resistivity ellipse from which joint-set orientation can be determined. Regional application of the azimuthal survey method was tested at 17 sites in an approximately 500 km2 (193 mi2) area around Milwaukee, Wisconsin, with less than 15 m (50 ft) of overburden above the dolomite. Results of 26 azimuthal surveys were compared and determined to be consistent with the results of two other methods: direct observation of joint-set orientation and transmissivity ellipses from multiple-well-aquifer tests. The average joint-set trend determined by azimuthal surveys is within 2.5° of the average joint-set trend determined by direct observation of major joint sets at 24 sites. The average maximum-transmissivity trend determined by azimuthal surveys is within 5.7° of the average maximum-transmissivity trend determined for 14 multiple-well-aquifer tests. Copyright © 2010 The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.
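The basic quantities of such a survey are simple to state: the Wenner-array apparent resistivity follows from the geometric factor 2πa, and the joint-set trend is read from the azimuth of the apparent-resistivity maximum. A minimal sketch with invented measurement values (the alignment of the resistivity maximum with joint strike, the so-called anisotropy paradox, is noted as a modeling assumption in the comment):

```python
import math

def wenner_apparent_resistivity(a, delta_v, current):
    # Wenner array: rho_a = K * dV / I with geometric factor K = 2*pi*a
    # (a = electrode spacing in metres, dV in volts, I in amperes)
    return 2 * math.pi * a * delta_v / current

def joint_set_trend(survey):
    # survey: list of (azimuth_deg, apparent_resistivity) from a rotated array;
    # assumes the apparent-resistivity maximum lies along the joint strike
    # (the "anisotropy paradox" for fractured media)
    return max(survey, key=lambda s: s[1])[0]
```

Rotating the array through, say, 10° steps over 0-170° traces out the apparent-resistivity ellipse from which the dominant joint-set azimuth is picked.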
Decomposition of fluctuating initial conditions and flow harmonics
NASA Astrophysics Data System (ADS)
Qian, Wei-Liang; Mota, Philipe; Andrade, Rone; Gardim, Fernando; Grassi, Frédérique; Hama, Yogiro; Kodama, Takeshi
2014-01-01
Collective flow observed in heavy-ion collisions is largely attributed to initial geometrical fluctuations, and it is the hydrodynamic evolution of the system that transforms those initial spatial irregularities into final state momentum anisotropies. Cumulant analysis provides a mathematical tool to decompose those initial fluctuations in terms of radial and azimuthal components. It is usually thought that a specified order of azimuthal cumulant, for the most part, linearly produces flow harmonics of the same order. In this work, by considering the most central collisions (0%-5%), we carry out a systematic study on the connection between cumulants and flow harmonics using a hydrodynamic code called NeXSPheRIO. We conduct three types of calculation, by explicitly decomposing the initial conditions into components corresponding to a given eccentricity and studying the out-coming flow through hydrodynamic evolution. It is found that for initial conditions deviating significantly from Gaussian, such as those from NeXuS, the linearity between eccentricities and flow harmonics partially breaks down. Combined with the effect of coupling between cumulants of different orders, it causes the production of extra flow harmonics of higher orders. We argue that these results can be seen as a natural consequence of the non-linear nature of hydrodynamics, and they can be understood intuitively in terms of the peripheral-tube model.
Orientation of liquid crystalline blue phases on unidirectionally orienting surfaces
NASA Astrophysics Data System (ADS)
Takahashi, Misaki; Ohkawa, Takuma; Yoshida, Hiroyuki; Fukuda, Jun-ichi; Kikuchi, Hirostugu; Ozaki, Masanori
2018-03-01
Liquid crystalline cholesteric blue phases (BPs) continue to attract interest due to their fast response times and quasi-polarization-independent phase modulation capabilities. Various approaches have recently been proposed to control the crystal orientation of BPs on substrates; however, their basic orientation properties on standard, unidirectionally orienting alignment layers have not been investigated in detail. Through analysis of the azimuthal orientation of Kossel diagrams, we study the 3D crystal orientation of a BP material—with a phase sequence of cholesteric, BP I, and BP II—on unidirectionally orienting surfaces prepared using two methods: rubbing and photoalignment. BP II grown from the isotropic phase is sensitive to surface conditions, with different crystal planes orienting on the two substrates. On the other hand, strong thermal hysteresis is observed in BPs grown through a different liquid crystal phase, implying that the preceding structure determines the orientation. More specifically, the BP II-I transition is accompanied by a rotation of the crystal such that the crystal directions defined by certain low-value Miller indices transform into different directions, and within the allowed rotations, different azimuthal configurations are obtained in the same cell depending on the thermal process. Our findings demonstrate that, for the alignment control of BPs, the thermal process is as important as the properties of the alignment layer.
Time domain SAR raw data simulation using CST and image focusing of 3D objects
NASA Astrophysics Data System (ADS)
Saeed, Adnan; Hellwich, Olaf
2017-10-01
This paper presents the use of a general-purpose electromagnetic simulator, CST, to simulate realistic synthetic aperture radar (SAR) raw data of three-dimensional objects. The raw data is later focused in MATLAB using the range-Doppler algorithm. Within CST Microwave Studio, a replica of the TerraSAR-X chirp signal is incident upon a modeled corner reflector (CR) whose design and material properties are identical to those of the real one. After defining the mesh and other appropriate settings, the reflected wave is measured at several distant points along a line parallel to the viewing direction. This is analogous to an array antenna and is synthesized to create a long aperture for SAR processing. The time domain solver in CST is based on the solution of the differential form of Maxwell's equations. The data exported from CST is arranged into a 2-D matrix with range and azimuth axes. The Hilbert transform is applied to convert the real signal to complex data with phase information. Range compression, range cell migration correction (RCMC), and azimuth compression are applied in the time domain to obtain the final SAR image. This simulation can provide valuable information to clarify which real-world objects produce signatures suitable for high-accuracy identification in SAR images.
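The first two processing steps named above (analytic-signal conversion via the Hilbert construction, then matched-filter range compression) can be sketched minimally as follows; the chirp parameters are illustrative numbers, not TerraSAR-X values:

```python
import numpy as np

def analytic_signal(x):
    """Complex (analytic) signal via the frequency-domain Hilbert construction:
    keep DC, double positive frequencies, zero negative ones."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def range_compress(raw, replica):
    """Matched-filter range compression as FFT-domain correlation with the
    transmitted chirp replica."""
    n = raw.shape[-1]
    return np.fft.ifft(np.fft.fft(raw, n, axis=-1) *
                       np.conj(np.fft.fft(replica, n)), axis=-1)

# Toy echo: a chirp delayed by 100 samples (invented numbers)
fs, T, B = 1e6, 2e-4, 2e5              # sample rate [Hz], pulse length [s], bandwidth [Hz]
t = np.arange(int(fs * T)) / fs
chirp = np.cos(np.pi * (B / T) * t ** 2)
echo = np.zeros(1024)
echo[100:100 + len(t)] = chirp
compressed = np.abs(range_compress(analytic_signal(echo), analytic_signal(chirp)))
peak = int(np.argmax(compressed))      # delay of the point target, in samples
```

The compressed response peaks at the target delay; RCMC and azimuth compression would follow in a full processor.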
LIP: The Livermore Interpolation Package, Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-07-06
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
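The simplest of the interpolation methods mentioned, piecewise bilinear on a rectangular mesh, can be sketched generically as follows (a textbook implementation in Python for illustration, not the LIP C API):

```python
import numpy as np

def bilinear(xg, yg, f, x, y):
    """Piecewise-bilinear interpolation of tabulated f on the rectangular
    mesh (xg, yg): locate the enclosing cell, then blend its four corners."""
    i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
    j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
            + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])

xg = np.linspace(0.0, 1.0, 5)
yg = np.linspace(0.0, 1.0, 4)
F = xg[:, None] + 2.0 * yg[None, :]    # bilinear is exact for a + b*x + c*y
val = bilinear(xg, yg, F, 0.37, 0.61)  # expect 0.37 + 2*0.61 = 1.59
```

Bilinear interpolation reproduces any plane exactly, which makes a tilted-plane table a convenient correctness check; the higher-order LIP methods (bicubic, biherm, bimond, birational) trade this simplicity for smoother derivatives.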
LIP: The Livermore Interpolation Package, Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-01-04
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.)
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
Zonal Flows and Long-lived Axisymmetric Pressure Bumps in Magnetorotational Turbulence
NASA Astrophysics Data System (ADS)
Johansen, A.; Youdin, A.; Klahr, H.
2009-06-01
We study the behavior of magnetorotational turbulence in shearing box simulations with a radial and azimuthal extent up to 10 scale heights. Maxwell and Reynolds stresses are found to increase by more than a factor of 2 when increasing the box size beyond two scale heights in the radial direction. Further increase of the box size has little or no effect on the statistical properties of the turbulence. An inverse cascade excites magnetic field structures at the largest scales of the box. The corresponding 10% variation in the Maxwell stress launches a zonal flow of alternating sub- and super-Keplerian velocity. This, in turn, generates a banded density structure in geostrophic balance between pressure and Coriolis forces. We present a simplified model for the appearance of zonal flows, in which stochastic forcing by the magnetic tension on short timescales creates zonal flow structures with lifetimes of several tens of orbits. We experiment with various improved shearing box algorithms to reduce the numerical diffusivity introduced by the supersonic shear flow. While a standard finite difference advection scheme shows signs of a suppression of turbulent activity near the edges of the box, this problem is eliminated by a new method where the Keplerian shear advection is advanced in time by interpolation in Fourier space.
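The Fourier-space interpolation mentioned at the end, used to advance the Keplerian shear advection, amounts to multiplying each mode by a phase factor; a minimal periodic 1-D sketch (schematic only, not the actual shearing-box code):

```python
import numpy as np

def fourier_shift(f, dx, L):
    """Shift a periodic 1-D profile by dx via a phase factor in Fourier
    space: spectrally accurate interpolation, free of the numerical
    diffusion of finite-difference advection schemes."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(f), d=L / len(f))
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-1j * k * dx)))

L, n = 2.0 * np.pi, 64
x = np.arange(n) * L / n
g = fourier_shift(np.sin(x), 0.5, L)    # equals sin(x - 0.5) to machine precision
```

Because the shift is exact for band-limited data, repeated application does not smear the profile, which is the point of doing shear advection by Fourier interpolation.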
Vision based obstacle detection and grouping for helicopter guidance
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chatterji, Gano
1993-01-01
Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, varying from a few hundred to several thousand points, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relations provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world. Thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
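The K-means grouping step can be sketched with plain Lloyd's iteration on synthetic sparse range returns (a generic sketch of the clustering idea only, not the paper's full depth/inertial/image-plane pipeline; the two object positions are invented):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain Lloyd's K-means: assign each point to the nearest center,
    then move each center to the mean of its assigned points."""
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers

# Two synthetic "objects": sparse range returns near 10 m and 50 m (invented data)
rng = np.random.default_rng(1)
near = rng.normal([10.0, 0.0], 0.5, size=(30, 2))
far = rng.normal([50.0, 5.0], 0.5, size=(30, 2))
labels, centers = kmeans(np.vstack([near, far]), 2)
```

With well-separated depth clusters the two groups are recovered cleanly; the paper then refines such groups with a minimum spanning tree in the image plane.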
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). The DTM has numerous applications in science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several methods for interpolation, whose results depend on the environmental conditions and the input data. The usual interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. They show that AI methods have a high potential for the interpolation of elevations, and that using neural network algorithms for interpolation, together with optimisation of the IDW method by GA, can yield highly precise elevation estimates.
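The baseline IDW interpolator referred to above weights known elevations by inverse powers of distance; a minimal sketch on synthetic data (generic IDW only — the GA/NN optimisation of its parameters is not shown):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each interpolated elevation is a
    weighted average of the known elevations, with weights 1/d^power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)       # eps guards against d == 0
    return (w @ z_known) / w.sum(axis=1)

# Known elevations sampled from the tilted plane z = 2x + 3y (synthetic)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
zq = idw(pts, z, np.array([[0.5, 0.5]]))   # symmetric point -> mean of corners
```

The `power` exponent is exactly the kind of knob a GA can tune against held-out checkpoints.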
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
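The two properties claimed for a good example-based interpolant (exact reproduction of every example, smoothness in between) are exhibited by Gaussian radial-basis-function interpolation; the sketch below shows that generic flavor on scalar stand-ins for viewpoints (it is not the paper's extended EBI mechanism):

```python
import numpy as np

def rbf_interpolate(x_train, f_train, x_query, sigma=1.0):
    """Interpolation from examples with Gaussian radial basis functions:
    solve for weights so every training pair is reproduced exactly, then
    evaluate the smooth weighted sum at the query points."""
    k2 = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma ** 2))
    w = np.linalg.solve(k2(x_train, x_train), f_train)
    return k2(x_query, x_train) @ w

x = np.array([0.0, 1.0, 2.0, 3.0])      # example "viewpoints" (scalar stand-ins)
f = np.array([0.0, 1.0, 0.0, -1.0])     # example outputs
f_at_train = rbf_interpolate(x, f, x)   # must match the examples exactly
```

In the NVS setting the inputs are viewpoints and the outputs are whole images, which is where the plain mechanism runs into the difficulties the paper addresses.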
LIFT a future atmospheric chemistry sensor
NASA Astrophysics Data System (ADS)
Pailharey, E.; Châteauneuf, F.; Aminou, D.
2017-11-01
Natural and anthropogenic trace constituents play an important role for the ozone budget and climate as well as in other environmental problems. In order to prevent the dramatic impact of any climate change, exchange processes between the stratosphere and troposphere as well as the distribution and deposition of tropospheric trace constituents are investigated. The Limb Infrared Fourier Transform spectrometer (LIFT) will globally provide calibrated spectra of the atmosphere as a function of the tangent altitude. The LIFT field of view will be 30 km × 30 km. The resolution is 30 km in azimuth, corresponding to the full field of view, and 2 km in elevation, obtained by using a matrix of 15×15 detectors. The instrument will cover the spectral domain 5.7-14.7 μm through two different bands, 13.0-9.5 μm and 9.5-5.7 μm respectively. With a spectral resolution of 0.1 cm-1, LIFT is a high-class Fourier Transform Spectrometer compliant with the challenging constraints of limb viewing and spaceborne implementation.
TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhbardeh, A; Parekh, VS; Jacobs, MA
2015-06-15
Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired using very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can lead to challenges in registering the different data sets. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolations that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI and PET/CT were used in the registration algorithm on breast and prostate data from 3T MRI and multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolations, and affine registration steps. First, orthogonal reslicing is performed to equalize FOV, matrix sizes and the number of slices using wavelet transformation. Second, angular resampling of the target data is performed to match the reference data. Finally, using optimized angles from resampling, 3D registration using a similarity transformation (scaling and translation) between the reference and resliced target volumes is performed. After registration, the mean-square error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with significant improvement (p<0.05) of overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99.
The average improvement of the MSE was 62% in breast (0.27 to 0.10) and 63% in prostate (0.13 to 0.04; p<0.05). The Dice similarity improved by 8% in breast (0.91 to 0.99) and by 89% in prostate (0.01 to 0.90; p<0.05). Conclusion: Our 3D wavelet hybrid registration approach registered diverse breast and prostate data from different radiological modalities (MR/PET/CT) with high accuracy.
Evaluation of burst-mode LDA spectra with implications
NASA Astrophysics Data System (ADS)
Velte, Clara; George, William
2009-11-01
Burst-mode LDA spectra, as described in [1], are compared to spectra obtained from corresponding HWA measurements using the FFT in a round jet and cylinder wake experiment. The phrase ``burst-mode LDA'' refers to an LDA which operates with at most one particle present in the measuring volume at a time. Due to the random sampling and velocity bias of the LDA signal, the Direct Fourier Transform with accompanying weighting by the measured residence times was applied to obtain a correct interpretation of the spectral estimate. Further, the self-noise was removed as described in [2]. In addition, resulting spectra from common interpolation and uniform resampling techniques are compared to the above mentioned estimates. The burst-mode LDA spectra are seen to agree well with the HWA spectra up to the emergence of the noise floor, caused mainly by the intermittency of the LDA signal. The interpolated and resampled counterparts yield unphysical spectra, which are buried in frequency dependent noise and step noise, except at very high LDA data rates where they perform well up to a limited frequency. [1] Buchhave, P., PhD Thesis, SUNY/Buffalo, 1979. [2] Velte, C.M., PhD Thesis, DTU/Copenhagen, 2009.
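The weighted Direct Fourier Transform estimate works on the raw, randomly sampled arrival times directly, with no interpolation or resampling; a bare-bones sketch on a synthetic signal (self-noise removal and the exact normalization convention of the burst-mode estimator are omitted, and the weights here are trivial):

```python
import numpy as np

def direct_spectrum(t, u, w, freqs):
    """Weighted direct Fourier transform spectral estimate for randomly
    sampled velocities u at times t, with weights w (e.g. residence
    times), evaluated at arbitrary frequencies."""
    F = np.array([np.sum(w * u * np.exp(-2j * np.pi * f * t)) for f in freqs])
    return np.abs(F) ** 2 * (t.max() - t.min()) / w.sum() ** 2

# Randomly sampled 50 Hz sine (synthetic): the line should appear at 50 Hz
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 2.0, 4000))
u = np.sin(2.0 * np.pi * 50.0 * t)
w = np.ones_like(t)                     # unit weights in this toy case
freqs = np.arange(10.0, 100.0, 5.0)
S = direct_spectrum(t, u - u.mean(), w, freqs)
peak_freq = float(freqs[np.argmax(S)])
```

Because no uniform grid is imposed, there is no interpolation filtering or step noise; the price is a noise floor set by the random sampling, as the abstract notes.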
What convention is used for the illumination and view angles?
Atmospheric Science Data Center
2014-12-08
... Azimuth angles are measured clockwise from the direction of travel to local north. For both the Sun and cameras, azimuth describes the ... to the equator, because of its morning equator crossing time. Additionally, the difference in view and solar azimuth angle will be near ...
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
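The two reported angles can be made concrete with one common convention: with the antenna direction expressed in an East-North-Up frame (after the IMU-derived camera-to-Earth rotation), azimuth is measured clockwise from north and downtilt below the horizontal. A minimal sketch (the convention is assumed for illustration; the paper's exact frame definitions may differ):

```python
import numpy as np

def azimuth_downtilt(v_enu):
    """Azimuth (degrees clockwise from north) and downtilt (degrees below
    horizontal) of a direction vector in an East-North-Up frame."""
    e, n, u = v_enu
    az = np.degrees(np.arctan2(e, n)) % 360.0
    tilt = np.degrees(np.arctan2(-u, np.hypot(e, n)))
    return az, tilt

# Build a direction with known azimuth/downtilt and recover the angles
az_true, tilt_true = 120.0, 10.0
a, tl = np.radians(az_true), np.radians(tilt_true)
v = np.array([np.sin(a) * np.cos(tl), np.cos(a) * np.cos(tl), -np.sin(tl)])
az, tilt = azimuth_downtilt(v)
```

Round-tripping a known direction like this is a quick sanity check of the angle conventions before feeding real IMU-rotated boresight vectors through the same formulas.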
3-d interpolation in object perception: evidence from an objective performance paradigm.
Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana
2005-06-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).
Receive Mode Analysis and Design of Microstrip Reflectarrays
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam
2011-01-01
Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. 
In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.
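Both design modes end in the same table-lookup step: given a computed phase-versus-size curve, pick the element size realizing each required reflection phase. A minimal inverse-interpolation sketch (the S-shaped phase curve below is entirely made up; a real table would come from a full-wave solver):

```python
import numpy as np

def size_for_phase(sizes, phases, required):
    """Choose the patch size realizing a required reflection phase by
    inverse linear interpolation of a monotonic phase-vs-size table."""
    order = np.argsort(phases)           # np.interp needs ascending abscissae
    return float(np.interp(required, phases[order], sizes[order]))

sizes = np.linspace(2.0, 5.0, 31)                               # patch size [mm]
phases = 150.0 - 330.0 / (1.0 + np.exp(-(sizes - 3.5) / 0.4))   # phase [deg], synthetic
s = size_for_phase(sizes, phases, 0.0)   # size giving 0 deg reflection phase
```

In the receive-mode design for a single broadside beam this one-parameter lookup is all that remains, whereas the transmit-mode design must tabulate and interpolate over the incidence angles and polarization as well.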
NASA Astrophysics Data System (ADS)
Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif
2018-02-01
Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT), which requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
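The core idea — evaluating the Fourier sum directly at the nonuniform k-samples instead of interpolating onto a uniform grid — can be sketched with a direct O(N·M) adjoint NDFT on a simulated single-reflector interferogram (a schematic of the generic NDFT approach, not the authors' scaled transform; the sweep nonlinearity is invented):

```python
import numpy as np

def ndft_adjoint(k, s, z):
    """Direct evaluation of the (adjoint) nonuniform DFT of interferogram
    samples s taken at nonuniform wavenumbers k, on a depth grid z."""
    return np.array([np.sum(s * np.exp(2j * np.pi * k * zi)) for zi in z])

# Single reflector at depth z0, sampled with a nonlinear sweep (invented numbers)
z0 = 0.8
u = np.linspace(0.0, 1.0, 512)
k = 40.0 * (u + 0.15 * u ** 2)          # nonuniform wavenumber samples
s = np.cos(2.0 * np.pi * k * z0)        # ideal interferogram fringe
z = np.linspace(0.0, 2.0, 401)
a_scan = np.abs(ndft_adjoint(k, s - s.mean(), z))
z_peak = float(z[np.argmax(a_scan)])    # reflector depth recovered from the peak
```

No resampling or k-space interpolation is needed; fast NUFFT algorithms replace the direct sum when N and M are large.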
C-point and V-point singularity lattice formation and index sign conversion methods
NASA Astrophysics Data System (ADS)
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincare-Hopf indices of integer values. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence, to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, the positive V-point can be converted to a negative V-point and vice versa.
The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an orderly fashion. It shows degeneracy as long as the SOPs of the three beams are drawn from polarization distributions that have Poincare-Hopf index of same magnitude. Various topological aspects of these lattices are presented with the help of Stokes field S12, which is constructed using generalized Stokes parameters of a fully polarized light. We envisage that such polarization lattice structure may lead to novel concept of structured polarization illumination methods in super resolution microscopy.
Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing
2017-11-22
Near-infrared (NIR) spectroscopy was applied for the determination of the total soluble solids content (SSC) of single Ruby Seedless grape berries using both benchtop Fourier transform (VECTOR 22/N) and portable grating scanning (SupNIR-1500) spectrometers in this study. The results showed that the best SSC prediction was obtained by the VECTOR 22/N in the range of 12,000 to 4000 cm⁻¹ (833-2500 nm) for Ruby Seedless, with a determination coefficient of prediction (Rp²) of 0.918 and a root mean square error of prediction (RMSEP) of 0.758%, based on least squares support vector machine (LS-SVM) modelling. Calibration transfer was conducted on the common spectral range of the two instruments (1000-1800 nm) based on the LS-SVM model. By applying the Kennard-Stone (KS) algorithm to divide the sample sets, selecting the optimal number of standardization samples, and applying Passing-Bablok regression to choose the optimal instrument as the master instrument, a modified calibration transfer method between the two spectrometers was developed. When 45 samples were selected for the standardization set, linear interpolation-piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer, with an Rp² of 0.857 and an RMSEP of 1.099% in the spectral region of 1000-1800 nm. It was also shown that re-calculating the standardization samples into the master model could improve the performance of calibration transfer. This work indicated that NIR could be used as a rapid and non-destructive method for SSC prediction, and provides a way to address the transfer difficulty between entirely different NIR spectrometers.
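The "linear interpolation" half of linear interpolation-PDS simply puts the two instruments' spectra on a common wavelength grid; a minimal sketch on a synthetic band (PDS itself, a windowed regression of master channels on slave channels, is not shown, and the grids and band shape are invented):

```python
import numpy as np

def to_master_grid(wl_slave, spectrum, wl_master):
    """Linearly interpolate a slave-instrument spectrum onto the master
    instrument's wavelength grid, the step preceding piecewise direct
    standardization."""
    return np.interp(wl_master, wl_slave, spectrum)

wl_slave = np.linspace(1000.0, 1800.0, 161)    # 5 nm sampling (illustrative)
wl_master = np.linspace(1000.0, 1800.0, 81)    # 10 nm sampling (illustrative)
spec = np.exp(-((wl_slave - 1450.0) / 60.0) ** 2)   # synthetic absorption band
spec_m = to_master_grid(wl_slave, spec, wl_master)
```

Once both instruments report the same channels, the PDS transfer matrix can be estimated from the standardization samples measured on both.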
Using Google SketchUp to simulate tree row azimuth effects on alley shading
USDA-ARS?s Scientific Manuscript database
Effect of row azimuth on alley crop illumination is difficult to determine empirically. Our objective was to determine if Google SketchUp (Trimble Inc., Sunnyvale, CA) could be used to simulate effect of azimuth orientation on illumination of loblolly pine (Pinus taeda L.) alleys. Simulations were...
Nonparaxial and paraxial focusing of azimuthal-variant vector beams.
Gu, Bing; Cui, Yiping
2012-07-30
Based on the vectorial Rayleigh-Sommerfeld formulas under the weak nonparaxial approximation, we investigate the propagation behavior of a lowest-order Laguerre-Gaussian beam with azimuthal-variant states of polarization. We present the analytical expressions for the radial, azimuthal, and longitudinal components of the electric field with an arbitrary integer topological charge m focused by a nonaperturing thin lens. We illustrate the three-dimensional optical intensities, energy flux distributions, beam waists, and focal shifts of the focused azimuthal-variant vector beams under the nonparaxial and paraxial approximations.
Synthetic aperture radar images with composite azimuth resolution
Bielek, Timothy P; Bickel, Douglas L
2015-03-31
A synthetic aperture radar (SAR) image is produced by using all phase histories of a set of phase histories to produce a first pixel array having a first azimuth resolution, and using less than all phase histories of the set to produce a second pixel array having a second azimuth resolution that is coarser than the first azimuth resolution. The first and second pixel arrays are combined to produce a third pixel array defining a desired SAR image that shows distinct shadows of moving objects while preserving detail in stationary background clutter.
NASA Astrophysics Data System (ADS)
Tu, Zhoudunming
2018-01-01
Studies of charge-dependent azimuthal correlations for same- and opposite-sign particle pairs are presented in PbPb collisions at 5 TeV and pPb collisions at 5 and 8.16 TeV, with the CMS experiment at the LHC. The azimuthal correlations are evaluated with respect to the second- and also higher-order event planes, as a function of particle pseudorapidity, transverse momentum, and event multiplicity. By employing an event-shape engineering technique, the dependence of the correlations on the azimuthal anisotropy flow is investigated. The results presented provide new insights into the origin of the observed charge-dependent azimuthal correlations, and have important implications for the search for the chiral magnetic effect in heavy ion collisions.
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
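The basis-choice idea can be made concrete in a few lines: with the monomial basis the interpolation conditions become a Vandermonde linear system, while in the Lagrange basis the same system matrix collapses to the identity, so the "coefficients" are the data values themselves. A small illustrative sketch (the nodes and data are made up):

```python
import numpy as np

# Interpolation as linear algebra: choosing a basis for the polynomial
# subspace turns "find p with p(x_i) = y_i" into a linear system.

x = np.array([0.0, 1.0, 2.0, 3.0])   # distinct nodes
y = np.array([1.0, 2.0, 0.0, 5.0])   # data values

# Monomial basis: the system matrix is the Vandermonde matrix.
V = np.vander(x, increasing=True)    # V[i, j] = x_i ** j
coeffs = np.linalg.solve(V, y)       # coefficients of p in 1, x, x^2, x^3

# In the Lagrange basis the same matrix is the identity, so no solve is
# needed -- different basis, same (unique) interpolating polynomial.

p = np.polynomial.Polynomial(coeffs)
assert np.allclose(p(x), y)          # p reproduces the data at the nodes
```

Because the nodes are distinct, the Vandermonde matrix is invertible, which is the linear-algebra restatement of existence and uniqueness of the interpolant.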
3D Imaging Millimeter Wave Circular Synthetic Aperture Radar
Zhang, Renyuan; Cao, Siyang
2017-01-01
In this paper, a new millimeter wave 3D imaging radar is proposed. The user just needs to move the radar along a circular track, and high-resolution 3D imaging can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions. It utilizes the inverse Radon transform to resolve 3D imaging. To improve the sensing result, the compressed sensing approach is further investigated. Simulation and experimental results further validate the design. Because only a single transceiver circuit is needed, the paper demonstrates a light, affordable and high-resolution 3D mmWave imaging radar. PMID:28629140
NASA Astrophysics Data System (ADS)
Artukh, A. G.; Tarantin, N. I.
Proposed is an in-flight measurement method of recoil nuclei masses with the help of a Penning trap located behind the COMBAS magnetic separator for nuclear reaction products. The method is based on the following operations: (i) Accepting the recoil nuclear reaction products by the magnetic separator and decreasing their kinetic energy by degraders. (ii) In-flight transportation of the retarded nuclei into the magnetic field of the Penning trap's solenoid and transforming their remaining longitudinal momentum into orbital rotation by the fringing magnetic field of the solenoid. (iii) Cooling the orbital rotation of the ions by the high-frequency azimuthal electric field of the Penning trap's electric hyperboloid.
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
NASA Astrophysics Data System (ADS)
Liu, Xiwu; Guo, Zhiqi; Han, Xu
2018-06-01
A set of parallel vertical fractures embedded in a vertically transverse isotropy (VTI) background leads to orthorhombic anisotropy and corresponding azimuthal seismic responses. We conducted seismic modeling of full waveform amplitude variation versus azimuth (AVAZ) responses of anisotropic shale by integrating a rock physics model and a reflectivity method. The results indicate that the azimuthal variation of P-wave velocity tends to be more complicated for an orthorhombic medium than for the horizontally transverse isotropy (HTI) case, especially at high polar angles. Correspondingly, for the HTI layer in the theoretical model, the short axis of the azimuthal PP amplitudes at the top interface is parallel to the fracture strike, while the long axis at the bottom reflection points along the fracture strike. In contrast, the orthorhombic layer in the theoretical model shows distinct AVAZ responses in terms of PP reflections. Nevertheless, the azimuthal signatures of the R- and T-components of the mode-converted PS reflections show similar AVAZ features for the HTI and orthorhombic layers, which may imply that the PS responses are dominated by fractures. For the application to real data, a seismic-well tie based on upscaled data and a reflectivity method illustrates good agreement between the reference layers and the corresponding reflected events. Finally, the full waveform seismic AVAZ responses of the Longmaxi shale formation are computed for the cases of HTI and orthorhombic anisotropy for comparison. For the two cases, the azimuthal features differ mainly in the amplitudes and only slightly in the phases of the reflected waveforms. Azimuthal variations in the PP reflections from the reference layers show distinct behaviors for the HTI and orthorhombic cases, while the mode-converted PS reflections in terms of the R- and T-components show little difference in azimuthal features.
This may suggest that the behavior of the PS waves is dominated by vertically aligned fractures. This work provides further insight into the azimuthal seismic response of orthorhombic shales. The proposed method may help to improve seismic-well ties, seismic interpretation, and inversion results using azimuthal anisotropy datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, D. L.; Qiu, X. M.; Geng, S. F.
The numerical simulation described in our paper [D. L. Tang et al., Phys. Plasmas 19, 073519 (2012)] shows a rotating dense plasma structure, which is the critical characteristic of the rotating spoke. The simulated rotating spoke has a frequency of 12.5 MHz with a rotational speed of ≈1.0 × 10⁶ m/s on the surface of the anode. Accompanied by the almost uniform azimuthal ion distribution, the non-axisymmetric electron distribution introduces two azimuthal electric fields with opposite directions. The azimuthal electric fields have the same rotational frequency and speed as the rotating spoke. The azimuthal electric fields excite the axial electron drift upstream and downstream due to the additional E_θ × B field, and then the axial shear flow is generated. The axial local charge separation induced by the axial shear electron flow may be compensated by the azimuthal electron transport, finally resulting in the azimuthal electric field rotation and electron transport with the rotating spoke.
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
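A rough sketch of the reconstruction problem these sensors pose: each pixel records only one of four polarizer orientations, and interpolation must fill in the other three at every pixel. The baseline below is plain bilinear interpolation via normalized convolution, not the paper's gradient-based method; the 2x2 superpixel layout (0/45 deg on the first row, 90/135 on the second) is an assumption.

```python
import numpy as np

def conv3(img, k):
    # 'Same'-size 2-D filtering with a symmetric 3x3 kernel, zero-padded.
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

def bilinear_dofp(mosaic):
    """Fill in the three missing polarizer channels at every pixel of a
    DoFP mosaic by normalized convolution with a bilinear kernel: each
    orientation's sparse samples are smeared by the kernel and divided by
    the equally smeared sampling mask."""
    h, w = mosaic.shape
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    channels = {}
    for (r0, c0), angle in {(0, 0): 0, (0, 1): 45,
                            (1, 0): 90, (1, 1): 135}.items():
        sparse = np.zeros((h, w))
        mask = np.zeros((h, w))
        sparse[r0::2, c0::2] = mosaic[r0::2, c0::2]
        mask[r0::2, c0::2] = 1.0
        channels[angle] = conv3(sparse, k) / conv3(mask, k)
    return channels

# A uniform scene must reconstruct to uniform channels.
channels = bilinear_dofp(np.ones((4, 4)))
for a in (0, 45, 90, 135):
    assert np.allclose(channels[a], 1.0)
```

From the four full-resolution channels, the linear Stokes parameters follow as S0 = I0 + I90, S1 = I0 − I90 and S2 = I45 − I135; the gradient-based method of the paper improves on this baseline by steering the interpolation along intensity edges.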
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm
ERIC Educational Resources Information Center
Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana
2005-01-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Borak, Jordan S.
2008-01-01
Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
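The hybrid strategy, temporal interpolation first and a spatial fallback where a pixel's whole series is missing, can be sketched as follows. This is a toy illustration of the idea, not the paper's algorithm; the 3x3 neighborhood, the (time, rows, cols) array layout, and the in-place second pass are all assumptions.

```python
import numpy as np

def fill_gaps(stack):
    """Hybrid gap filling for a (time, rows, cols) stack with NaN gaps:
    first per-pixel linear interpolation across missing time steps, then
    a 3x3 spatial neighborhood mean for anything the temporal pass could
    not resolve (e.g. a pixel missing at every date)."""
    t = np.arange(stack.shape[0])
    out = stack.copy()
    # Temporal pass: linear interpolation along each pixel's time series.
    for i in range(stack.shape[1]):
        for j in range(stack.shape[2]):
            s = out[:, i, j]
            good = ~np.isnan(s)
            if 0 < good.sum() < len(s):
                out[:, i, j] = np.interp(t, t[good], s[good])
    # Spatial pass: fill remaining gaps from valid neighbors (in place,
    # so earlier fills can feed later ones -- acceptable for a sketch).
    for k in range(out.shape[0]):
        img = out[k]
        for i, j in zip(*np.where(np.isnan(img))):
            nb = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if np.any(~np.isnan(nb)):
                img[i, j] = np.nanmean(nb)
    return out

# Toy stack: one date missing for one pixel, one pixel missing everywhere.
stack = np.tile(np.array([1.0, 2.0, 3.0])[:, None, None], (1, 2, 2))
stack[1, 0, 0] = np.nan      # filled temporally
stack[:, 1, 1] = np.nan      # filled spatially
out = fill_gaps(stack)
assert np.isclose(out[1, 0, 0], 2.0)
assert np.isclose(out[0, 1, 1], 1.0)
```

As in the study, which pass does the work depends on the data: temporally smooth cover types favor the first pass, spatially homogeneous ones the second.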
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
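Trilinear interpolation, the method the study recommends, weights the eight voxels surrounding a query point by the products of their axis-wise distances to it; voxel nearest neighbor simply rounds each coordinate instead. A minimal sketch:

```python
import numpy as np

def trilinear(vol, x, y, z):
    """Trilinear interpolation of a 3-D volume at a fractional point:
    each of the 8 surrounding voxels contributes with a weight equal to
    the product of its complementary axis-wise distances to the query."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    c = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                c += w * vol[x0 + i, y0 + j, z0 + k]
    return c

# A linear intensity field is reproduced exactly.
x, y, z = np.meshgrid(np.arange(3), np.arange(3), np.arange(3), indexing="ij")
vol = 2 * x + 3 * y + 5 * z
assert np.isclose(trilinear(vol, 1.25, 0.5, 0.75), 7.75)
```

Because the weights are affine in each coordinate, the scheme reproduces any linear intensity field exactly, which the assertion checks; in practice the loop is vectorized over all voxels of the output grid.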
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) method, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods, respectively. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. Compared to LI and BDI, CR increased by 4.4% and 3.1%, respectively. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal and empirical Bayesian Kriging, and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of the uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
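The deterministic baseline in comparisons like this one is plain inverse distance weighting, and the error statistics are typically obtained by leave-one-out cross-validation. A minimal sketch computing ME, MAE and RMSE (the power parameter p = 2 and the toy data are assumptions, not values from the study):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: each known point contributes with
    weight 1/d**power; an exact hit returns the known value directly."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):
        return z_known[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_errors(xy, z):
    """Leave-one-out cross-validation: predict each well from all the
    others and summarize the residuals as ME, MAE and RMSE."""
    resid = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        resid.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    r = np.array(resid)
    return r.mean(), np.abs(r).mean(), np.sqrt((r ** 2).mean())

# Toy well network.
rng = np.random.default_rng(1)
xy = rng.uniform(size=(12, 2))
z = rng.uniform(size=12)
me, mae, rmse = loo_errors(xy, z)
```

ME captures systematic bias while MAE and RMSE capture spread; RMSE ≥ MAE always holds, with the gap growing as a few large residuals dominate.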
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation in a statistically significant manner. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the areas at the top and bottom. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the shrinking of farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-09-03
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. 
Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
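The "extension approach" with a k-d tree can be sketched as follows: time becomes an extra coordinate scaled by a factor c before a standard IDW estimate over the k nearest space-time neighbors. The factor c, the neighbor count k, and the use of SciPy's cKDTree are assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def st_idw(pts, z, queries, c=1.0, k=8, power=2.0):
    """Spatiotemporal IDW via the 'extension approach': each record is a
    point (x, y, t), time is scaled by a factor c expressing how far one
    time step 'is' in spatial units, and plain IDW runs in the combined
    (x, y, c*t) domain. A k-d tree restricts each estimate to the k
    nearest space-time neighbors, which keeps large datasets tractable."""
    scale = np.array([1.0, 1.0, c])
    tree = cKDTree(pts * scale)
    d, idx = tree.query(queries * scale, k=k)
    d = np.atleast_2d(d)
    idx = np.atleast_2d(idx)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z[idx]).sum(axis=1) / w.sum(axis=1)

# Toy check: a constant field interpolates to the constant everywhere.
rng = np.random.default_rng(2)
pts = rng.uniform(size=(30, 3))       # hypothetical (x, y, t) records
z = np.ones(30)                        # hypothetical PM2.5 values
est = st_idw(pts, z, rng.uniform(size=(4, 3)), c=0.5)
assert np.allclose(est, 1.0)
```

Setting c = 1 encodes the paper's assumption that the spatial and temporal dimensions are equally important; cross-validation over c, k and the power would mirror the parameter study described above.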
Fine structures of azimuthal correlations of two gluons in the glasma
NASA Astrophysics Data System (ADS)
Zhang, Hengying; Zhang, Donghai; Zhao, Yeyin; Xu, Mingmei; Pan, Xue; Wu, Yuanfang
2018-02-01
We investigate the azimuthal correlations of the glasma in p-p collisions at √s_NN = 7 TeV by using the color glass condensate (CGC) formalism. As expected, the azimuthal correlations show two peaks at Δφ = 0 and π, which represent collimated production in the CGC. Beyond that, the azimuthal correlations show fine structures, i.e., bumps or shoulders between the two peaks, when at least one gluon has small x. The structures are demonstrated to be associated with the saturation momentum and likely appear at transverse momentum around 2Q_s^p = 1.8 GeV/c.
Ma, Guo -Lang; Wang, Xin -Nian
2012-01-01
In the framework of a multi-phase transport model, initial fluctuations in the transverse parton density lead to all orders of harmonic flows. Hadron-triggered azimuthal correlations include all contributions from harmonic flows, hot spots, and jet-medium excitations, which are isolated by using different initial conditions. We find that different physical components dominate different pseudorapidity ranges of dihadron correlations. Because γ-triggered azimuthal correlations can only be caused by jet-medium interactions, a comparative study of hadron- and γ-triggered azimuthal correlations can reveal more of the dynamics of jet-medium interactions.
NASA Astrophysics Data System (ADS)
Fang, Jiannong; Porté-Agel, Fernando
2016-09-01
Accurate modeling of complex terrain, especially steep terrain, in the simulation of wind fields remains a challenge. It is well known that the terrain-following coordinate transformation method (TFCT) generally used in atmospheric flow simulations is restricted to non-steep terrain with slope angles less than 45 degrees. Due to the advantage of keeping the basic computational grids and numerical schemes unchanged, the immersed boundary method (IBM) has been widely implemented in various numerical codes to handle arbitrary domain geometry including steep terrain. However, IBM could introduce considerable implementation errors in wall modeling through various interpolations because an immersed boundary is generally not co-located with a grid line. In this paper, we perform an intercomparison of TFCT and IBM in large-eddy simulation of a turbulent wind field over a three-dimensional (3D) hill for the purpose of evaluating the implementation errors in IBM. The slopes of the three-dimensional hill are not steep and, therefore, TFCT can be applied. Since TFCT is free from interpolation-induced implementation errors in wall modeling, its results can serve as a reference for the evaluation so that the influence of errors from wall models themselves can be excluded. For TFCT, a new algorithm for solving the pressure Poisson equation in the transformed coordinate system is proposed and first validated for a laminar flow over periodic two-dimensional hills by comparing with a benchmark solution. For the turbulent flow over the 3D hill, the wind-tunnel measurements used for validation contain both vertical and horizontal profiles of mean velocities and variances, thus allowing an in-depth comparison of the numerical models. In this case, TFCT is expected to be preferable to IBM. This is confirmed by the presented results of comparison. It is shown that the implementation errors in IBM lead to large discrepancies between the results obtained by TFCT and IBM near the surface. 
The effects of different schemes used to implement wall boundary conditions in IBM are studied. The source of errors and possible ways to improve the IBM implementation are discussed.
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James
2013-06-01
We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.
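As an illustrative sketch (my own, not the authors' algorithm), the closing claim can be demonstrated: the DFT of equispaced samples defines a trigonometric polynomial that interpolates the data exactly at the nodes and can be evaluated at arbitrary points between them.

```python
import numpy as np

def trig_interp(samples, t):
    """Evaluate the trigonometric polynomial interpolating equispaced
    samples on [0, 1) at arbitrary times t (array-valued)."""
    n = len(samples)
    c = np.fft.fft(samples) / n          # complex Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)     # signed harmonic numbers
    basis = np.exp(2j * np.pi * np.outer(t, k))
    return (basis @ c).real

nodes = np.arange(8) / 8
x = np.sin(2 * np.pi * nodes)                    # one period, 8 samples
mid = trig_interp(x, np.array([1.0 / 16]))[0]    # between the first two nodes
```

For a pure sinusoid the interpolant reproduces the underlying function everywhere, so `mid` recovers sin(π/8) between the samples.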
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
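A minimal numerical illustration of interpolation at the nth roots of unity (the plain Lagrange case only, without the derivative conditions of the Hermite-Birkhoff setting): the coefficients of the unique degree-(n-1) interpolating polynomial are a scaled forward DFT of the function values at the nodes.

```python
import numpy as np

n = 8
roots = np.exp(2j * np.pi * np.arange(n) / n)   # the nth roots of unity as nodes
f = np.exp                                      # a smooth test function

# Coefficients a_m = (1/n) * sum_j f(w^j) * w^(-j*m): a forward DFT scaled by 1/n.
coeffs = np.fft.fft(f(roots)) / n
p = lambda z: np.polyval(coeffs[::-1], z)       # degree-(n-1) interpolant

node_err = max(abs(p(w) - f(w)) for w in roots) # agreement at the nodes
```

The residual `node_err` is at machine-precision level, confirming unique interpolation at the nodes.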
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing-error results show that fourth-order polynomial interpolation provides the best fit to option prices, as it has the lowest error.
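A toy sketch of the LOOCV selection idea on synthetic data (the strike grid, price curve, noise level and candidate degrees are illustrative assumptions, not the study's data): each observation is held out in turn, the remaining points are fitted, and the held-out pricing error is accumulated.

```python
import numpy as np

rng = np.random.default_rng(0)
strikes = np.linspace(80, 120, 15)
# Synthetic smile-like price curve plus noise (illustrative only).
prices = 0.002 * (strikes - 100) ** 2 + 5 + rng.normal(0, 0.05, strikes.size)

def loocv_error(deg):
    """Leave-one-out RMS pricing error for a polynomial fit of given degree."""
    errs = []
    for i in range(strikes.size):
        mask = np.arange(strikes.size) != i
        coef = np.polyfit(strikes[mask], prices[mask], deg)
        errs.append((np.polyval(coef, strikes[i]) - prices[i]) ** 2)
    return np.sqrt(np.mean(errs))

errors = {deg: loocv_error(deg) for deg in (2, 4, 6)}
best = min(errors, key=errors.get)   # degree with the lowest LOOCV pricing error
```

The degree minimizing the LOOCV error is the one selected, mirroring the paper's selection criterion.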
NASA Astrophysics Data System (ADS)
Yang, Sung Mo; Hong, Sera; Kim, Sang Youl
2018-05-01
We introduce a simple method to determine the in-plane birefringence of transparent flexible films by using transmission spectroscopic ellipsometry. Pseudo-ellipsometric constants that represent the dependence on the sample azimuthal angle are introduced. The effect of in-plane birefringence and sample azimuthal angle on the pseudo-ellipsometric constants is calculated using the Jones matrix formalism, and the observed sample-azimuthal-angle dependence of the measured pseudo-ellipsometric data is well understood. The wavelength dependence of the in-plane birefringence is expressed in terms of the Sellmeier dispersion equation. The best fits of the pseudo-ellipsometric spectra to those measured at sample azimuthal angles in steps of 15° from 0 to 90° are searched. The dispersion coefficients of the Sellmeier equation and the azimuthal angle of the optic axis are determined for polycarbonate (PC), poly(ethylene naphthalate) (PEN), poly(ethylene terephthalate) (PET), polyimide (PI), and colorless polyimide (CPI) films.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
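One of the tested methods, inverse distance weighting, can be sketched in a few lines on a synthetic "depth" field (the field, sample count and domain are invented for illustration; real GIS implementations typically also restrict the neighborhood):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2, eps=1e-12):
    """Inverse-distance-weighted prediction at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer samples get larger weights
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(1)
depth = lambda x, y: 0.3 + 0.1 * np.sin(x / 5) + 0.05 * y   # smooth synthetic field
sample_xy = rng.uniform(0, 30, (40, 2))                     # sparse field samples
grid_xy = rng.uniform(0, 30, (200, 2))                      # points to predict
pred = idw(sample_xy, depth(sample_xy[:, 0], sample_xy[:, 1]), grid_xy)
mean_abs_err = np.mean(np.abs(pred - depth(grid_xy[:, 0], grid_xy[:, 1])))
```

Comparing `pred` against the known field at the query points mimics the paper's accuracy assessment against the fully surveyed site.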
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross-correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. Gaussian fitting resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the methods did not produce statistically different relative peak areas from each other; even so, their performance was improved relative to the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
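Why Gaussian fitting can beat linear interpolation for an undersampled peak is easy to see in a minimal sketch (a synthetic peak, not the LC×LC data; fitting a parabola to the log-intensities of the top three samples is a standard way to recover a Gaussian apex):

```python
import numpy as np

t = np.linspace(0, 10, 11)                       # sparsely sampled first dimension
peak = np.exp(-0.5 * ((t - 4.8) / 0.8) ** 2)     # undersampled Gaussian peak
fine = np.linspace(0, 10, 201)

linear = np.interp(fine, t, peak)                # linear interpolation baseline

# "Gaussian fitting": a parabola fit to the log-intensities of the top 3 samples.
top = int(np.argmax(peak))
idx = [top - 1, top, top + 1]
a, b, c = np.polyfit(t[idx], np.log(peak[idx]), 2)

lin_apex = fine[np.argmax(linear)]               # stuck at a sample node (t = 5.0)
gauss_apex = -b / (2 * a)                        # recovers the true apex, 4.8
```

The linear interpolant cannot place the maximum between sample nodes, while the log-parabola recovers the true retention time exactly for a noiseless Gaussian.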
An efficient interpolation filter VLSI architecture for HEVC standard
NASA Astrophysics Data System (ADS)
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
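For reference, HEVC half-sample luma interpolation uses a symmetric 8-tap FIR filter (coefficients as given in the HEVC specification, normalized by 64). A plain software sketch of that filtering step, with edge padding as a simplifying assumption and no relation to the paper's VLSI data path, might look like:

```python
import numpy as np

# HEVC luma half-sample 8-tap filter coefficients (per the standard), /64.
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def half_pel_interp(row):
    """Interpolate the half-pixel positions along one row of integer pixels."""
    padded = np.pad(row, 3, mode='edge')                  # simple edge padding
    return np.correlate(padded, HALF_PEL, mode='valid') / 64.0

flat = half_pel_interp(np.full(10, 100.0))   # a flat area stays flat (taps sum to 64)
ramp = half_pel_interp(np.arange(10.0))      # symmetric taps: half-pel of a ramp is the midpoint
```

Because the taps sum to 64 and are symmetric, constant regions are preserved and a linear ramp interpolates exactly to the midpoints away from the padded edges.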
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media
2015-09-24
TRAC-M-TM-15-031, September 2015. Conflict Prediction Through Geo-Spatial Interpolation of Radicalization in Syrian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060114.
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed the diffusion tensors, with the direction of the tensors represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserve the tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
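The kernel-interpolator idea can be sketched with triangular membership functions on a 2-D grid (an illustrative reduction, not the paper's FLHI implementation): the conjunction (product) of 1-D memberships reproduces ordinary bilinear interpolation.

```python
import numpy as np

def hat(u):
    """Triangular membership function: 1 at 0, linearly decaying to 0 at |u| = 1."""
    return np.maximum(0.0, 1.0 - np.abs(u))

def kernel_interp(grid, x, y):
    """Weighted sum of kernel activations over grid nodes; with triangular
    memberships this reduces to ordinary bilinear interpolation."""
    ny, nx = grid.shape
    total, weight = 0.0, 0.0
    for j in range(ny):
        for i in range(nx):
            w = hat(x - i) * hat(y - j)   # conjunction of 1-D memberships
            total += w * grid[j, i]
            weight += w
    return total / weight

z = np.array([[0.0, 1.0], [2.0, 3.0]])
mid = kernel_interp(z, 0.5, 0.5)          # centre of the unit cell -> mean of corners
```

Swapping `hat` for a cubic or Lanczos kernel changes the interpolation characteristics without changing the evaluation scheme, which is the paper's central design point.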
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) regression for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.
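The just-in-time flavor can be sketched with a simplified stand-in (locally weighted linear regression rather than full LW-PLS, on invented RRI data): a local model is fitted from scratch around each query point, weighted by similarity to that point.

```python
import numpy as np

def lw_interpolate(t_known, rri_known, t_query, bandwidth=3.0):
    """Fill missing RRIs with locally weighted linear regression,
    a simplified stand-in for the LW-PLS just-in-time model."""
    preds = []
    for tq in t_query:
        w = np.exp(-0.5 * ((t_known - tq) / bandwidth) ** 2)  # similarity weights
        X = np.column_stack([np.ones_like(t_known), t_known])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rri_known)
        preds.append(beta[0] + beta[1] * tq)                  # local linear model
    return np.array(preds)

t = np.delete(np.arange(0, 60.0, 1.0), [25, 26, 27])   # beat times with a 3-s gap
rri = 800 + 50 * np.sin(2 * np.pi * t / 20)            # synthetic RRI trend (ms)
filled = lw_interpolate(t, rri, np.array([25.0, 26.0, 27.0]))
```

Because a fresh model is built per query, the interpolant tracks the local trend across the gap instead of using one global fit.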
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
Enhancement of panoramic image resolution based on swift interpolation of Bezier surface
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Yang, Guo-guang; Bai, Jian
2007-01-01
A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by way of flat-cylinder perspective. Due to the infinite depth of field and the linear mapping relationship between object and image, panoramic imaging systems play important roles in robot vision, surveillance and virtual reality applications. An annular image must be unwrapped to a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic spline interpolation can enhance the resolution of the unwrapped image, it takes too much time to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that takes their characteristics into account. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic spline and bilinear interpolation, while the time consumed is reduced by 78% compared with cubic interpolation.
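The unwrapping step itself can be sketched with bilinear sampling (the simple baseline the paper compares against; geometry and sizes here are invented): each output pixel maps back to a polar position in the annular image.

```python
import numpy as np

def unwrap_annulus(img, r_in, r_out, out_h, out_w):
    """Unwrap an annular panoramic image to a rectangle with bilinear sampling."""
    cy, cx = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
    rows = np.arange(out_h)[:, None]
    cols = np.arange(out_w)[None, :]
    r = r_in + (r_out - r_in) * rows / (out_h - 1)     # radius per output row
    theta = 2 * np.pi * cols / out_w                   # azimuth per output column
    x, y = cx + r * np.cos(theta), cy + r * np.sin(theta)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    x0 = np.clip(x0, 0, img.shape[1] - 2)
    y0 = np.clip(y0, 0, img.shape[0] - 2)
    # bilinear blend of the four neighbouring pixels
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

ann = np.random.default_rng(2).random((101, 101))      # stand-in annular image
rect = unwrap_annulus(ann, 20, 45, 26, 180)
```

Replacing the bilinear blend with a Bezier-surface patch evaluation is where the paper's speed/quality trade-off comes in.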
Interpolation problem for the solutions of linear elasticity equations based on monogenic functions
NASA Astrophysics Data System (ADS)
Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii
2017-11-01
Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of such interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as an interpolation basis.
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.
A Radiation Solver for the National Combustion Code
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2015-01-01
A methodology is given that converts an existing finite volume radiative transfer method requiring input of local absorption coefficients into one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating-species mole fractions that span the values of the problem for each value of g. These results are stored in a table, and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g, and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical-grid radiative transfer code, and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work, the intention is to apply this method to an existing unstructured-grid radiation code which can then be coupled directly to NCC.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
NASA Astrophysics Data System (ADS)
Valverde, Clodoaldo; Baseia, Basílio
2018-01-01
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model (JCM) and other Hamiltonians of this type. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of the photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
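The per-triangle linear interpolation step reduces to barycentric coordinates; a minimal sketch (my own, not the Sandia package's interface) for a single triangle:

```python
import numpy as np

def tri_interp(verts, values, p):
    """Linear interpolation inside a triangle via barycentric coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2                      # weights sum to one
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = np.array([10.0, 30.0, 50.0])         # tabulated values at the nodes
centroid = tri_interp(verts, vals, (1 / 3, 1 / 3))   # mean of the node values
```

A full implementation would precede this with a point-location search for the containing triangle; the interpolation itself is exact at the nodes and linear in between.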
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
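The contrast between equispaced and Chebyshev nodes can be reproduced with Newton's divided differences on Runge's function (a standard illustration, not taken from the report):

```python
import numpy as np

def divided_differences(x, y):
    """Newton divided-difference coefficients (triangular in-place scheme)."""
    c = y.astype(float).copy()
    for j in range(1, len(x)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton-form polynomial by Horner's rule."""
    out = np.full_like(t, c[-1])
    for k in range(len(c) - 2, -1, -1):
        out = out * (t - x[k]) + c[k]
    return out

f = lambda t: 1.0 / (1.0 + 25.0 * t ** 2)                 # Runge's function
n = 21
t = np.linspace(-1, 1, 401)
cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev nodes
equi = np.linspace(-1, 1, n)                              # equally spaced nodes
err_cheb = np.max(np.abs(newton_eval(divided_differences(cheb, f(cheb)), cheb, t) - f(t)))
err_equi = np.max(np.abs(newton_eval(divided_differences(equi, f(equi)), equi, t) - f(t)))
```

The equispaced interpolant diverges badly near the interval ends while the Chebyshev interpolant converges, which is the divergence/convergence dichotomy the abstract refers to.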
Quasi interpolation with Voronoi splines.
Mirzargar, Mahsa; Entezari, Alireza
2011-12-01
We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard
2014-06-01
The paper presents the results of testing various methods of interpolating permanent stations' velocity residua on a regular grid, which constitutes a continuous model of the velocity field on the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocities expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model of over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for such data interpolation was developed. All the mentioned methods were tested for whether they are local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows for the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternately, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
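As a rough illustration of the gridding step described above, the sketch below applies the Inverse Distance to a Power method (one of the tested methods) to scattered values on a regular grid. The station layout, the velocity field and the power parameter are all synthetic, invented only for this sketch:

```python
import numpy as np

def idw_grid(x, y, v, gx, gy, power=2.0, eps=1e-12):
    """Inverse Distance to a Power interpolation of scattered values
    v at (x, y) onto the regular grid defined by vectors gx, gy."""
    X, Y = np.meshgrid(gx, gy)                      # grid nodes
    d2 = (X[..., None] - x) ** 2 + (Y[..., None] - y) ** 2
    w = 1.0 / (d2 ** (power / 2.0) + eps)           # IDW weights
    return (w * v).sum(axis=-1) / w.sum(axis=-1)

# Scattered "station" velocity residua (synthetic smooth field)
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
v = 0.5 * x - 0.2 * y
grid = idw_grid(x, y, v, np.linspace(0, 10, 21), np.linspace(0, 10, 21))
print(grid.shape)  # (21, 21)
```

Because the IDW weights are positive and normalized, every gridded value is a convex combination of the station values, so the interpolated field stays within the range of the observations.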
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, T; Koo, T
Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated using the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical Systems, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with detector-to-detector distance larger than 1 mm were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For a fluence map with the same resolution, the cubic spline interpolation and the bicubic interpolation are almost equally the best interpolation methods, while the nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, the cubic spline, the bicubic, and the nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, the cubic spline, the bicubic, and the nearest neighbor interpolation, respectively.
Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution be used as an IMRT QA tool, and that the measured fluence maps be interpolated using the cubic spline or the bicubic interpolation for head and neck IMRT delivery. This work was supported by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291).
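The resolution study above can be mimicked in a few lines: downsample a smooth synthetic map and compare the upsampling error for nearest-neighbor, bilinear and cubic-spline interpolation. The map, its size and the 4x spacing are invented for this sketch, and `scipy.ndimage.zoom` stands in for the MATLAB interpolation routines:

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic smooth "fluence map": a Gaussian blob on a 64 x 64 grid
yy, xx = np.mgrid[0:64, 0:64]
fine = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)

coarse = fine[::4, ::4]                 # simulate a 4x coarser detector spacing

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Spline orders 0, 1, 3 correspond to nearest, bilinear and cubic interpolation
errors = {order: rmse(zoom(coarse, 4, order=order), fine)
          for order in (0, 1, 3)}
print(errors)
```

On such a smooth map the nearest-neighbor error is clearly the largest, mirroring the ranking reported in the abstract.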
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...congruences), and linear Diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates
Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong
2018-01-31
Synthetic aperture radar (SAR) equipped on a hypersonic air vehicle in near space has many advantages over conventional airborne SAR. However, its high-speed maneuvering characteristics with a curved trajectory result in serious range migration and exacerbate the contradiction between high resolution and wide swath. To solve this problem, this paper establishes an imaging geometrical model matched with the flight trajectory of the hypersonic platform and a multichannel azimuth sampling model based on the displaced phase center antenna (DPCA) technology. Furthermore, based on multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using the discrete Fourier transform is proposed to obtain azimuth uniformly sampled data. Due to the high complexity of the slant range model, it is difficult to derive a processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximate coefficients of the cosine function are obtained using a two-population coevolutionary algorithm. On this basis, to address the problem that the traditional Omega-K algorithm cannot compensate the residual phase, owing to the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging and presents a method of range division to achieve wide-swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.
The system spatial-frequency filtering of birefringence images of human blood layers
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Boychuk, T. M.; Mincer, O. P.; Angelsky, P. O.; Bodnar, N. B.; Oleinichenko, B. P.; Bizer, L. I.
2013-09-01
Among various opticophysical methods [1 - 3] for diagnosing the structure and properties of the optically anisotropic component of biological objects, a specific trend has been singled out: multidimensional laser polarimetry of microscopic images of biological tissues, followed by statistical, correlation and fractal analysis of the coordinate distributions of the azimuth and ellipticity of polarization, approximating polycrystalline protein networks by linear birefringence [4 - 10]. At the same time, in most cases obtaining a tissue sample requires a traumatic biopsy operation. In addition, the mechanisms by which optically anisotropic biological structures transform the polarization state of laser radiation are more varied (optical dichroism, circular birefringence). Moreover, real polycrystalline networks can be formed by biological crystals of different types, both in size and in optical properties. Finally, biological fluids such as blood, bile and urine are much more accessible for experimental investigation. Thus, further progress in laser polarimetry can be associated with the development of new methods of analysis and processing (selection) of polarization-heterogeneous images of biological tissues and fluids, taking into account a wider set of anisotropy mechanisms. Our research is aimed at developing an experimental method of Fourier polarimetry and spatial-frequency selection for the distributions of the azimuth and ellipticity of polarization of blood plasma laser images, with a view to diagnosing prostate cancer.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values ensure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic: employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
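The key monotonicity argument, that one set of positive interpolation weights shared across all non-exceedance probabilities yields a valid (monotone) interpolated distribution function, can be sketched as follows. The gauge records, distances and IDW weights are synthetic stand-ins for the actual random-mixing weights:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hourly rainfall "records" at 4 gauges (synthetic, exponential-like)
records = [rng.exponential(scale=s, size=2000) for s in (1.0, 1.2, 1.5, 2.0)]
probs = np.linspace(0.5, 0.99, 50)          # non-exceedance probabilities
Q = np.array([np.quantile(r, probs) for r in records])   # (gauges, probs)

# Positive IDW weights from the target site to the gauges (distances invented)
d = np.array([2.0, 3.5, 1.0, 4.0])
w = (1.0 / d ** 2) / (1.0 / d ** 2).sum()

# The SAME positive weights applied at every probability
q_target = w @ Q
print(q_target[:3])
```

Since each gauge's quantile function is nondecreasing in the probability and the weights are positive, the weighted combination is automatically a nondecreasing quantile function at the target site.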
An analysis of human-induced land transformations in the San Francisco Bay/Sacramento area
Kirtland, David A.; Gaydos, L.J.; Clarke, Keith; DeCola, Lee; Acevedo, William; Bell, Cindy
1994-01-01
Part of the U.S. Geological Survey's Global Change Research Program involves studying the area from the Pacific Ocean to the Sierra foothills to enhance understanding of the role that human activities play in global change. The study investigates the ways that humans transform the land and the effects that changing the landscape may have on regional and global systems. To accomplish this research, scientists are compiling records of historical transformations in the region's land cover over the last 140 years, developing a simulation model to predict land cover change, and assembling a digital data set to analyze and describe land transformations. The historical data regarding urban growth focus attention on the significant change the region underwent from 1850 to 1990. Animation is used to visualize a time series of the change in land cover. The historical change is being used to calibrate a prototype cellular automata model, developed to predict changes in urban land cover 100 years into the future. Future urban growth scenarios will be developed for analyzing possible human-induced impacts on land cover at a regional scale. These data aid in documenting and understanding human-induced land transformations from both historical and predictive perspectives. A descriptive analysis of the region is used to investigate the relationships among data characteristic of the region. These data consist of multilayer topography, climate, vegetation, and population data for a 256-km2 region of central California. A variety of multivariate analysis tools are used to integrate the data in raster format from map contours, interpolated climate observations, satellite observations, and population estimates.
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating the nearby non-edge pixels of each detected edge and possibly re-classifying them as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, so decision bands with variable widths are created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, 2-D directionless or isotropic filtering is used to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results clearly show that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low compared with existing methods; hence, it is attractive for real-time image applications.
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor noise robustness. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher noise robustness and better cross-term suppression for multi-QFM signals at a reasonable computational cost.
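For a single noiseless QFM component, the chirp rate and quadratic chirp rate can be recovered by a plain cubic least-squares fit to the unwrapped phase, a much simpler baseline than the 2D-PMPCRD and one that breaks down exactly where that method is needed (multiple components, heavy noise). The sampling parameters below are invented for the sketch:

```python
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
f0, k, q = 50.0, 120.0, 300.0          # start freq, chirp rate, quadratic chirp rate
s = np.exp(2j * np.pi * (f0 * t + 0.5 * k * t**2 + q * t**3 / 6))

# Phase is 2*pi*(f0*t + k/2*t^2 + q/6*t^3): read k and q off a cubic fit
phase = np.unwrap(np.angle(s))
c3, c2, c1, c0 = np.polyfit(t, phase, 3)
k_est, q_est = c2 / np.pi, 3 * c3 / np.pi
print(round(k_est, 1), round(q_est, 1))  # ≈ 120.0 300.0
```

The instantaneous frequency stays below fs/2 here, so `np.unwrap` recovers the phase without aliasing; a multicomponent signal would require separating the components first.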
Study and application of acoustic emission testing in fault diagnosis of low-speed heavy-duty gears.
Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei
2011-01-01
Most present studies on the acoustic emission signals of rotating machinery are experiment-oriented, while few of them involve on-spot applications. In this study, a method of redundant second generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision was not needed during the decomposition. The lengths of approximation signals and detail signals were the same as those of original ones, so the data volume was twice that of original signals; besides, the data redundancy characteristic also guaranteed the excellent analysis effect of the method. The analysis of the acoustic emission data from the faults of on-spot low-speed heavy-duty gears validated the redundant second generation wavelet transform in the processing and denoising of acoustic emission signals. Furthermore, the analysis illustrated that the acoustic emission testing could be used in the fault diagnosis of on-spot low-speed heavy-duty gears and could be a significant supplement to vibration testing diagnosis.
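A minimal sketch of the idea above: one undecimated lifting step with an interpolating predictor, so the approximation and detail signals keep the input length, as in the authors' redundant transform. This toy version (linear predictor, periodic boundaries) is an assumption of this sketch, not the paper's exact filter:

```python
import numpy as np

def redundant_lifting_level(x):
    """One level of a redundant (undecimated) lifting wavelet transform with a
    linear interpolating predict step: no subsampling, so approximation and
    detail have the same length as the input. Boundaries are periodic."""
    left, right = np.roll(x, 1), np.roll(x, -1)
    detail = x - 0.5 * (left + right)                               # predict
    approx = x + 0.25 * (np.roll(detail, 1) + np.roll(detail, -1))  # update
    return approx, detail

x = np.sin(np.linspace(0, 4 * np.pi, 256)) \
    + 0.01 * np.random.default_rng(2).normal(size=256)
a, d = redundant_lifting_level(x)
print(a.shape, d.shape)   # both (256,) — same length as the input
```

For a smooth signal the detail channel carries only the noise and curvature, which is what makes this kind of transform useful for denoising acoustic emission data.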
NASA Astrophysics Data System (ADS)
Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei
2015-01-01
This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal map and barycentric rational interpolation techniques to overcome the effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.
The Astronomical Instrument, So-ganui, Invented During King Sejong Period
NASA Astrophysics Data System (ADS)
Lee, Yong-Sam Lee; Kim, Sang-Hyuk
2002-09-01
So-ganui, namely the small simplified armillary sphere, was invented as an astronomical instrument by Lee Cheon, Jeong Cho and Jung In-Ji in the 16th year of King Sejong's reign. We collect records and observed data on So-ganui. It is designed to measure positions on the celestial sphere and to determine time. It can also transform between equatorial and horizontal coordinates, and can measure right ascension, declination, altitude and azimuth. It is composed of Sayu-hwan (four displacements ring), Jeokdo-hwan (equatorial dial), Baekgak-hwan (ring of one hundred intervals), Gyuhyeong (sighting alidade), Yongju (dragon pillar) and Bu (stand). So-ganui was conveniently portable and served as both a surveying and an astronomical instrument, making it possible to determine time during both day and night.
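The equatorial-to-horizontal conversion that So-ganui performed mechanically can be written down with the standard spherical-astronomy relations. This is a generic sketch of those formulas, not a reconstruction of the instrument's geometry; the hour angle, declination and latitude are example values:

```python
import numpy as np

def equatorial_to_horizontal(ha_deg, dec_deg, lat_deg):
    """Convert hour angle/declination to altitude/azimuth for an observer
    at latitude lat_deg (azimuth measured from north through east)."""
    H, dec, lat = np.radians([ha_deg, dec_deg, lat_deg])
    sin_alt = np.sin(dec) * np.sin(lat) + np.cos(dec) * np.cos(lat) * np.cos(H)
    alt = np.arcsin(sin_alt)
    az = np.arctan2(-np.cos(dec) * np.sin(H),
                    np.sin(dec) * np.cos(lat) - np.cos(dec) * np.sin(lat) * np.cos(H))
    return np.degrees(alt), np.degrees(az) % 360.0

# A star on the meridian (H = 0) at declination 30° seen from latitude 40°
alt, az = equatorial_to_horizontal(0.0, 30.0, 40.0)
print(round(alt, 1), round(az, 1))  # 80.0 180.0
```

On the meridian the altitude reduces to 90° minus the latitude-declination difference, which is the check used in the example.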
Determination of near and far field acoustics for advanced propeller configurations
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Jaeger, S. M.; Kim, J. H.
1989-01-01
A method has been studied for predicting the acoustic field of the SR-3 transonic propfan using flow data generated by two versions of the NASPROP-E computer code. Since the flow fields calculated by the solvers include the shock-wave system of the propeller, the nonlinear quadrupole noise source term is included along with the monopole and dipole noise sources in the calculation of the acoustic near field. Acoustic time histories in the near field are determined by transforming the azimuthal coordinate in the rotating, blade-fixed coordinate system to the time coordinate in a nonrotating coordinate system. Fourier analysis of the pressure time histories is used to obtain the frequency spectra of the near-field noise.
Investigations of interpolation errors of angle encoders for high precision angle metrology
NASA Astrophysics Data System (ADS)
Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa
2018-06-01
Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities, acquired using four different high precision angle encoders/interpolators/rotary tables, are presented. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). The results are discussed in conjunction with the equipment used.
Pearce, Mark A
2015-08-01
EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
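The neighbor-count-priority ordering can be sketched independently of MATLAB: fill unindexed points with the most valid 8-neighbors first, then relax the requirement round by round. This toy NumPy version (synthetic data, simple averaging) mimics only the ordering idea, not EBSDinterp's band-contrast thresholding:

```python
import numpy as np

def fill_by_neighbor_count(values, mask, min_neighbors=4):
    """Iteratively fill invalid points (mask == False), interpolating points
    with the most valid 8-neighbours first, then relaxing the requirement
    down to min_neighbors."""
    vals, ok = values.copy(), mask.copy()
    for need in range(8, min_neighbors - 1, -1):
        changed = True
        while changed:
            changed = False
            pad_v, pad_ok = np.pad(vals, 1), np.pad(ok, 1)
            offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                       if (i, j) != (0, 0)]
            # Sum and count of valid neighbours for every pixel
            nsum = sum(np.roll(np.roll(pad_v * pad_ok, i, 0), j, 1)[1:-1, 1:-1]
                       for i, j in offsets)
            ncnt = sum(np.roll(np.roll(pad_ok, i, 0), j, 1)[1:-1, 1:-1]
                       for i, j in offsets)
            target = (~ok) & (ncnt >= need)
            if target.any():
                vals[target] = nsum[target] / ncnt[target]
                ok |= target
                changed = True
    return vals, ok

a = np.arange(36, dtype=float).reshape(6, 6)
m = np.ones((6, 6), bool)
m[2, 2], a[2, 2] = False, 0.0          # one "non-indexed" point
filled, ok = fill_by_neighbor_count(a, m)
print(filled[2, 2])                    # 14.0 — mean of its 8 neighbours
```

Points filled early (many neighbors) then count as valid neighbors for later, harder points, which is the essence of the priority scheme described above.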
The natural neighbor series manuals and source codes
NASA Astrophysics Data System (ADS)
Watson, Dave
1999-05-01
This software series is concerned with the reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior which, although inherent in the data, may not otherwise be apparent. It is also the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.
NASA Astrophysics Data System (ADS)
Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.
2017-12-01
The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or likely even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represents a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics to equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
Calibration of Safecast dose rate measurements.
Cervone, Guido; Hultquist, Carolynne
2018-10-01
A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different Cesium isotopes and their changing magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time dependent calibration takes into account this Cesium isotopes ratio, and it is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which date the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
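The methodology's core loop, fit a calibration coefficient in each temporal window, then regress the coefficients over time with a quadratic, can be sketched on synthetic data. The decay curve, window count and noise level below are invented; this illustrates the aggregation idea only, not the Safecast/DOE datasets:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic truth: the calibration factor drifts with time (isotope-ratio effect)
t_windows = np.linspace(0.0, 5.0, 12)                 # years since accident
true_factor = 1.0 / (1.0 + 0.8 * np.exp(-0.3 * t_windows))

# Per-window linear fit of reference vs. contributed dose rates
coeffs = []
for t, f in zip(t_windows, true_factor):
    contributed = rng.uniform(0.1, 5.0, 200)          # contributed readings
    reference = f * contributed + rng.normal(0, 0.01, 200)
    slope, _ = np.polyfit(contributed, reference, 1)
    coeffs.append(slope)

# Quadratic regression over the window coefficients -> calibration curve c(t)
cal = np.poly1d(np.polyfit(t_windows, coeffs, 2))
print(round(float(cal(2.0)), 3))
```

A static (time-independent) factor would miss the drift entirely, which is the failure mode the abstract attributes to the standard Safecast transformation.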
$X(3872)$ and $Y(4140)$ using diquark-antidiquark operators with lattice QCD
Padmanath, M.; Lang, C. B.; Prelovsek Komelj, Sasa
2015-08-01
We perform a lattice study of charmonium-like mesons with $J^{PC}=1^{++}$ and three quark contents $\bar cc \bar du$, $\bar cc(\bar uu+\bar dd)$ and $\bar cc \bar ss$, where the latter two can mix with $\bar cc$. This simulation with $N_f=2$ and $m_\pi=266$ MeV aims at the possible signatures of four-quark exotic states. We utilize a large basis of $\bar cc$, two-meson and diquark-antidiquark interpolating fields, with diquarks in both anti-triplet and sextet color representations. A lattice candidate for X(3872) with I=0 is observed very close to the experimental state only if both $\bar cc$ and $D\bar D^*$ interpolators are included; the candidate is not found if diquark-antidiquark and $D\bar D^*$ are used in the absence of $\bar cc$. No candidate for neutral or charged X(3872), or any other exotic candidates, are found in the I=1 channel. We also do not find signatures of exotic $\bar cc\bar ss$ candidates below 4.3 GeV, such as Y(4140). Possible physics and methodology related reasons for that are discussed. Along the way, we present the diquark-antidiquark operators as linear combinations of the two-meson operators via the Fierz transformations.
Custom large scale integrated circuits for spaceborne SAR processors
NASA Technical Reports Server (NTRS)
Tyree, V. C.
1978-01-01
The application of modern LSI technology to the development of a time-domain azimuth correlator for SAR processing is discussed. General design requirements for azimuth correlators for missions such as SEASAT-A, Venus orbital imaging radar (VOIR), and shuttle imaging radar (SIR) are summarized. Several azimuth correlator architectures that are suitable for implementation using custom LSI devices are described. Technical factors pertaining to the selection of appropriate LSI technologies are discussed, and the maturity of alternative technologies for spacecraft applications is reported in the context of expected space mission launch dates. The preliminary design of a custom LSI time-domain azimuth correlator device (ACD) being developed for use in future SAR processors is detailed.
Effect of ambiguities on SAR picture quality
NASA Technical Reports Server (NTRS)
Korwar, V. N.; Lipes, R. G.
1978-01-01
The degradation of picture quality in a high-resolution, large-swath SAR mapping system caused by speckle, additive white Gaussian noise, and range and azimuth ambiguities, which arise because the antenna pattern of a square-aperture antenna is not finite in extent, was studied and simulated. The effect of the azimuth antenna pattern was accounted for by calculating the azimuth ambiguity function. Range ambiguities were accounted for by adding, to each pixel of interest, appropriate pixels at a range separation corresponding to one pulse repetition period, attenuated by the antenna pattern. It is concluded that azimuth ambiguities do not cause any noticeable degradation (at least for systems with large time-bandwidth products), but range ambiguities might.
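The range-ambiguity mechanism described above can be illustrated with a short sketch: for each pixel, the pixels one pulse-repetition interval away in range are folded in after attenuation by the antenna pattern. This is not the authors' simulation; the scalar attenuation and pixel offset are stand-ins for a real antenna pattern and PRT geometry.

```python
import numpy as np

def add_range_ambiguities(image, prt_offset_px, attenuation_db):
    """Fold range-ambiguous returns into each pixel: add the pixels one
    pulse-repetition period away in range (axis 0), attenuated by a scalar
    stand-in for the antenna-pattern gain."""
    gain = 10.0 ** (-attenuation_db / 20.0)
    out = image.astype(float).copy()
    # Ambiguity from the next PRT interval (further in range), zero-padded
    shifted = np.zeros_like(out)
    shifted[:-prt_offset_px, :] = image[prt_offset_px:, :]
    out += gain * shifted
    # Ambiguity from the previous PRT interval (nearer in range)
    shifted = np.zeros_like(out)
    shifted[prt_offset_px:, :] = image[:-prt_offset_px, :]
    out += gain * shifted
    return out

scene = np.random.default_rng(0).random((64, 32))
degraded = add_range_ambiguities(scene, prt_offset_px=16, attenuation_db=20.0)
```

With 20 dB of antenna-pattern suppression, each pixel picks up one tenth of the amplitude of its two range-ambiguous neighbours, which is the kind of low-level contamination the study evaluates.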
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no reliable motion is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. In contrast to explicit edge-direction estimation, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Equatorial disc and dawn-dusk currents in the frontside magnetosphere of Jupiter - Pioneer 10 and 11
NASA Technical Reports Server (NTRS)
Jones, D. E.; Thomas, B. T.; Melville, J. G., II
1981-01-01
Observations by Pioneer 10 and 11 show that the strongest azimuthal fields are observed near the dawn meridian (Pioneer 10) while the weakest occur near the noon meridian (Pioneer 11), suggesting a strong local time dependence for the corresponding radial current system. Modeling studies of the radial component of the field observed by both spacecraft suggest that the corresponding azimuthal current system must also be a strong function of local time. Both the azimuthal and the radial field component signatures exhibit sharp dips and reversals, requiring thin radial and azimuthal current systems. There is also a suggestion that these two current systems either are interacting or are due, at least in part, to the same current. It is suggested that a plausible current model consists of the superposition of a thin, local-time-independent azimuthal current system plus the equatorial portion of a tail-like current system that extends into the dayside magnetosphere.
Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy
NASA Astrophysics Data System (ADS)
Belli, Sirio; Contursi, Alessandra; Davies, Richard I.
2018-05-01
We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline, and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y) that are derived from arc or sky emission lines and slit edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
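The single-resampling idea behind the (λ, γ) rectification can be sketched as follows. This is not the flame implementation: the polynomial coefficients are arbitrary demo values standing in for fits to arc/sky lines and slit-edge traces, and the output grid is filled by a simple nearest-bin accumulation rather than the pipeline's interpolation.

```python
import numpy as np

# Illustrative 2D polynomial transforms standing in for the lambda(x, y) and
# gamma(x, y) fits derived from arc/sky lines and slit-edge tracing.
def lam(x, y):                 # wavelength at detector pixel (x, y)
    return 1.50e4 + 2.0 * x + 0.01 * x * y

def gam(x, y):                 # spatial coordinate along the slit
    return y + 1e-4 * x ** 2   # slight trace curvature

def rectify(frame, lam_axis, gam_axis):
    """Single-step rectification: map every detector pixel to its
    (lambda, gamma) coordinates and accumulate it into the output grid
    (nearest-bin 'drizzle'). A production pipeline would invert the
    transforms and interpolate, but still with only one resampling."""
    ny, nx = frame.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    l = lam(xx, yy).ravel()
    g = gam(xx, yy).ravel()
    li = np.clip(np.searchsorted(lam_axis, l), 0, lam_axis.size - 1)
    gi = np.clip(np.searchsorted(gam_axis, g), 0, gam_axis.size - 1)
    acc = np.zeros((gam_axis.size, lam_axis.size))
    cnt = np.zeros_like(acc)
    np.add.at(acc, (gi, li), frame.ravel())   # unbuffered accumulation
    np.add.at(cnt, (gi, li), 1.0)
    return acc / np.maximum(cnt, 1.0)

frame = np.ones((32, 64))                     # a flat dummy exposure
lam_axis = np.linspace(1.50e4, 1.52e4, 80)
gam_axis = np.linspace(0.0, 32.0, 40)
rect = rectify(frame, lam_axis, gam_axis)
```

Because wavelength calibration and rectification share one transform, the data are resampled only once, which is the point the abstract makes.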
A remark on the sign change of the four-particle azimuthal cumulant in small systems
NASA Astrophysics Data System (ADS)
Bzdak, Adam; Ma, Guo-Liang
2018-06-01
The azimuthal cumulants, c2 { 2 } and c2 { 4 }, originating from the global conservation of transverse momentum in the presence of hydro-like elliptic flow are calculated. We observe the sign change of c2 { 4 } for small number of produced particles. This is in a qualitative agreement with the recent ATLAS measurement of multi-particle azimuthal correlations with the subevent cumulant method.
Pseudospectral Model for Hybrid PIC Hall-effect Thruster Simulation
2015-07-01
and Fernandez (hybrid-PIC). This work follows the example of Lam and Fernandez but substitutes a spectral description in the azimuthal direction to... ...of a pseudospectral azimuthal-axial hybrid-PIC HET code which is designed to explicitly resolve and filter azimuthal fluctuations in the
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Stimmel, G. L.
1971-01-01
A simplified method is described for determining the position of the lunar roving vehicle on the lunar surface during Apollo 15. The method is based upon sun compass azimuth measurements of three lunar landmarks. The difference between the landmark azimuth and the sun azimuth is measured and the resulting data are voice relayed to the Mission Control Center for processing.
Active and Passive Remote Sensing of Ice
1991-11-15
To demonstrate the use of polarimetry in passive remote sensing of azimuthally asymmetric features on a terrain surface, an experiment was designed... azimuthal asymmetry on the remotely sensed soil surface. It is also observed from the experiment that the brightness temperatures for all three Stokes... significant implication of this experiment is that the surface asymmetry can be detected with a measurement of U at a single azimuthal angle.
Range and azimuth resolution enhancement for 94 GHz real-beam radar
NASA Astrophysics Data System (ADS)
Liu, Guoqing; Yang, Ken; Sykora, Brian; Salha, Imad
2008-04-01
In this paper, two-dimensional (2D) (range and azimuth) resolution enhancement is investigated for millimeter wave (mmW) real-beam radar (RBR) with linear or non-linear antenna scan in the azimuth dimension. We design a new architecture for super-resolution processing, in which a dual-mode approach is used to define the region of interest for 2D resolution enhancement and a combined approach is deployed to obtain accurate location and amplitude estimates of targets within the region of interest. To achieve 2D resolution enhancement, we first adopt the Capon beamformer (CB) approach (also known as the minimum variance method (MVM)) to enhance range resolution. A generalized CB (GCB) approach is then applied to the azimuth dimension for azimuth resolution enhancement. The GCB approach does not rely on whether the azimuth sampling is even or not and thus can be used in both linear and non-linear antenna scanning modes. The effectiveness of the resolution enhancement is demonstrated using both simulation and test data. The results from 94 GHz real-beam frequency modulation continuous wave (FMCW) radar data show that the overall image quality is significantly improved, based on visual evaluation and comparison with the original real-beam radar image.
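As a minimal illustration of the Capon/MVM idea used here for resolution enhancement (not the paper's generalized GCB formulation), the sketch below forms the Capon spectrum P(θ) = 1 / (aᴴ R⁻¹ a) from a sample covariance; the array geometry, source angles, and noise level are all arbitrary demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 200                 # array elements, snapshots (demo values)
d = 0.5                       # element spacing in wavelengths

def steering(theta_deg):
    """Uniform linear array steering vector toward theta (degrees)."""
    m = np.arange(M)
    return np.exp(2j * np.pi * d * m * np.sin(np.radians(theta_deg)))

# Two closely spaced sources plus complex white noise
angles = [-5.0, 5.0]
X = sum(steering(a)[:, None] * rng.standard_normal(N) for a in angles)
X = X + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                        # sample covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(M))    # diagonal loading for stability

scan = np.linspace(-30, 30, 241)
p_capon = np.array([1.0 / np.real(steering(t).conj() @ Rinv @ steering(t))
                    for t in scan])           # Capon (MVM) power spectrum
```

The minimum-variance weighting suppresses power from all directions except the scan angle, which is what yields sharper peaks than the conventional beamformer.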
Fast digital zooming system using directionally adaptive image interpolation and restoration.
Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki
2014-01-01
This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or the adaptive cubic-spline interpolation filter is then selectively used, according to the refined edge orientation, to remove jagged artifacts in slanted edge regions. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation, using the directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with a fast FIR filter-based computational structure.
Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Niu, Xiamu
2016-01-01
This paper is concerned with the feasibility of the classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical image nearest-neighbor interpolation for quantum images of FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of the nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, the quantum circuit of the nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using controlled-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.
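For reference, the classical nearest-neighbor interpolation that the quantum circuits emulate is just index replication: each output pixel copies the input pixel whose centre is closest. A minimal sketch (classical, not the quantum circuit):

```python
import numpy as np

def nn_upscale(img, factor):
    """Classical nearest-neighbour image interpolation by an integer
    factor: output pixel (i, j) copies input pixel (i // factor, j // factor)."""
    h, w = img.shape
    rows = np.arange(h * factor) // factor
    cols = np.arange(w * factor) // factor
    return img[np.ix_(rows, cols)]

img = np.array([[0, 1], [2, 3]])
up = nn_upscale(img, 2)
```

The floor-division index map is the operation that the FRQI/NEQR circuits realize on position qubits.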
Curiosity's Mars Hand Lens Imager (MAHLI) Investigation
Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter
2012-01-01
The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts, a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.
Comparison of measured and modeled BRDF of natural targets
NASA Astrophysics Data System (ADS)
Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier
1999-07-01
The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating or simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has recently been developed by ONERA/DOTA. It was designed to allow both laboratory and outside measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain ranges from 0 to 60 degrees for the zenith angle of the source and the sensor, and from 0 to 180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial non-uniformity of the target irradiance has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. Some BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models among the most popular in the literature have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. The comparative evaluation of model performance is carried out against different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise, and interpolation.
Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy
Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro
2016-01-01
The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
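The Jacobian-based iteration described above is essentially a Newton root-find on the field model. The sketch below uses a toy nonlinear field and a finite-difference Jacobian in place of the paper's finite-element field solutions and closed-form Jacobian; only the update structure is the same.

```python
import numpy as np

def field(p):
    """Toy stand-in for the interpolated magnetic-field model: a smooth,
    nonlinear map from capsule position (x, y, z) to a 3-vector reading."""
    x, y, z = p
    return np.array([x + 0.1 * y * z, y + 0.1 * x * z, z + 0.1 * x * y])

def jacobian(p, eps=1e-6):
    """The paper derives a closed-form Jacobian; central finite
    differences keep this sketch generic."""
    J = np.zeros((3, 3))
    for i in range(3):
        dp = np.zeros(3); dp[i] = eps
        J[:, i] = (field(p + dp) - field(p - dp)) / (2 * eps)
    return J

def localize(b_meas, p0, iters=10):
    """Iteratively refine the pose estimate from a field measurement."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = b_meas - field(p)                    # sensor residual
        p = p + np.linalg.solve(jacobian(p), r)  # Newton update
    return p

true_p = np.array([0.3, -0.2, 0.5])
est = localize(field(true_p), p0=np.zeros(3))
```

In the real system the iteration runs against live sensor readings, which is why a cheap closed-form Jacobian (rather than finite differences) matters for the 100 Hz-plus refresh rate.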
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
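A minimal version of the ray-traced MIP comparison can be sketched with axis-aligned rays through a small volume, sampling each ray at fractional depths with either the nearest-neighbour or the linear kernel (the cubic and voxel-projection variants are omitted; the data are synthetic, not MRA).

```python
import numpy as np

def mip_along_rows(vol, step=0.3, mode="linear"):
    """Maximum-intensity projection along axis 0: each column is a 'ray'
    sampled at fractional depths t with the chosen interpolation kernel."""
    n = vol.shape[0]
    t = np.arange(0.0, n - 1.0 + 1e-9, step)    # fractional sample depths
    if mode == "nearest":
        samples = vol[np.round(t).astype(int)]
    else:                                        # linear interpolation kernel
        i0 = np.floor(t).astype(int)
        i1 = np.minimum(i0 + 1, n - 1)
        w = (t - i0)[:, None]
        samples = (1 - w) * vol[i0] + w * vol[i1]
    return samples.max(axis=0)

vol = np.zeros((8, 5))
vol[3, 2] = 1.0                                  # a single bright "vessel" voxel
mip_lin = mip_along_rows(vol, mode="linear")
mip_nn = mip_along_rows(vol, mode="nearest")
```

With nearest-neighbour sampling, every depth that rounds to the bright voxel returns its full intensity, which is the width-exaggeration effect the abstract notes for small vessels.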
Center for Space Telemetering and Telecommunications Systems, New Mexico State University
NASA Technical Reports Server (NTRS)
Horan, Stephen; DeLeon, Phillip; Borah, Deva; Lyman, Ray
2002-01-01
This viewgraph presentation gives an overview of the Center for Space Telemetering and Telecommunications Systems activities at New Mexico State University. Presentations cover the following topics: (1) small satellite communications, including nanosatellite radio and virtual satellite development; (2) modulation and detection studies, including details on smooth phase interpolated keying (SPIK) spectra and highlights of an adaptive turbo multiuser detector; (3) decoupled approaches to nonlinear ISI compensation; (4) space internet testing; (5) optical communication; (6) Linux-based receiver for lightweight optical communications without a laser in space, including software design, performance analysis, and the receiver algorithm; (7) carrier tracking hardware; and (8) subband transforms for adaptive direct sequence spread spectrum receivers.
Plasma myelin basic protein assay using Gilford enzyme immunoassay cuvettes.
Groome, N P
1981-10-01
The assay of myelin basic protein in body fluids has potential clinical importance as a routine indicator of demyelination. Preliminary details of a competitive enzyme immunoassay for this protein have previously been published by the author (Groome, N. P. (1980) J. Neurochem. 35, 1409-1417). The present paper describes the adaptation of this assay for use on human plasma and various aspects of routine data processing. A commercially available cuvette system was found to have advantages over microtitre plates but required a permuted arrangement of sample replicates for consistent results. For dose interpolation, the standard curve could be fitted to a three-parameter non-linear equation by regression analysis or linearised by the logit/log transformation.
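The logit/log linearisation mentioned for dose interpolation works as follows: plotting logit(B/B0) against log(dose) makes a sigmoidal competitive standard curve approximately linear, so unknowns can be read off by inverting a straight-line fit. The sketch below uses hypothetical standard-curve values, not data from the paper.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

# Hypothetical standard curve: fractional binding B/B0 versus dose
dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0])        # e.g. ng/ml (assumed)
b_over_b0 = np.array([0.88, 0.72, 0.48, 0.26, 0.11])  # competitive assay

# Linearise: logit(B/B0) is approximately linear in log(dose)
slope, intercept = np.polyfit(np.log(dose), logit(b_over_b0), 1)

def interpolate_dose(b_measured):
    """Invert the fitted line to read a dose off the standard curve."""
    return np.exp((logit(b_measured) - intercept) / slope)

unknown = interpolate_dose(0.5)   # a sample bound at 50% of B0
```

For a competitive assay the fitted slope is negative (binding falls as dose rises), and a sample at 50% binding interpolates to a dose near the curve's midpoint.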
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
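The decimate-compress-interpolate-sharpen chain can be sketched end to end; the JPEG/wavelet compression step is deliberately omitted (the reduced image is passed through unchanged), and the simple block-average decimation, smoothing interpolation, and Laplacian sharpening below are stand-ins for the patent's specific techniques.

```python
import numpy as np

def decimate2x(img):
    """Two-dimensional decimation by 2x2 block averaging (even dims assumed)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def interp2x(img):
    """Interpolate back to the original array size: pixel replication
    followed by a small separable smoothing pass."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    k = np.array([0.25, 0.5, 0.25])
    up = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, up)
    up = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, up)
    return up

def sharpen(img, amount=1.0):
    """Edge (contour) enhancement via unsharp masking: subtract the
    discrete Laplacian, i.e. the classic 5-point sharpening kernel."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return img - amount * lap

img = np.random.default_rng(2).random((16, 16))
small = decimate2x(img)              # transmit this (after e.g. JPEG)
recon = sharpen(interp2x(small))     # receiver: interpolate, then sharpen
```

The design point of the patent is that decimating before compression shrinks the payload further, while the final sharpening step restores perceptual edge contrast lost to decimation and interpolation.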
Application of velocity filtering to optical-flow passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1992-01-01
The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
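The shift-and-add matched filter derived above can be sketched for the constant-speed case: each frame is shifted back by its expected motion and the stack is summed, so a point moving at the filter's velocity adds coherently while noise averages down. Integer pixel shifts are used for simplicity; the paper's pixel-interpolation and object-expansion handling are omitted.

```python
import numpy as np

def shift_and_add(frames, vx):
    """Velocity filter for a constant horizontal speed vx (pixels/frame):
    shift frame k back by k*vx and average. A target moving at vx stacks
    coherently; everything else is smeared."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, f in enumerate(frames):
        acc += np.roll(f, -k * vx, axis=1)
    return acc / len(frames)

# A point target moving 2 px/frame over a noisy background
rng = np.random.default_rng(3)
frames = []
for k in range(10):
    f = 0.3 * rng.standard_normal((16, 64))
    f[8, 5 + 2 * k] += 1.0                 # target track
    frames.append(f)

stacked = shift_and_add(frames, vx=2)
```

After stacking, the target stands out at its initial position while the per-pixel noise standard deviation drops by roughly the square root of the number of frames.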
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded that found between interpolation techniques for a specific survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
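The relationship between local surface roughness and DEM error can be illustrated with a toy example: a smooth "bank edge" surface is sparsely sampled and rebuilt by linear interpolation, and the resulting error is compared between rough and flat zones. The window size, surface shape, and sampling step are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def local_std(z, win=3):
    """Standard deviation of elevations in a square moving window
    (a stand-in for the paper's 0.2-m diameter window)."""
    pad = win // 2
    zp = np.pad(z, pad, mode="edge")
    stack = np.stack([zp[i:i + z.shape[0], j:j + z.shape[1]]
                      for i in range(win) for j in range(win)])
    return stack.std(axis=0)

xx = np.arange(40)[None, :] * np.ones((40, 1))
reference = np.tanh((xx - 20.0) / 3.0)        # TLS "truth" with a slope break

# Sparse survey: sample every 4th column, rebuild by linear interpolation
dem = np.stack([np.interp(np.arange(40), np.arange(0, 40, 4), row[::4])
                for row in reference])

error = np.abs(dem - reference)
roughness = local_std(reference)
high = error[roughness > roughness.mean()].mean()   # near the slope break
low = error[roughness <= roughness.mean()].mean()   # flatter surfaces
```

As in the study, interpolation error concentrates where local topographic variation is high (the slope break) and is small on the flat surfaces.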
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals
Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.
2016-01-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478
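The underlying idea of baseline removal by piecewise linear interpolation through non-uniform isoelectric points can be sketched as follows. Everything here is synthetic: the "isoelectric" knot times are stand-ins for the Letter's detected points, and the known wander is sampled directly at them rather than detected from the waveform.

```python
import numpy as np

fs = 250                                     # Hz, an assumed sampling rate
t = np.arange(0, 10, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 8 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # synthetic baseline wander
signal = ecg_like + drift

# Stand-ins for the detected isoelectric points (non-uniformly spaced);
# in practice their values come from the detected isoelectric samples,
# here the known wander is sampled directly to keep the sketch simple.
knots_t = np.arange(0.2, 10, 0.4) + 0.05 * np.sin(np.arange(25))
knots_v = np.interp(knots_t, t, drift)

baseline = np.interp(t, knots_t, knots_v)    # piecewise linear baseline
corrected = signal - baseline
```

Because the wander is slow relative to the knot spacing, the piecewise linear estimate tracks it closely, and subtracting it leaves the heartbeat-like component essentially intact.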
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. 
Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
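The integral-conserving idea above can be sketched in a few lines. This is not the paper's parametrized Hermitian curve (which exposes a user-tunable overshoot parameter); as a stand-in it uses SciPy's monotone PCHIP Hermite interpolant of the cumulative integral, which likewise suppresses overshoot and keeps re-binned values non-negative. The function name and interface are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_regrid(src_edges, src_values, dst_edges):
    """Re-bin histogrammed data onto new bin edges, conserving the integral."""
    # Cumulative integral of the source histogram, evaluated at its bin edges
    cum = np.concatenate(([0.0], np.cumsum(src_values * np.diff(src_edges))))
    # A monotone Hermite interpolant of the (non-decreasing) cumulative curve
    # avoids overshoot/undershoot, so the re-binned values stay >= 0
    F = PchipInterpolator(src_edges, cum)
    return np.diff(F(dst_edges)) / np.diff(dst_edges)
```

Because the interpolant passes exactly through the cumulative samples, the total integral over the full range is conserved by construction.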
Tracing the origin of azimuthal gluon correlations in the color glass condensate
NASA Astrophysics Data System (ADS)
Lappi, T.; Schenke, B.; Schlichting, S.; Venugopalan, R.
2016-01-01
We examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. We will show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
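For orientation, the Fourier coefficients v_n mentioned above are simply the harmonic content of the azimuthal distribution. The sketch below evaluates them by quadrature for a known single-particle distribution relative to a fixed reference plane; it is purely illustrative and bears no relation to the CGC computation itself, where v_n is extracted from multiparticle correlations.

```python
import numpy as np

def vn(weight_fn, n, m=100_000):
    # v_n = <cos(n*phi)> over the azimuthal distribution weight_fn(phi),
    # measured relative to a fixed reference plane (quadrature, not events)
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    w = weight_fn(phi)
    return float(np.average(np.cos(n * phi), weights=w))
```

For a distribution 1 + 2 v2 cos(2φ), the quadrature recovers v2 and returns zero for the other harmonics.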
Spatial interpolation techniques using R
Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...
2013-01-01
Background: Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods: We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results: In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method.
Conclusions: This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
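A toy version of the shape-based idea can be sketched as follows: each slice's lumen boundary is represented by radius samples at fixed angles, and the radii are splined along the pullback axis with natural cubic splines to generate intermediary contours. This is a simplified reading of the approach, with an assumed contour parametrization; the names and data layout are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interp_contours(radii, z_src, z_dst):
    # radii: (n_slices, n_angles) lumen radius at fixed angles per slice.
    # Natural cubic splines along the pullback axis z generate the
    # intermediary contours (shape-based rather than per-pixel blending).
    cs = CubicSpline(z_src, radii, axis=0, bc_type='natural')
    return cs(z_dst)
```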
Survey: interpolation methods for whole slide image processing.
Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T
2017-02-01
In pathology, whole slide images of histological and cytological samples are evaluated for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
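The evaluation protocol described above (scale down, rescale back with the same method, compare to the original) can be sketched in a few lines. This is a minimal stand-in using SciPy's spline-based `zoom` rather than the nine methods surveyed; the function name and the cropping step are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def roundtrip_mse(img, factor=2, order=1):
    # Downscale then upscale with the same spline order (1 = bilinear,
    # 3 = bicubic) and measure the round-trip mean squared error
    small = zoom(img, 1.0 / factor, order=order)
    back = zoom(small, factor, order=order)
    back = back[: img.shape[0], : img.shape[1]]  # guard against rounding
    return float(np.mean((img - back) ** 2))
```

A smooth image survives the round trip almost unchanged, while high-frequency content (e.g. fine nuclear texture) is irrecoverably lost at downscaling, which is what the quantification comparison in the survey measures.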
Zheng, Jingjing; Frisch, Michael J
2017-12-12
An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
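One building block of the scheme above is the iterative Hessian update itself. The sketch below shows the standard BFGS update as one plausible such update (the paper's exact update formula and its interpolated-surface construction are not reproduced here); the function name is illustrative.

```python
import numpy as np

def bfgs_update(H, s, y):
    # BFGS update of an approximate Hessian H from a geometry step s
    # and the corresponding gradient change y; the updated matrix
    # satisfies the secant condition H_new @ s == y
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)
```

The secant condition is exactly what lets the surrogate surface reproduce the latest gradient information without a new Hessian calculation.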
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that using such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, we can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or we can construct on a uniform mesh an interpolation formula that is exact on the boundary layer components. In this paper the numerical differentiation formulas for functions with large gradients based on the interpolation formulas on the uniform mesh, which were proposed by A.I. Zadorin, are investigated. The formulas for the first and the second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in the particular cases. The numerical results validating the theoretical estimates are discussed.
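The O(1) failure described above is easy to reproduce with the standard three-point formulas on a uniform mesh (the special boundary-layer-exact formulas of the paper are not reproduced here; this is a generic illustration).

```python
import numpy as np

def d1_central(f, x, h):
    # Standard second-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2_central(f, x, h):
    # Standard central difference for f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
```

For a smooth function these are second-order accurate, but for a boundary-layer function such as exp(-x/ε) with ε ≪ h the central difference returns an O(1/h) value where the true derivative is essentially zero, i.e. the error does not vanish with the mesh.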
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
NASA Astrophysics Data System (ADS)
Huang, Q.; Schmerr, N. C.; Waszek, L.; Beghein, C.; Weidner, E. C.
2017-12-01
The mantle transition zone (MTZ) is delineated by the 410 and 660 km discontinuities and plays an important role in mantle convection. Mineral physics experiments predict that wadsleyite and ringwoodite can have 13% and 2% single-crystal anisotropy, respectively, indicating that seismic anisotropy is likely to exist in the upper part of the MTZ when MTZ minerals are aligned by mantle flow (e.g. subducting slabs). Here we use the SS precursors to study the topography and seismic anisotropy in the vicinity of the MTZ discontinuities. An up-to-date SS precursor dataset consisting of 45,624 records was collected to investigate MTZ topography and anisotropy. We stacked the whole dataset into 9 geographical caps to obtain the global topography of the 410 and 660 km discontinuities. The MTZ is thickened by 15 km beneath subduction zones (e.g. Japan and South America) and thinned by 15 km beneath mantle plume regions (e.g. Bowie and Iceland hotspots), which is consistent with thermal heterogeneity in the mid-mantle. We identify four locations with sufficient bounce point density and azimuthal coverage of SS precursors to study azimuthal anisotropy in the MTZ: the central Pacific, the northwest Pacific, Greenland and the central Atlantic. We stack the data by the azimuth of SS bounce points falling within a range of 2000 km in these four locations. The goal is to detect the azimuthal dependence of the travel times and amplitudes of the SS precursors, and thus constrain azimuthal anisotropy in the MTZ. The central Pacific bin has a fast direction at 110° for both S410S and S660S azimuthal stacks, which is interpreted as seismic anisotropy in the overlying upper mantle. We also stack data in subduction zones by the relative azimuths of bounce points compared to mantle flow directions to test the hypothesis that subducting slabs can cause azimuthal anisotropy in the MTZ.
A trench-parallel fast direction is observed for both S410S and S660S travel times and amplitudes, but not for their differential travel times. This indicates that subducting slabs impart azimuthal anisotropy just above the 410 km discontinuity, but detectable anisotropy does not extend into the MTZ. We will present results from 3D synthetic modeling based on the SPECFEM3D software to further interrogate the effects of anisotropic structures on the waveforms of the SS precursors.
Hopson, R.F.; Hillhouse, J.W.; Howard, K.A.
2008-01-01
Analysis of the strikes of 3841 dikes in 47 domains in the 500-km-long Late Jurassic Independence dike swarm indicates a distribution that is skewed clockwise from the dominant northwest strike. Independence dike swarm azimuths tend to cluster near 325° ± 30°, consistent with initial subparallel intrusion along much of the swarm. Dike azimuths in a quarter of the domains vary widely from the dominant trend. In domains in the essentially unrotated Sierra Nevada block, mean dike azimuths range mostly between 300° and 320°, with the exception of Mount Goddard (247°). Mean dike azimuths in domains in the Basin and Range Province in the Argus, Inyo, and White Mountains areas range from 291° to 354°; the mean is 004° in the El Paso Mountains. In the Mojave Desert, mean dike azimuths range from 318° to 023°, and in the eastern Transverse Ranges, they range from 316° to 051°. Restoration for late Cenozoic vertical-axis rotations, suggested by paleodeclinations determined from published studies from nearby Miocene and younger rocks, shifts dike azimuths into better agreement with azimuths measured in the tectonically stable Sierra Nevada. This confirms that vertical-axis tectonic rotations explain some of the dispersion in orientation, especially in the Mojave Desert and eastern Transverse Ranges, and that the dike orientations can be a useful if imperfect guide to tectonic rotations where paleomagnetic data do not exist. Large deviations from the main trend of the swarm may reflect (1) clockwise rotations for which there is no paleomagnetic evidence available, (2) dike intrusions of other ages, (3) crack filling at angles oblique or perpendicular to the main swarm, (4) pre-Miocene rotations, or (5) unrecognized domain boundaries between dike localities and sites with paleomagnetic determinations. © 2008 The Geological Society of America.
Carlson, D.
2010-01-01
Joints within unconsolidated material such as glacial till can be primary avenues for the flow of electrical charge, water, and contaminants. To facilitate the siting and design of remediation programs, a need exists to map anisotropic distribution of such pathways within glacial tills by determining the azimuth of the dominant joint set. The azimuthal survey method uses standard resistivity equipment with a Wenner array rotated about a fixed center point at selected degree intervals that yields an apparent resistivity ellipse. From this ellipse, joint set orientation can be determined. Azimuthal surveys were conducted at 21 sites in a 500-km² (193-mi²) area around Milwaukee, Wisconsin, and more specifically, at sites having more than 30 m (98 ft) of glacial till (to minimize the influence of underlying bedrock joints). The 26 azimuthal surveys revealed a systematic pattern to the trend of the dominant joint set within the tills, which is approximately parallel to ice flow direction during till deposition. The average orientation of the joint set parallel with the ice flow direction is N77°E and N37°E for the Oak Creek and Ozaukee tills, respectively. The mean difference between average direct observation of joint set orientations and average azimuthal resistivity results is 8°, which is one fifth of the difference of ice flow direction between the Ozaukee and Oak Creek tills. The results of this study suggest that the surface azimuthal electrical resistivity survey method used for local in situ studies can be a useful noninvasive method for delineating joint sets within shallow geologic material for regional studies. Copyright © 2010 The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.
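Extracting an orientation from the apparent resistivity ellipse amounts to fitting the 180°-periodic harmonic of the azimuthal measurements. The sketch below is a hedged illustration of that step only (least-squares fit of a + b cos 2θ + c sin 2θ, returning the azimuth of maximum apparent resistivity); it is not the workflow of the paper, and the function name is illustrative.

```python
import numpy as np

def ellipse_axis_azimuth(az_deg, rho):
    # Least-squares fit rho(theta) ~ a + b*cos(2θ) + c*sin(2θ); the
    # 180-degree-periodic term gives the orientation of the apparent
    # resistivity ellipse, from which joint set strike is inferred
    theta = np.radians(az_deg)
    A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a, b, c = np.linalg.lstsq(A, rho, rcond=None)[0]
    return np.degrees(0.5 * np.arctan2(c, b)) % 180.0
```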
NASA Astrophysics Data System (ADS)
Ojo, Adebayo Oluwaseun; Ni, Sidao; Chen, Haopeng; Xie, Jun
2018-01-01
To understand the depth variation of deformation beneath Cameroon, West Africa, we developed a new 3D model of S-wave isotropic velocity and azimuthal anisotropy from a joint analysis of ambient seismic noise and earthquake surface wave dispersion. We found that the Cameroon Volcanic Line (CVL) is well delineated by slow phase velocities in contrast with the neighboring Congo Craton, in agreement with previous studies. Apart from the Congo Craton and the Oubanguides Belt, the uppermost mantle revealed a relatively slow velocity indicating a thinned or thermally altered lithosphere. The direction of the fast axis in the upper crust is mostly NE-SW, but trends approximately N-S around Mt. Oku and the southern CVL. The observed crustal azimuthal anisotropy is attributed to alignment of cracks and crustal deformation related to magmatic activities. A widespread zone of weak-to-zero azimuthal anisotropy in the mid-lower crust shows evidence for vertical mantle flow or an isotropic mid-lower crust. In the uppermost mantle, the fast axis direction changes from NE-SW to NW-SE around Mt. Oku and northern Cameroon. This suggests a layered mechanism of deformation and reveals that the mantle lithosphere has been deformed. NE-SW fast azimuths are observed beneath the Congo Craton and are consistent with the absolute motion of the African plate, suggesting a mantle origin for the observed azimuthal anisotropy. Our tomographically derived fast directions are consistent with the local SKS splitting results at some locations and depths, enabling us to constrain the origin of the observed splitting. The different character of azimuthal anisotropy in the upper crust and the uppermost mantle implies decoupling between deformation of the crust and the mantle in Cameroon.
Digital x-ray tomosynthesis with interpolated projection data for thin slab objects
NASA Astrophysics Data System (ADS)
Ha, S.; Yun, J.; Kim, H. K.
2017-11-01
In relation to thin slab-object inspection, we propose a digital tomosynthesis reconstruction that uses fewer measured projections in combination with additional virtual projections produced by interpolating the measured ones. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through the object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in the interpolated projection are the weighted sum of pixel values of the measured projections, weighted according to their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
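The angle-weighted interpolation described above can be sketched for the simplest case of one virtual view between two measured neighbours. The linear weighting and the function interface are assumptions for illustration; the paper's weighting over all measured projections is not reproduced.

```python
import numpy as np

def virtual_projection(p0, p1, theta0, theta1, theta):
    # Angle-weighted blend of two measured projections to synthesize a
    # virtual view at angle theta; assumes a thin, rigid slab so that
    # variations in cone-beam ray path lengths are negligible
    w = (theta - theta0) / (theta1 - theta0)
    return (1.0 - w) * p0 + w * p1
```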
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates, and in particular Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolations assume Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
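The Box-Cox transform pair used above is simple to state. The sketch below implements the standard forward and inverse transforms for positive values (λ = 0 being the log limit); merged values are transformed before kriging and back-transformed after, a detail not shown here.

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox transform of positive values (lam = 0 is the log limit)
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)
```

Smaller λ compresses the upper tail more aggressively, which is why the choice of λ interacts with the rainfall units and variability noted in the abstract.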
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Application of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.
1990-01-01
A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
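The transfinite interpolation step can be sketched for the 2D case with linear (first-order Lagrangian) blending functions; the airplane-surface blending and the rational quadratic spline redistribution of the paper are not reproduced, and the interface is illustrative.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    # Transfinite interpolation with linear Lagrangian blending.
    # Boundary curves are (ni, 2) / (nj, 2) point arrays whose corners
    # must agree, e.g. bottom[0] == left[0].
    ni, nj = len(bottom), len(left)
    u = np.linspace(0.0, 1.0, ni)[:, None, None]
    v = np.linspace(0.0, 1.0, nj)[None, :, None]
    proj_v = (1 - v) * bottom[:, None, :] + v * top[:, None, :]
    proj_u = (1 - u) * left[None, :, :] + u * right[None, :, :]
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    # Boolean sum: add the two projections, subtract the doubly counted
    # bilinear corner term, so all four boundaries are matched exactly
    return proj_v + proj_u - corners
```

On a unit square the construction reduces to the tensor product of the boundary parametrizations.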
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances
NASA Astrophysics Data System (ADS)
Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.
2018-04-01
Outliers often lurk in many datasets, especially real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention, and it is important to determine suitable ways of treating outliers to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method to treat outliers via the linear interpolation method. Treating an outlier as a missing value in the dataset allows the application of the interpolation method to interpolate over the outliers, enabling the comparison of forecast accuracy on the series before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better forecasting results than the original time series data in both Box-Jenkins and neural network approaches.
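The treat-outlier-as-missing-then-interpolate idea can be sketched as follows. The MAD-based detection rule is an assumption added for a self-contained example (the paper does not specify its detection criterion); the interpolation step is the plain linear interpolation the paper describes.

```python
import numpy as np

def interpolate_outliers(series, k=3.0):
    # Flag points beyond k robust standard deviations (MAD-based) from
    # the median, treat them as missing, and fill by linear interpolation
    # between the surrounding non-outlier observations
    y = np.array(series, dtype=float)  # copy, so the input is untouched
    med = np.median(y)
    mad = np.median(np.abs(y - med))
    mask = np.abs(y - med) > k * 1.4826 * mad
    idx = np.arange(len(y))
    y[mask] = np.interp(idx[mask], idx[~mask], y[~mask])
    return y
```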
Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.
Prochazka, Ivan; Panek, Petr
2009-07-01
A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We are presenting the experiments and results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that the nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series; thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar, D.; Ahedo, E., E-mail: eduardo.ahedo@uc3m.es
2015-10-15
The linear stability of the Hall thruster discharge is analysed against axial-azimuthal perturbations in the low frequency range using a time-dependent 2D code of the discharge. This azimuthal stability analysis is spatially global, as opposed to the more common local stability analyses afforded previously (D. Escobar and E. Ahedo, Phys. Plasmas 21(4), 043505 (2014)). The study covers both axial and axial-azimuthal oscillations, known as the breathing mode and the spoke, respectively. The influence on the spoke instability of different operation parameters such as discharge voltage, mass flow, and thruster size is assessed by means of different parametric variations and compared against experimental results. Additionally, simplified models are used to unveil and characterize the mechanisms driving the spoke. The results indicate that the spoke is linked to azimuthal oscillations of the ionization process and to the Bohm condition in the transition to the anode sheath. Finally, results obtained from local and global stability analyses are compared in order to explain the discrepancies between both methods.
Spatial interpolation of monthly mean air temperature data for Latvia
NASA Astrophysics Data System (ADS)
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and qualitative local characteristics analysis. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, thus in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the largest lakes and rivers, and population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality were chosen. In order to validate the interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
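The external-drift idea above can be illustrated with a heavily simplified stand-in: regress the station temperatures on the drift variables (e.g. elevation, continentality), then spread the station residuals by inverse-distance weighting. This is not KED (no variogram, no kriging system); it only shows the trend-plus-residual decomposition, and all names are illustrative.

```python
import numpy as np

def drift_residual_interp(xy, z, drift, xy_new, drift_new, p=2.0):
    # Simplified stand-in for kriging with external drift: linear
    # regression on the drift variables, then inverse-distance
    # weighting of the station residuals
    A = np.column_stack([np.ones(len(z)), drift])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta
    d = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** p
    w /= w.sum(axis=1, keepdims=True)
    trend = np.column_stack([np.ones(len(xy_new)), drift_new]) @ beta
    return trend + w @ resid
```

Like KED, the scheme is exact at the station locations, since the trend plus its own residual reproduces the observation there.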
A Modified Direct-Reading Azimuth Protractor
ERIC Educational Resources Information Center
Larson, William C.; Pugliese, Joseph M.
1977-01-01
Describes the construction of a direct-reading azimuth protractor (DRAP) used for mapping fracture and joint-surface orientations in underground mines where magnetic disturbances affect a typical geologic pocket transit. (SL)
NASA Astrophysics Data System (ADS)
Aznavourian, Ronald; Puvirajesinghe, Tania M.; Brûlé, Stéphane; Enoch, Stefan; Guenneau, Sébastien
2017-11-01
We begin with a brief historical survey of discoveries of quasi-crystals and graphene, and then introduce the concept of transformation crystallography, which consists of the application of geometric transforms to periodic structures. We consider motifs with three-fold, four-fold and six-fold symmetries according to the crystallographic restriction theorem. Furthermore, we define motifs with five-fold symmetry such as quasi-crystals generated by a cut-and-projection method from periodic structures in higher-dimensional space. We analyze elastic wave propagation in the transformed crystals and (Penrose-type) quasi-crystals with the finite difference time domain freeware SimSonic. We consider geometric transforms underpinning the design of seismic cloaks with square, circular, elliptical and peanut shapes in the context of honeycomb crystals that can be viewed as scaled-up versions of graphene. Interestingly, the use of morphing techniques leads to the design of cloaks with interpolated geometries reminiscent of Victor Vasarely’s artwork. Employing the case of transformed graphene-like (honeycomb) structures allows one to draw useful analogies between large-scale seismic metamaterials such as soils structured with columns of concrete or grout with soil and nanoscale biochemical metamaterials. We further identify similarities in designs of cloaks for elastodynamic and hydrodynamic waves and cloaks for diffusion (heat or mass) processes, as these are underpinned by geometric transforms. Experimental data extracted from field test analysis of soil structured with boreholes demonstrates the application of crystallography to large scale phononic crystals, coined as seismic metamaterials, as they might exhibit low frequency stop bands. 
This brings us to the outlook of mechanical metamaterials, with control of phonon emission in graphene through extreme anisotropy, attenuation of vibrations of suspension bridges via low frequency stop bands and the concept of transformed meta-cities. We conclude that these novel materials hold strong applications spanning different disciplines or across different scales from biophysics to geophysics.
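The cut-and-project construction of quasi-crystals mentioned above has a minimal 1D illustration: the Fibonacci chain, obtained by projecting the points of the Z² lattice lying inside a strip of irrational slope 1/φ onto a line. The sketch below uses the equivalent floor-difference formulation; it is a textbook toy, not the 2D Penrose-type tilings used in the paper.

```python
import numpy as np

def fibonacci_chain(n):
    # 1D quasicrystal via cut-and-project from Z^2: successive floor
    # differences of k/phi yield the aperiodic long/short (L/S) sequence
    phi = (1 + 5**0.5) / 2
    seq = ['L' if np.floor((k + 1) / phi) - np.floor(k / phi) else 'S'
           for k in range(n)]
    return ''.join(seq)
```

The resulting word is aperiodic, never contains two consecutive S, and the ratio of L to S tiles converges to the golden ratio φ, the signature of five-fold-compatible quasiperiodic order.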
Azimuthal Directivity of Fan Tones Containing Multiple Modes
NASA Technical Reports Server (NTRS)
Heidelberg, Laurence J.; Sutliff, Daniel L.; Nallasamy, M.
1997-01-01
The directivity of fan tone noise is generally measured and plotted in the sideline or flyover plane, and it is assumed that this curve is the same for all azimuthal angles. When two or more circumferential (m-order) modes of the same tone are present in the fan duct, an interference pattern develops in the azimuthal direction both in the duct and in the farfield. In this investigation two m-order modes of similar power were generated in a large low-speed fan. Farfield measurements and a finite element propagation code both show substantial variations in the azimuthal direction. In-duct mode measurements were made and used as input to the code. Although these tests may represent a worst case scenario, the validity of the current practice of assuming axisymmetry should be questioned.
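The azimuthal interference of two circumferential modes is easy to see from the mode sum. The sketch below evaluates the magnitude of a sum of modes A_m e^{imφ} around the azimuth; the amplitudes and mode orders are illustrative assumptions, not the measured duct modes of the test.

```python
import numpy as np

def azimuthal_pressure(phi, modes):
    # Far-field pressure magnitude from a sum of circumferential modes
    # A_m * exp(i*m*phi); two modes of similar amplitude produce deep
    # lobes and nulls around the azimuth instead of an axisymmetric level
    return np.abs(sum(A * np.exp(1j * m * phi) for m, A in modes))
```

For two equal-amplitude modes m and m+1 the magnitude varies as 2|cos(φ/2)|, from a maximum at one azimuth to a complete null 180° away, which is why a single sideline directivity curve can be misleading.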
Study on plasma sheath and plasma transport properties in the azimuthator
NASA Astrophysics Data System (ADS)
Zhenyu, WANG; Binhao, JIANG; N, A. STROKIN; A, N. STUPIN
2018-04-01
A physical model of transport in an azimuthator channel with the sheath effect resulting from the interaction between the plasma and the insulating wall is established in this paper. Particle-in-cell simulation is carried out with the model, and the results show that, besides the transport due to classical and Bohm diffusion, the sheath effect can significantly influence the transport in the channel. As a result, the ion density is larger than the electron density at the exit of the azimuthator, and the non-neutral plasma jet is divergent, which is unfavorable for mass separation. Therefore, in order to improve the performance of the azimuthator, a cathode is designed to emit electrons. Experimental results have demonstrated that the auxiliary cathode can effectively compensate the space charge in the plasma.
Topological States in Partially-PT -Symmetric Azimuthal Potentials
NASA Astrophysics Data System (ADS)
Kartashov, Yaroslav V.; Konotop, Vladimir V.; Torner, Lluis
2015-11-01
We introduce partially-parity-time (pPT)-symmetric azimuthal potentials composed of individual PT-symmetric cells located on a ring, where the two azimuthal directions are nonequivalent in the sense that excitations carrying topological dislocations in such potentials exhibit different dynamics for different directions of energy circulation in the initial field distribution. Such nonconservative ratchetlike structures support rich families of stable vortex solitons in cubic nonlinear media, whose properties depend on the sign of the topological charge due to the nonequivalence of the azimuthal directions. In contrast, oppositely charged vortex solitons remain equivalent in similar fully-PT-symmetric potentials. The vortex solitons in the pPT- and PT-symmetric potentials are shown to feature qualitatively different internal current distributions, which are described by different discrete rotation symmetries of the intensity profiles.
Azimuthally Anisotropic 3D Velocity Continuation
Burnett, William; Fomel, Sergey
2011-01-01
We extend time-domain velocity continuation to the zero-offset 3D azimuthally anisotropic case. Velocity continuation describes how a seismic image changes given a change in migration velocity. This description turns out to be of a wave propagation process, in which images change along a velocity axis. In the anisotropic case, the velocity model is multiparameter. Therefore, anisotropic image propagation is multidimensional. We use a three-parameter slowness model, which is related to azimuthal variations in velocity, as well as their principal directions. This information is useful for fracture and reservoir characterization from seismic data. We provide synthetic diffraction imaging examples to illustrate the concept and potential applications of azimuthal velocity continuation and to analyze the impulse response of the 3D velocity continuation operator.
Revised Pacific-Antarctic plate motions and geophysics of the Menard Fracture Zone
NASA Astrophysics Data System (ADS)
Croon, Marcel B.; Cande, Steven C.; Stock, Joann M.
2008-07-01
A reconnaissance survey of multibeam bathymetry and magnetic anomaly data of the Menard Fracture Zone allows for significant refinement of the plate motion history of the South Pacific over the last 44 million years. The right-stepping Menard Fracture Zone developed at the northern end of the Pacific-Antarctic Ridge within a propagating rift system that generated the Hudson microplate and formed the conjugate Henry and Hudson Troughs as a response to a major plate reorganization ~45 million years ago. Two splays, originally about 30 to 35 km apart, narrowed gradually to a corridor of 5 to 10 km width, while lineation azimuths experienced an 8° counterclockwise reorientation owing to changes in spreading direction between chrons C13o and C6C (33 to 24 million years ago). We use the improved Pacific-Antarctic plate motions to analyze the development of the southwest end of the Pacific-Antarctic Ridge. Owing to a 45° counterclockwise reorientation between chrons C27 and C20 (61 to 44 million years ago), this section of the ridge became a long transform fault connected to the Macquarie Triple Junction. Following a clockwise change starting around chron C13o (33 million years ago), the transform fault opened. A counterclockwise change starting around chron C10y (28 million years ago) again led to a long transform fault between chrons C6C and C5y (24 to 10 million years ago). A second period of clockwise reorientation starting around chron C5y (10 million years ago) put the transform fault into extension, forming an array of 15 en echelon transform faults and short linking spreading centers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Qi, H; Wu, S
Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side consists of projection derivatives with respect to other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) Forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) Performing a Fourier transform on the projections; 3) Restoring the incomplete frequency data using the consistency condition equation; 4) Performing an inverse Fourier transform; 5) Repeating steps 2)-4) until our criterion is met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method.
Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.
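The five-step procedure above can be sketched as a Gerchberg-Papoulis-style iteration. The paper's actual step 3 applies the John's-equation consistency condition, which is not reproduced here; the band-limiting frequency mask below (`freq_keep`) is a generic stand-in for that constraint, so this is a sketch of the iterative structure only, not the authors' restoration rule.

```python
import numpy as np

def restore_projections(proj, known_mask, freq_keep=0.25, n_iter=4):
    """Iteratively restore missing projection samples.

    proj       : 2-D projection array with unreliable entries
    known_mask : boolean array, True where samples were measured
    freq_keep  : stand-in frequency-domain constraint (assumption)
    """
    # Step 1: initialize missing samples by linear interpolation along rows
    est = proj.astype(float).copy()
    for r in range(est.shape[0]):
        bad = ~known_mask[r]
        if bad.any() and (~bad).any():
            est[r, bad] = np.interp(np.flatnonzero(bad),
                                    np.flatnonzero(~bad), est[r, ~bad])
    # Frequency mask playing the role of the consistency condition
    fy = np.fft.fftfreq(est.shape[0])[:, None]
    fx = np.fft.fftfreq(est.shape[1])[None, :]
    fmask = (np.abs(fy) <= freq_keep) & (np.abs(fx) <= freq_keep)
    for _ in range(n_iter):
        F = np.fft.fft2(est)                 # step 2: Fourier transform
        F *= fmask                           # step 3: enforce constraint
        est = np.fft.ifft2(F).real           # step 4: inverse transform
        est[known_mask] = proj[known_mask]   # keep measured samples fixed
    return est
```

Feeding in a smooth projection array with a few masked-out detector columns, the loop fills the missing samples while leaving measured samples untouched.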
Applications of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.
1990-01-01
A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.
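The transfinite-interpolation step with linear Lagrangian blending functions can be sketched as below. This is a minimal 2-D version under stated assumptions: the paper's monotonic rational quadratic spline for grid-point distributions is not reproduced, and `tfi_grid` and its boundary-array convention are illustrative.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """2-D transfinite interpolation with linear (Lagrangian) blending.

    bottom/top : (n, 2) arrays of boundary points, left/right : (m, 2).
    Corners must match, e.g. bottom[0] == left[0]. Returns (m, n, 2).
    """
    n, m = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, n)[None, :, None]   # blending along xi
    eta = np.linspace(0.0, 1.0, m)[:, None, None]  # blending along eta
    # Boolean-sum formula: edge blends minus doubly-counted corner terms
    return ((1 - eta) * bottom[None, :, :] + eta * top[None, :, :]
            + (1 - xi) * left[:, None, :] + xi * right[:, None, :]
            - (1 - xi) * (1 - eta) * bottom[0]
            - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0]
            - xi * eta * top[-1])
```

For straight boundaries of a unit square, the result reduces to a uniform Cartesian grid, which makes a convenient sanity check.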
Dumitru, Adrian; Lappi, Tuomas; Skokov, Vladimir
2015-12-17
In this study, we determine the distribution of linearly polarized gluons of a dense target at small x by solving the Balitsky-Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner rapidity evolution equations. From these solutions, we estimate the amplitude of cos 2Φ azimuthal asymmetries in deep inelastic scattering dijet production at high energies. We find sizable long-range-in-rapidity azimuthal asymmetries with a magnitude in the range of v2 ≈ 10%.
Ahmadivand, Arash; Gerislioglu, Burak; Pala, Nezih
2017-11-01
Here, the plasmon responses of both symmetric and antisymmetric oligomers on a conductive substrate under linear, azimuthal, and radial polarization excitations are analyzed numerically. By observing charge transfer plasmons under cylindrical vector beam (CVB) illumination for what we believe is the first time, we show that our studies open new horizons to induce significant charge transfer plasmons and antisymmetric Fano resonance lineshapes in metallic substrate-mediated plasmonic nanoclusters under both azimuthal and radial excitation as CVBs.
NASA Technical Reports Server (NTRS)
Belskiy, S. A.; Dmitriev, B. A.; Romanov, A. M.
1975-01-01
The value of EW asymmetry and coupling coefficients at different zenith angles were measured by means of a double coincidence crossed telescope which gives an opportunity to measure simultaneously the intensity of the cosmic ray hard component at zenith angles from 0 to 84 deg in opposite azimuths. The advantages of determining the coupling coefficients by the cosmic ray azimuth effect as compared to their measurement by the latitudinal effect are discussed.
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for data processing procedure selection, SAR correlation function implementation, multiple array processor utilization, corner-turning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP) originally implemented for handling Seasat SAR data has been adapted for the SIR-B, and offers a resolution of 100 km using a processing procedure based on the Fast Fourier Transform fast correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-85.
Origins of collectivity in small systems
NASA Astrophysics Data System (ADS)
Schenke, Björn
2017-11-01
We review recent developments in the theoretical description and understanding of multi-particle correlation measurements in collisions of small projectiles (p/d/3He) with heavy nuclei (Au, Pb) as well as proton+proton collisions. We focus on whether the physical processes responsible for the observed long range rapidity correlations and their azimuthal structure are the same in small systems as in heavy ion collisions. In the latter they are interpreted as generated by the initial spatial geometry being transformed into momentum correlations by strong final state interactions. However, explicit calculations show that also initial state momentum correlations are present and could contribute to observables in small systems. If strong final state interactions are present in small systems, recent developments show that results are sensitive to the shape of the proton and its fluctuations.
Displacements and evolution of optical vortices in edge-diffracted Laguerre-Gaussian beams
NASA Astrophysics Data System (ADS)
Bekshaev, Aleksandr; Chernykh, Aleksey; Khoroshun, Anna; Mikhaylovskaya, Lidiya
2017-05-01
Based on the Kirchhoff-Fresnel approximation, we consider the behavior of optical vortices (OV) upon propagation of diffracted Laguerre-Gaussian (LG) beams with topological charge ∣m∣ = 1, 2. Under conditions of weak diffraction perturbation (i.e. the diffraction obstacle covers only the far transverse periphery of the incident LG beam), these OVs describe almost perfect 3D spirals within the diffracted beam body, which is an impressive demonstration of the helical nature of an OV beam. The far-field OV positions within the diffracted beam cross section depend on the wavefront curvature of the incident OV beam, so that the input wavefront curvature is transformed into the output azimuthal OV rotation. The results are expected to be useful in OV metrology and OV beam diagnostics.
Sun, Minghao; He, Honghui; Zeng, Nan; Du, E; Guo, Yihong; Peng, Cheng; He, Yonghong; Ma, Hui
2014-05-10
Polarization parameters contain rich information on the micro- and macro-structure of scattering media. However, many of these parameters are sensitive to the spatial orientation of anisotropic media, and may not effectively reveal the microstructural information. In this paper, we take polarization images of different textile samples at different azimuth angles. The results demonstrate that the rotation insensitive polarization parameters from rotating linear polarization imaging and Mueller matrix transformation methods can be used to distinguish the characteristic features of different textile samples. Further examinations using both experiments and Monte Carlo simulations reveal that the residue rotation dependence in these polarization parameters is due to the oblique incidence illumination. This study shows that such rotation independent parameters are potentially capable of quantitatively classifying anisotropic samples, such as textiles or biological tissues.
Goring, Simon; Mladenoff, David J.; Cogbill, Charles; Record, Sydne; Paciorek, Christopher J.; Dietze, Michael C.; Dawson, Andria; Matthes, Jaclyn; McLachlan, Jason S.; Williams, John W.
2016-01-01
EuroAmerican land use and its legacies have transformed forest structure and composition across the United States (US). More accurate reconstructions of historical states are critical to understanding the processes governing past, current, and future forest dynamics. Here we present new gridded (8 x 8 km) reconstructions of pre-settlement (1800s) forest composition and structure for the upper Midwestern US (Minnesota, Wisconsin, and most of Michigan), using 19th-century Public Land Survey System (PLSS) data, with estimates of relative composition, above-ground biomass, stem density, and basal area for 28 tree types. This mapping is more robust than past efforts, using spatially varying correction factors to accommodate sampling design, azimuthal censoring, and biases in tree selection.
Active chiral control of GHz acoustic whispering-gallery modes
NASA Astrophysics Data System (ADS)
Mezil, Sylvain; Fujita, Kentaro; Otsuka, Paul H.; Tomoda, Motonobu; Clark, Matt; Wright, Oliver B.; Matsuda, Osamu
2017-10-01
We selectively generate chiral surface-acoustic whispering-gallery modes in the gigahertz range on a microscopic disk by means of an ultrafast time-domain technique incorporating a spatial light modulator. Active chiral control is achieved by making use of an optical pump spatial profile in the form of a semicircular arc, positioned on the sample to break the symmetry of clockwise- and counterclockwise-propagating modes. Spatiotemporal Fourier transforms of the interferometrically monitored two-dimensional acoustic fields measured to micron resolution allow individual chiral modes and their azimuthal mode order, both positive and negative, to be distinguished. In particular, for modes with 15-fold rotational symmetry, we demonstrate ultrafast chiral control of surface acoustic waves in a micro-acoustic system with picosecond temporal resolution. Applications include nondestructive testing and surface acoustic wave devices.
Combining phase-field crystal methods with a Cahn-Hilliard model for binary alloys
NASA Astrophysics Data System (ADS)
Balakrishna, Ananya Renuka; Carter, W. Craig
2018-04-01
Diffusion-induced phase transitions typically change the lattice symmetry of the host material. In battery electrodes, for example, Li ions (diffusing species) are inserted between layers in a crystalline electrode material (host). This diffusion induces lattice distortions and defect formations in the electrode. The structural changes to the lattice symmetry affect the host material's properties. Here, we propose a 2D theoretical framework that couples a Cahn-Hilliard (CH) model, which describes the composition field of a diffusing species, with a phase-field crystal (PFC) model, which describes the host-material lattice symmetry. We couple the two continuum models via coordinate transformation coefficients. We introduce the transformation coefficients in the PFC method to describe affine lattice deformations. These transformation coefficients are modeled as functions of the composition field. Using this coupled approach, we explore the effects of coarse-grained lattice symmetry and distortions on a diffusion-induced phase transition process. In this paper, we demonstrate the working of the CH-PFC model through three representative examples: First, we describe base cases with hexagonal and square symmetries for two composition fields. Next, we illustrate how the CH-PFC method interpolates lattice symmetry across a diffuse phase boundary. Finally, we compute a Cahn-Hilliard type of diffusion and model the accompanying changes to lattice symmetry during a phase transition process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Donnell, T.J.; Olson, A.J.
1981-08-01
GRAMPS, a graphics language interpreter, has been developed in FORTRAN 77 to be used in conjunction with an interactive vector display list processor (Evans and Sutherland Multi-Picture-System). Several of the features of the language make it very useful and convenient for real-time scene construction, manipulation and animation. The GRAMPS language syntax allows natural interaction with scene elements as well as easy, interactive assignment of graphics input devices. GRAMPS facilitates the creation, manipulation and copying of complex nested picture structures. The language has a powerful macro feature that enables new graphics commands to be developed and incorporated interactively. Animation may be achieved in GRAMPS by two different, yet mutually compatible means. Picture structures may contain framed data, which consist of a sequence of fixed objects. These structures may be displayed sequentially to give a traditional frame animation effect. In addition, transformation information on picture structures may be saved at any time in the form of new macro commands that will transform these structures from one saved state to another in a specified number of steps, yielding an interpolated transformation animation effect. An overview of the GRAMPS command structure is given and several examples of application of the language to molecular modeling and animation are presented.
Object Interpolation in Three Dimensions
ERIC Educational Resources Information Center
Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.
2005-01-01
Perception of objects in ordinary scenes requires interpolation processes connecting visible areas across spatial gaps. Most research has focused on 2-D displays, and models have been based on 2-D, orientation-sensitive units. The authors present a view of interpolation processes as intrinsically 3-D and producing representations of contours and…
Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.
Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik
2007-01-01
In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-01-01
In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational amount and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
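For orientation, a plain serial CIC interpolation filter, the reference structure that the paper's 8× parallel algorithm reorganizes, can be sketched as below. The parallel/polyphase decomposition and the compensation filter are not reproduced, and the `R`, `N`, `M` defaults are assumptions.

```python
import numpy as np

def cic_interpolate(x, R=8, N=3, M=1):
    """Serial N-stage CIC interpolation filter (Hogenauer structure).

    R : interpolation (rate-change) factor
    N : number of comb/integrator stage pairs
    M : differential delay of each comb
    """
    y = np.asarray(x, dtype=float)
    for _ in range(N):                        # comb stages at input rate
        y = y - np.concatenate([np.zeros(M), y[:-M]])
    up = np.zeros(len(y) * R)                 # zero-stuff to the high rate
    up[::R] = y
    y = up
    for _ in range(N):                        # integrator stages, high rate
        y = np.cumsum(y)
    return y / (R * M) ** N * R               # normalize DC gain to 1
```

A constant input checks the normalization: after the filter's transient (about N·R·M samples), the output settles at the input level.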
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
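The kernel interpolation of temporal-trend samples into a spatial profile (feature activity vs. depth) can be sketched with a Gaussian (Nadaraya-Watson) estimator. The paper reports a smoothing/resolution trade-off governed by kernel function and width; the Gaussian form and the `width` default below are assumptions.

```python
import numpy as np

def kernel_profile(depths, activity, query_depths, width=0.25):
    """Kernel-weighted estimate of feature activity at query depths.

    depths, activity : irregularly sampled recording depths and feature
                       values extracted from the temporal trend
    width            : Gaussian kernel width; larger values smooth more
                       at the cost of spatial resolution (assumption)
    """
    d = np.subtract.outer(np.asarray(query_depths, float),
                          np.asarray(depths, float))
    w = np.exp(-0.5 * (d / width) ** 2)      # Gaussian weights per query
    return (w @ np.asarray(activity, float)) / w.sum(axis=1)
```

With a narrow kernel the estimate tracks the underlying profile closely; widening the kernel smooths away fine structure, mirroring the trade-off described above.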
Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan
2017-03-01
Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD. How to improve the location precision of a beacon is therefore an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in each interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called a "gradient segmentation interpolation" or, simply, GSI algorithm. To take full advantage of the pixels of the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation experiments. Finally, an experiment is established to verify the GSI and SSW algorithms. The results indicate that the GSI and SSW algorithms can improve locating accuracy over that calculated by a traditional gray centroid method. These approaches help to greatly improve the location precision of a beacon in satellite optical communications.
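The baseline gray centroid, and a refinement that recomputes it on a bilinearly up-interpolated spot, can be sketched as follows. Note this applies one interpolation rule to the whole spot, whereas the GSI algorithm above segments the spot by gray gradient and interpolates each segment differently; the function names and factor `k` are illustrative.

```python
import numpy as np

def gray_centroid(img):
    """Traditional gray-weighted centroid (row, col) of a spot image."""
    img = np.asarray(img, float)
    total = img.sum()
    rows = np.arange(img.shape[0])
    cols = np.arange(img.shape[1])
    return img.sum(1) @ rows / total, img.sum(0) @ cols / total

def interp_centroid(img, k=4):
    """Centroid after k-fold bilinear up-interpolation of the spot."""
    img = np.asarray(img, float)
    r = np.arange(img.shape[0]); c = np.arange(img.shape[1])
    rf = np.linspace(0, img.shape[0] - 1, k * (img.shape[0] - 1) + 1)
    cf = np.linspace(0, img.shape[1] - 1, k * (img.shape[1] - 1) + 1)
    tmp = np.array([np.interp(cf, c, row) for row in img])       # columns
    fine = np.array([np.interp(rf, r, col) for col in tmp.T]).T  # rows
    rc, cc = gray_centroid(fine)
    return rc / k, cc / k   # convert back to original pixel units
```

On a synthetic Gaussian spot both estimators recover the true center; the interpolated version becomes useful once pixels are coarse relative to the spot.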
In-situ Calibration Methods for Phased Array High Frequency Radars
NASA Astrophysics Data System (ADS)
Flament, P. J.; Flament, M.; Chavanne, C.; Flores-vidal, X.; Rodriguez, I.; Marié, L.; Hilmer, T.
2016-12-01
HF radars measure currents through the Doppler-shift of electromagnetic waves Bragg-scattered by surface gravity waves. While modern clocks and digital synthesizers yield range errors negligible compared to the bandwidth-limited range resolution, azimuth calibration issues arise for beam-forming phased arrays. Sources of errors in the phases of the received waves can be internal to the radar system (phase errors of filters, cable lengths, antenna tuning) and geophysical (standing waves, propagation and refraction anomalies). They result in azimuthal biases (which can be range-dependent) and beam-forming side-lobes (which induce Doppler ambiguities). We analyze the experimental calibrations of 17 deployments of WERA HF radars, performed between 2003 and 2012 in Hawaii, the Adriatic, France, Mexico and the Philippines. Several strategies were attempted: (i) passive reception of continuous multi-frequency transmitters on GPS-tracked boats, cars, and drones; (ii) bi-static calibrations of radars in mutual view; (iii) active echoes from vessels of opportunity of unknown positions or tracked through AIS; (iv) interference of unknown remote transmitters with the chirped local oscillator. We found that: (a) for antennas deployed on the sea shore, a single-azimuth calibration is sufficient to correct phases within a typical beam-forming azimuth range; (b) after applying this azimuth-independent correction, residual pointing errors are 1-2 deg. rms; (c) for antennas deployed on irregular cliffs or hills, back from shore, systematic biases appear for some azimuths at large incidence angles, suggesting that some of the ground-wave electromagnetic energy propagates in a terrain-following mode between the sea shore and the antennas; (d) for some sites, fluctuations of 10-25 deg. in radio phase at 20-40 deg. 
azimuthal period, not significantly correlated among antennas, are omnipresent in calibrations along a constant-range circle, suggesting standing waves or multiple paths in the presence of reflecting structures (buildings, fences), or possibly fractal nature of the wavefronts; (e) amplitudes lack stability in time and azimuth to be usable as a-priori calibrations, confirming the accepted method of re-normalizing amplitudes by the signal of nearby cells prior to beam-forming.
Multibeam monopulse radar for airborne sense and avoid system
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2016-10-01
The multibeam monopulse radar for the Airborne Based Sense and Avoid (ABSAA) system concept is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. In the proposed system, the multibeam monopulse radar with an array of directional antennas is positioned on a small aircraft or Unmanned Aircraft System (UAS). Radar signals are simultaneously transmitted and received by multiple angle-shifted directional antennas with overlapping antenna patterns, covering the entire sky, 360° in both horizontal and vertical. Digitizing the amplitude and phase of signals in separate directional antennas relative to reference signals provides high-accuracy, high-resolution range and azimuth measurement and allows real-time recording of the amplitude and phase of signals reflected from non-cooperative aircraft. High-resolution range and azimuth measurement provides minimal tracking errors in both position and velocity of non-cooperative aircraft, determined by the sampling frequency of the digitizer. High-speed sampling with a high-accuracy processor clock provides high-resolution phase/time domain measurement even for directional antennas with a wide field of view (FOV). A Fourier transform (frequency-domain processing) of the received radar signals provides signatures and dramatically increases the probability of detection for non-cooperative aircraft. Steering the transmitting power and the integration/correlation period of the received reflected signals for separate antennas (directions) dramatically decreases ground clutter for low-altitude flights. An open-architecture, modular construction allows the combination of the radar sensor with Automatic Dependent Surveillance - Broadcast (ADS-B), electro-optic, and acoustic sensors.
NASA Astrophysics Data System (ADS)
Šprlák, Michal; Novák, Pavel
2017-02-01
New spherical integral formulas between components of the second- and third-order gravitational tensors are formulated in this article. First, we review the nomenclature and basic properties of the second- and third-order gravitational tensors. Initial points of the mathematical derivations, i.e., the second- and third-order differential operators defined in the spherical local North-oriented reference frame and the analytical solutions of the gradiometric boundary-value problem, are also summarized. Secondly, we apply the third-order differential operators to the analytical solutions of the gradiometric boundary-value problem, which gives 30 new integral formulas transforming (1) vertical-vertical, (2) vertical-horizontal and (3) horizontal-horizontal second-order gravitational tensor components onto their third-order counterparts. Using spherical polar coordinates, the related sub-integral kernels can efficiently be decomposed into azimuthal and isotropic parts. Both spectral and closed forms of the isotropic kernels are provided and their limits are investigated. Thirdly, numerical experiments are performed to test the consistency of the new integral transforms and to investigate properties of the sub-integral kernels. The new mathematical apparatus is valid for any harmonic potential field and may be exploited, e.g., when gravitational/magnetic second- and third-order tensor components become available in the future. The new integral formulas also extend the well-known Meissl diagram and enrich the theoretical apparatus of geodesy.
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average, or inverse distance weighting (IDW), and nearest point, without due regard for their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream, and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range), using the PAleontological STatistics (PAST3) software, before the mean values were interpolated in the selected GIS software for each variable using the kriging (simple), moving average, and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary over 120.1 - 219.5 µS cm-1 with kriging, it varied over 105.6 - 220.0 µS cm-1 and 135.0 - 173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00, which was rarely the case (Figure 3). The study concluded that the choice of interpolation procedure may affect decisions and conclusions drawn from modelling inferences.
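The disagreement between procedures reported above is easy to reproduce. The sketch below (hypothetical station coordinates and conductivity values, not the study's data) contrasts an inverse-distance-weighted estimate with a nearest-point estimate at the same query location:

```python
import math

def idw(points, q, power=2):
    """Inverse-distance-weighted estimate at query point q.
    points: list of ((x, y), value) observations."""
    num = den = 0.0
    for (x, y), v in points:
        d = math.hypot(q[0] - x, q[1] - y)
        if d == 0.0:
            return v  # query coincides with an observation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

def nearest(points, q):
    """Nearest-point (Thiessen polygon) estimate at q."""
    return min(points, key=lambda p: math.hypot(q[0] - p[0][0], q[1] - p[0][1]))[1]

# Hypothetical conductivity readings (uS/cm) at four stations on a unit square
obs = [((0, 0), 120.0), ((1, 0), 220.0), ((0, 1), 150.0), ((1, 1), 180.0)]
q = (0.25, 0.25)
est_idw = idw(obs, q)      # a distance-weighted blend of all four stations
est_np = nearest(obs, q)   # simply the closest station's value
```

The two estimates differ by tens of µS/cm at the same point, mirroring the ranges reported in the abstract: the choice of interpolator, not the data, drives the difference.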
Experiments on helical modes in magnetized thin foil-plasmas
NASA Astrophysics Data System (ADS)
Yager-Elorriaga, David
2017-10-01
This paper gives an in-depth experimental study of helical features on magnetized, ultrathin foil-plasmas driven by the 1-MA linear transformer driver at the University of Michigan. Three types of cylindrical liner loads were designed to produce: (a) pure magneto-hydrodynamic (MHD) modes (defined as being void of the acceleration-driven magneto-Rayleigh-Taylor instability, MRT) using a non-imploding geometry, (b) pure kink modes using a non-imploding, kink-seeded geometry, and (c) coupled MRT-MHD modes in an unseeded, imploding geometry. For each configuration, we applied relatively small axial magnetic fields of Bz = 0.2-2.0 T (compared to peak azimuthal fields of 30-40 T). The resulting liner plasmas and instabilities were imaged using 12-frame laser shadowgraphy and visible self-emission on a fast framing camera. The azimuthal mode number was carefully identified with a tracking algorithm applied to self-emission minima. Our experiments show that the helical structures are a manifestation of discrete eigenmodes. The pitch angle of the helix is simply m/(kR) throughout the evolution, from implosion to explosion, where m, k, and R are the azimuthal mode number, axial wavenumber, and radius of the helical instability. Thus, the pitch angle increases (decreases) during implosion (explosion) as R becomes smaller (larger). We found that there are one, or at most two, discrete helical modes that arise for magnetized liners, with no apparent threshold on the applied Bz for the appearance of helical modes; increasing the axial magnetic field from zero to 0.5 T changes the relative weight between the m = 0 and m = 1 modes. Further increasing the applied axial magnetic field yields higher m modes. Finally, the seeded kink instability overwhelms the intrinsic instability modes of the plasma. These results are corroborated by our analytic theory on the effects of radial acceleration on the classical sausage, kink, and higher m modes.
Work supported by US DOE award DE-SC0012328, Sandia National Laboratories, and the National Science Foundation. D.Y.E. was supported by NSF fellowship Grant Number DGE 1256260. The fast framing camera was supported by a DURIP, AFOSR Grant FA9550-15-1-0419.
14 CFR 171.315 - Azimuth monitor system requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... an error in the time division multiplex synchronization of a particular azimuth function that the...). If the fault is not cleared within the time allowed, the ground equipment must be shut down. After...
Effect of ambiguities on SAR picture quality
NASA Technical Reports Server (NTRS)
Korwar, V. N.; Lipes, R. G.
1978-01-01
The degradation of picture quality is studied for a high-resolution, large-swath SAR mapping system subject to speckle, additive white Gaussian noise, and range and azimuth ambiguities occurring because of the non-finite antenna pattern produced by a square aperture antenna. The effect of the azimuth antenna pattern was accounted for by calculating the azimuth ambiguity function. Range ambiguities were accounted for by adding appropriate pixels at a range separation corresponding to one pulse repetition period, attenuated by the antenna pattern. A method of estimating the range defocusing effect, which arises because the azimuth matched filter is a function of range, is also shown. The resulting simulated picture was compared with one degraded by speckle and noise but no ambiguities. It is concluded that azimuth ambiguities do not cause any noticeable degradation, but range ambiguities might.
Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.
Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin
2016-09-20
Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder is capable of providing the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position non-continuous rotation gyro north finding scheme. Our new generalized mathematical model analyzes the elements that affect the azimuth measurement precision and can thus provide high-precision azimuth reference information. Based on the gyro's principle of detecting the projection of the earth's rotation rate on its sensitive axis and the proposed north finding scheme, we deduce an accurate mathematical model of the gyro outputs against azimuth that accounts for the gyro and shaft misalignments. Combining the gyro output model with the theory of propagation of uncertainty, several approaches to optimize north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving the rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Following these approaches, a north finder setup was built and an azimuth uncertainty of 18″ was obtained. This paper provides a systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high-precision, robust north finders based on different types of rate gyroscopes.
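The multi-position principle described above can be sketched as a small linear least-squares problem. This stdlib-only illustration is not the authors' implementation and ignores misalignments: the gyro output at rotation position θ_i is modeled as K·cos(A + θ_i) + bias, with K = Ω_E·cos(latitude), and the azimuth A and the bias are recovered from noise-free simulated readings at four positions:

```python
import math

OMEGA_E = 7.292115e-5  # earth rotation rate, rad/s

def north_find(angles, readings):
    """Recover azimuth A and bias from gyro outputs at known rotation positions.
    Model: w_i = K*cos(A + theta_i) + bias, which is linear in
    (a, c, bias) with a = K*cos(A), c = -K*sin(A)."""
    rows = [(math.cos(t), math.sin(t), 1.0) for t in angles]
    # Normal equations A^T A x = A^T b for the 3-parameter linear model.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * w for r, w in zip(rows, readings)) for i in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    a, c, bias = (m[i][3] / m[i][i] for i in range(3))
    return math.atan2(-c, a) % (2 * math.pi), bias

# Simulated 4-position scheme: latitude 35 deg, true azimuth 30 deg, bias 1e-6
lat, az, b = math.radians(35.0), math.radians(30.0), 1e-6
thetas = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
w = [OMEGA_E * math.cos(lat) * math.cos(az + t) + b for t in thetas]
est_az, est_b = north_find(thetas, w)
```

With more rotation points the same fit averages down the gyro random error, which is one of the optimization approaches listed in the abstract.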
Lineament Azimuths on Europa: Implications for Evolution of the Europan Ice Shell
NASA Astrophysics Data System (ADS)
Kachingwe, M.; Rhoden, A.; Lekic, V.; Hurford, T., Jr.; Henning, W. G.
2016-12-01
Tectonic activity on Europa has been linked to tidal stress caused by its eccentric orbit, finite obliquity, and possibly non-synchronous rotation of the icy shell. Cycloids and other lineaments are thought to form in response to tidal normal stress, while strike-slip motion along preexisting faults has been attributed to tidal shear stress. Tectonic features can thus provide constraints on the rotational parameters that govern tidal stress and insight into the tidal-tectonic processes operating on ice-covered ocean bodies. Past lineament azimuth predictions based on stress models accounting for either spin pole precession or longitude translation yielded distributions that varied with location on Europa (e.g. Hurford, 2005; Fig. 16 of Rhoden and Hurford, 2013). Until now, these predicted azimuths have only been tested on a few spatially restricted regions. Additionally, these predictions were made using a thin shell approximation, which neglects the viscoelastic response of Europa's ice shell. Here, we present new measurements of lineament azimuths across geographically diverse regions of Europa, focusing on locations where lineament azimuths have never before been measured but which have been imaged at better than 250 m/pixel resolution. We focus on lineaments that do not exhibit substantial curvature, and we quantify deviations in azimuth observed along each lineament. We quantitatively compare the observed distributions against published predictions as well as new predictions made with a viscoelastic tidal stress model. These results have implications for Europa's interior and the evolution of tidal stress over time.
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
Catmull-Rom Curve Fitting and Interpolation Equations
ERIC Educational Resources Information Center
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
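The constant-coefficient form mentioned above can be written out directly. A minimal sketch of the uniform Catmull-Rom segment (the standard textbook formula, not code from the article):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the (uniform) Catmull-Rom segment between p1 and p2 at t in [0, 1].
    Expanded matrix form: a cubic with constant coefficients in the four
    control points."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3
    )

# The segment interpolates its two inner control points:
assert catmull_rom(0.0, 1.0, 2.0, 3.0, 0.0) == 1.0
assert catmull_rom(0.0, 1.0, 2.0, 3.0, 1.0) == 2.0
```

At t = 0 every term but 2·p1 vanishes, and at t = 1 the coefficients of p0, p1, and p3 sum to zero, which is exactly the "simple set of linear equations with constant coefficients" the abstract refers to.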
High-Fidelity Real-Time Trajectory Optimization for Reusable Launch Vehicles
2006-12-01
[List-of-figures excerpt, not an abstract: Figure 6.20, Max DR yawing moment history; Figure 6.21, snapshot from the MATLAB Profiler; propagation using "ode45" (Euler angles); Figures 6.114-6.116, interpolated elevon and flap controls using various MATLAB interpolation schemes.]
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
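The component-function view can be made concrete. A small sketch (standard formulas, illustrative node values) that builds the Lagrange basis polynomials and sums them into the interpolant:

```python
def lagrange_basis(xs, k, x):
    """k-th Lagrange basis polynomial for nodes xs: L_k(x_j) = 1 if j == k, else 0."""
    out = 1.0
    for j, xj in enumerate(xs):
        if j != k:
            out *= (x - xj) / (xs[k] - xj)
    return out

def lagrange_interp(xs, ys, x):
    """The interpolating polynomial as a weighted sum of basis components."""
    return sum(y * lagrange_basis(xs, k, x) for k, y in enumerate(ys))

# Illustrative nodes: samples of y = x^2 + x + 1
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]
```

Plotting each `y_k * lagrange_basis(xs, k, x)` separately is the graphical exercise the article describes: each component is responsible for one data point and the components always sum to the interpolant (and the bases themselves sum to one).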
Reducing Interpolation Artifacts for Mutual Information Based Image Registration
Soleimani, H.; Khosravifard, M.A.
2011-01-01
Medical image registration methods that use mutual information as a similarity measure have improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels that participate in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
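The PV idea of distributing bilinear weights into the joint histogram, rather than creating new intensities, can be sketched as follows. This is an illustrative four-neighbour implementation for nonnegative-integer-valued images and nonnegative shifts, not the authors' code:

```python
def pv_joint_hist(ref, flt, dx, dy, bins=256):
    """Partial Volume joint histogram of two images under a fractional shift
    (dx, dy) of the floating image: each reference pixel distributes the
    bilinear weights of its transformed position among the FOUR neighbouring
    floating-image intensities. No new intensity value is created, unlike
    bilinear intensity interpolation. Shifts assumed nonnegative for brevity."""
    h = [[0.0] * bins for _ in range(bins)]
    H, W = len(ref), len(ref[0])
    fx, fy = dx - int(dx), dy - int(dy)  # fractional parts of the shift
    for y in range(H):
        for x in range(W):
            x0, y0 = x + int(dx), y + int(dy)
            for xx, yy, w in ((x0, y0, (1 - fx) * (1 - fy)),
                              (x0 + 1, y0, fx * (1 - fy)),
                              (x0, y0 + 1, (1 - fx) * fy),
                              (x0 + 1, y0 + 1, fx * fy)):
                if 0 <= xx < W and 0 <= yy < H and w > 0:
                    h[ref[y][x]][flt[yy][xx]] += w
    return h

ref = [[0, 1], [2, 3]]
hist = pv_joint_hist(ref, ref, 0.0, 0.0, bins=4)  # zero shift: identity histogram
```

At integer shifts all the weight collapses onto one neighbour, which is precisely where the PV artifacts in the mutual information function arise.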
Mittag, U.; Kriechbaumer, A.; Rittweger, J.
2017-01-01
The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415
High accurate interpolation of NURBS tool path for CNC machine tools
NASA Astrophysics Data System (ADS)
Liu, Qiang; Liu, Huan; Yuan, Songmei
2016-09-01
Feedrate fluctuation caused by the approximation errors of interpolation methods has a great effect on machining quality in NURBS interpolation, but few existing methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
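To make the notion of feedrate fluctuation concrete, the sketch below measures it for plain first-order Taylor parameter interpolation, the baseline that higher-order methods like the quartic-equation approach improve on. A quadratic Bezier stands in for a NURBS span, and the feedrate and sampling period are illustrative values, not the paper's:

```python
import math

def curve(u):
    """Example parametric path (a quadratic Bezier standing in for a NURBS span)."""
    p0, p1, p2 = (0.0, 0.0), (5.0, 8.0), (10.0, 0.0)
    lerp = lambda a, c, t: a + (c - a) * t
    q0 = (lerp(p0[0], p1[0], u), lerp(p0[1], p1[1], u))
    q1 = (lerp(p1[0], p2[0], u), lerp(p1[1], p2[1], u))
    return (lerp(q0[0], q1[0], u), lerp(q0[1], q1[1], u))

def d_curve(u, eps=1e-6):
    """Central-difference derivative C'(u)."""
    a, b = curve(u - eps), curve(u + eps)
    return ((b[0] - a[0]) / (2 * eps), (b[1] - a[1]) / (2 * eps))

def taylor_interpolate(feed, ts, n):
    """First-order Taylor update u_{k+1} = u_k + F*Ts/|C'(u_k)| and the
    resulting per-step feedrate fluctuation (%), i.e. the deviation of the
    actual step length from the commanded F*Ts."""
    u, pts, fluct = 0.0, [curve(0.0)], []
    for _ in range(n):
        dx, dy = d_curve(u)
        u = min(1.0, u + feed * ts / math.hypot(dx, dy))
        pts.append(curve(u))
        step = math.hypot(pts[-1][0] - pts[-2][0], pts[-1][1] - pts[-2][1])
        fluct.append(100.0 * abs(step - feed * ts) / (feed * ts))
    return fluct

fluctuation = taylor_interpolate(50.0, 0.001, 100)  # feed 50 mm/s, Ts 1 ms
```

The fluctuation is small but nonzero wherever the curve's parametric speed |C'(u)| varies; solving for the exact parameter increment, as the paper does, is what drives it toward zero.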
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.
NASA Astrophysics Data System (ADS)
Chang, C.; Sun, L.; Lin, C.; Chang, Y.; Tseng, P.
2013-12-01
The existence of fractures not only provides space in which oil and gas can reside, but also creates pathways for their migration. Characterizing a fractured reservoir thus becomes an important subject and has been widely studied by exploration geophysicists and drilling engineers. In seismic anisotropy, a reservoir of systematically aligned vertical fractures (SAVF) is often treated as a transversely isotropic medium (TIM) with a horizontal axis of symmetry (HTI). In an HTI medium, physical properties vary with azimuth. P-wave reflection amplitude, which varies with azimuth, is one of the most popular seismic attributes used to delineate the fracture strike of an SAVF reservoir. Instead of analyzing P-wave signatures further, in this study we focused on evaluating the feasibility of orienting the fracture strike of an SAVF reservoir using converted (C-) wave amplitude. Because a C-wave is initiated by a downward-traveling P-wave that is converted on reflection to an upcoming S-wave, the behaviors of both P- and S-waves should theoretically be woven into a C-wave. In our laboratory work, finite-offset reflection experiments were carried out on the azimuthal plane of an HTI model at two different offset intervals. To demonstrate the azimuthal variation of C-wave amplitude in an HTI model, reflections were acquired along the principal symmetry directions and the diagonal direction of the HTI model. Owing to the phenomenon of S-wave splitting in a transversely isotropic medium, P-waves get converted into both the fast (S1) and slow (S2) shear modes at all azimuths outside the vertical symmetry planes, thus producing split PS-waves (PS1 and PS2). In our laboratory data, the converted PS1- (C1-) waves were observed and identified. As the azimuth varies from the strike direction to the strike normal, the C1-wave amplitude weakens, as can be seen in the common-reflection-point (CRP) gathers.
Therefore, in conjunction with the azimuthal velocity and amplitude variations of the P-wave and the azimuthal polarization of the S-wave, the experimentally demonstrated azimuthal variation of C-wave amplitude could be considered a valuable seismic attribute for orienting the fracture strike of an SAVF reservoir. (Key words: converted wave, transversely isotropic medium, physical modeling, amplitude, fracture)
NASA Astrophysics Data System (ADS)
Kudo, M.; Ueno, I.; Shiomi, J.; Amberg, G.; Kawamura, H.
Under microgravity conditions, thermocapillarity dominates in material processing. In the half-zone method, two coaxial cylindrical rods hold a liquid bridge by surface tension. By applying a temperature difference ΔT between the rods, thermocapillary flow is induced in the bridge. For fluids of medium to high Prandtl number (Pr), the convection changes from a two-dimensional steady flow to a three-dimensional oscillatory one at a critical ΔT. In our latest study (Shiomi et al., JFM, 2003), complete damping of the temperature oscillation was not achieved in highly nonlinear regimes by a simple cancellation scheme; the excitation of other, unexpected azimuthal wave numbers prevented suppression of the oscillation. The present study aimed to develop a new control scheme that takes into account the spatio-temporal azimuthal temperature distribution. The target geometry was a liquid bridge 5 mm in diameter with a unit aspect ratio, Γ (= H/R) = 1, where H and R are the height and radius of the bridge, respectively. At this aspect ratio, the dominant azimuthal mode had wave number 2 when the control was absent. Silicone oil of 5 cSt (Pr = 68 at 25 °C) was employed as the test fluid. The flow field was visualized by suspending polystyrene sphere particles (D = 17 μm). The present experiments were performed with 4 sensors located at different azimuthal positions for the evaluation of the azimuthal surface temperature distribution, and with 2 heaters to suppress its non-uniformity. All sensors and heaters were located at the mid-height of the bridge. The present algorithm involved two main features. The first was the time-dependent estimation of the azimuthal surface temperature distribution at the height of the sensors and heaters; evaluating this distribution enabled us to cancel the temperature oscillation effectively by local heating.
The second was the time-dependent evaluation of the frequency of the dominant mode; this enabled us to predict the azimuthal temperature distribution properly. The control was applied to a highly nonlinear flow that exhibited a traveling-wave-type oscillation in the absence of control. Under control, the amplitude of the temperature measured by each sensor attenuated significantly. Flow visualization exhibited a gradual change of the flow structure from the traveling wave to a standing wave with less nonlinearity. We achieved a reduction of the amplitude to less than half of its initial value without amplifying oscillations of other azimuthal wave numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
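The smooth kernel splitting at the heart of the method can be illustrated with a single cutoff level: 1/r is written as a short-range part that vanishes beyond a cutoff plus a smooth long-range part that is cheap to interpolate on a grid. The even-power softening polynomial below is one common choice of smoothing; the specific coefficients are an assumption for illustration, not necessarily those used in the article:

```python
def smoothed(r, a):
    """Smooth long-range part of 1/r: equals 1/r for r >= a, and an even-power
    softening inside the cutoff, matched in value and slope at r = a, so it is
    slowly varying and suitable for grid interpolation."""
    if r >= a:
        return 1.0 / r
    s = r / a
    return (15.0 / 8 - 5.0 / 4 * s**2 + 3.0 / 8 * s**4) / a

def split_kernel(r, a):
    """Exact split 1/r = short(r) + long(r); short(r) vanishes for r >= a."""
    g = smoothed(r, a)
    return (1.0 / r - g, g)

short_part, long_part = split_kernel(0.5, 2.0)  # inside the cutoff a = 2.0
```

In the full method this splitting is applied recursively, with each successive long-range remainder interpolated from an ever coarser grid, which is what yields the overall linear scaling.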
NASA Astrophysics Data System (ADS)
Miyajo, Akira; Hasegawa, Hideyuki
2018-07-01
At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires fine interpolation of the function that evaluates the similarity between ultrasonic echo signals in two frames in order to estimate small subsample displacements in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our 2D motion estimator, and applied the proposed method to the measurement of the motion of a human carotid arterial wall. The bias error and standard deviation of the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, significantly better than those (-0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
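The interpolation-free character of Fourier-domain motion estimation can be illustrated with integer-pixel phase correlation, a simplified stand-in for the authors' subsample 2D estimator (the naive O(N^4) DFT is only for tiny demo frames):

```python
import cmath

def dft2(img):
    """Naive 2-D DFT; fine for the tiny demo frames used here."""
    H, W = len(img), len(img[0])
    return [[sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / H + v * x / W))
                 for y in range(H) for x in range(W))
             for v in range(W)] for u in range(H)]

def phase_shift_estimate(f, g):
    """Integer-pixel translation of g relative to f from the peak of the
    inverse transform of the normalized cross-power spectrum (phase
    correlation). No interpolation of a similarity surface is involved."""
    F, G = dft2(f), dft2(g)
    H, W = len(f), len(f[0])
    R = [[(G[u][v] * F[u][v].conjugate()) /
          (abs(G[u][v] * F[u][v].conjugate()) or 1.0)  # guard against zero bins
          for v in range(W)] for u in range(H)]
    best, arg = float("-inf"), (0, 0)
    for dy in range(H):  # inverse DFT of R, evaluated at every candidate shift
        for dx in range(W):
            val = sum(R[u][v] * cmath.exp(2j * cmath.pi * (u * dy / H + v * dx / W))
                      for u in range(H) for v in range(W)).real
            if val > best:
                best, arg = val, (dy, dx)
    return arg

# A 4x4 frame and a copy cyclically shifted down by 1 and right by 2.
f = [[3, 1, 4, 1], [5, 9, 2, 6], [8, 9, 7, 9], [3, 2, 3, 8]]
g = [[f[(y - 1) % 4][(x - 2) % 4] for x in range(4)] for y in range(4)]
```

The phase of the cross-power spectrum encodes the displacement directly, which is why a phase-sensitive estimator avoids the costly similarity-surface interpolation that speckle tracking requires.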
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulation of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Quadratic trigonometric B-spline for image interpolation using GA
Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, namely the Genetic Algorithm (GA). The GA is employed to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906
Learning the dynamics of objects by optimal functional interpolation.
Ahn, Jong-Hoon; Kim, In Young
2012-09-01
Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.
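A concrete way to see the difference between pointwise interpolation and transport-respecting interpolation (in the spirit of, though not identical to, OFI) is 1-D displacement interpolation: blending quantile functions moves mass along the axis, whereas a cross-fade merely fades one bump out while the other fades in. The implementation below is a generic sketch, not the authors' algorithm.

```python
import numpy as np

def displacement_interp(p0, p1, t, x, n_quantiles=2048):
    # Mass-preserving 1-D interpolation: blend the quantile functions
    # (inverse CDFs) of the two densities, then re-histogram. Mass moves
    # along the axis instead of cross-fading in place.
    c0 = np.cumsum(p0) / np.sum(p0)
    c1 = np.cumsum(p1) / np.sum(p1)
    q = np.linspace(0.0, 1.0, n_quantiles)
    x0 = np.interp(q, c0, x)              # quantile function of p0
    x1 = np.interp(q, c1, x)              # quantile function of p1
    xt = (1.0 - t) * x0 + t * x1          # transported sample positions
    hist, edges = np.histogram(xt, bins=len(x), range=(x[0], x[-1]))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / hist.sum()

x = np.linspace(-5.0, 5.0, 201)
p0 = np.exp(-0.5 * (x + 2.0) ** 2)        # bump centred at -2
p1 = np.exp(-0.5 * (x - 2.0) ** 2)        # bump centred at +2
centers, pt = displacement_interp(p0, p1, 0.5, x)
# The halfway frame is a single bump centred near 0, not two fading bumps.
```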
Patch-based frame interpolation for old films via the guidance of motion paths
NASA Astrophysics Data System (ADS)
Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi
2018-04-01
Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that encode the motion-path constraint, the resulting frame interpolation looks natural and free of artifacts. We test on different types of old film sequences and compare with other methods; the results show that our method achieves the desired performance without hole or ghost effects.
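The bidirectional step can be illustrated with a deliberately simplified toy in which the motion path is a single global integer shift; real implementations warp per pixel using the optical-flow field and fill occlusion holes, which this sketch omits.

```python
import numpy as np

def warp(frame, shift):
    # Toy warp: a single global integer translation along the row axis.
    # (A real implementation warps per pixel using the optical-flow field.)
    return np.roll(frame, shift, axis=1)

def interp_frame(f0, f1, flow, t=0.5):
    # Bidirectional interpolation: warp both reference frames to time t
    # along the motion path, then blend them.
    fwd = warp(f0, int(round(t * flow)))
    bwd = warp(f1, -int(round((1.0 - t) * flow)))
    return (1.0 - t) * fwd + t * bwd

rng = np.random.default_rng(1)
f0 = rng.random((8, 16))
f1 = np.roll(f0, 4, axis=1)     # the scene moved 4 pixels between frames
mid = interp_frame(f0, f1, flow=4)
```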
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.
2016-06-28
In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.
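For reference, the grand-canonical ensemble result at issue is the familiar piecewise-linear dependence of the ground-state energy on electron number (Perdew, Parr, Levy, and Balduz): for a fractional occupation between integers N_0 and N_0 + 1,

```latex
E(N_0 + \omega) \;=\; (1-\omega)\,E(N_0) \;+\; \omega\,E(N_0+1),
\qquad 0 \le \omega \le 1 ,
```

so the left and right derivatives at integer N differ, with \partial E/\partial N jumping from -I (the ionization energy) to -A (the electron affinity). Any smooth interpolation through E(N_0) and E(N_0+1) necessarily departs from this piecewise-linear form, which is the incompatibility the abstract refers to.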
Shape Control in Multivariate Barycentric Rational Interpolation
NASA Astrophysics Data System (ADS)
Nguyen, Hoa Thang; Cuyt, Annie; Celis, Oliver Salazar
2010-09-01
The most stable formula for a rational interpolant for use on a finite interval is the barycentric form [1, 2]. A simple choice of the barycentric weights ensures the absence of (unwanted) poles on the real line [3]. In [4] we indicate that a more refined choice of the weights in barycentric rational interpolation can guarantee comonotonicity and coconvexity of the rational interpolant in addition to a pole-free region of interest. In this presentation we generalize the above to the multivariate case. We use a product-like form of univariate barycentric rational interpolants and indicate how the location of the poles and the shape of the function can be controlled. This functionality is of importance in the construction of mathematical models that need to express a certain trend, such as probability distributions, economics, population dynamics, tumor growth models, etc.
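A minimal univariate instance of the barycentric form uses the simplest pole-free weight choice, Berrut's w_i = (-1)^i; the code below is a generic sketch of that formula, not the authors' shape-controlled multivariate construction.

```python
import numpy as np

def berrut_interpolate(x_nodes, f_nodes, x_eval):
    # Barycentric rational interpolant with Berrut's weights w_i = (-1)^i,
    # a simple choice with no poles on the real line.
    w = (-1.0) ** np.arange(len(x_nodes))
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    diff = x_eval[:, None] - x_nodes[None, :]
    exact = np.isclose(diff, 0.0)             # queries that hit a node
    diff[exact] = 1.0                         # avoid division by zero
    terms = w / diff
    r = (terms @ f_nodes) / terms.sum(axis=1)
    r[exact.any(axis=1)] = f_nodes[exact.nonzero()[1]]  # exact at nodes
    return r

x_nodes = np.linspace(0.0, 1.0, 11)
f_nodes = np.sin(x_nodes)
mids = 0.5 * (x_nodes[:-1] + x_nodes[1:])
approx = berrut_interpolate(x_nodes, f_nodes, mids)
```

The interpolant reproduces the data at the nodes exactly and approximates smooth functions between them; refined weight choices (as in the abstract) additionally control shape.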
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
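The thin-plate spline fit itself is standard and can be sketched from scratch: solve the kernel system with U(r) = r² log r plus an affine part. The 2-D inputs below are generic random points, standing in for the paper's thumbnail features and illumination chromaticities.

```python
import numpy as np

def tps_fit(centers, values, reg=0.0):
    # Solve the thin-plate spline system  [K P; P^T 0] [a; b] = [v; 0]
    # with kernel U(r) = r^2 log r (and U(0) = 0) plus an affine part.
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0.0, d**2 * np.log(d), 0.0)
    K += reg * np.eye(n)                       # optional smoothing term
    P = np.hstack([np.ones((n, 1)), centers])  # affine basis [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]

def tps_eval(centers, a, b, pts):
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d > 0.0, d**2 * np.log(d), 0.0)
    return U @ a + b[0] + pts @ b[1:]

rng = np.random.default_rng(2)
centers = rng.random((12, 2))                        # stand-in training inputs
values = np.sin(centers[:, 0]) + centers[:, 1] ** 2  # stand-in chromaticities
a, b = tps_fit(centers, values)
```

With reg = 0 the spline interpolates the training data exactly; a small positive reg trades exactness for smoothness, useful with noisy chromaticity labels.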
NASA Astrophysics Data System (ADS)
Tirani, M. D.; Maleki, M.; Kajani, M. T.
2014-11-01
A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud-type weights. Nonclassical orthogonal polynomials, nonclassical Radau points, and weighted interpolation are introduced and utilized on the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
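The abstract does not state the specific transformation; a common strictly monotonic choice for mapping [0, ∞) onto [0, 1) is the algebraic map t = x/(x + L) with a scale L > 0. The sketch below uses this hypothetical choice for illustration only.

```python
import numpy as np

L = 1.0  # hypothetical map scale; the paper's actual mapping is not stated

def to_finite(x):
    # Strictly monotonic map of [0, inf) onto [0, 1): t = x / (x + L).
    return x / (x + L)

def to_infinite(t):
    # Inverse map of [0, 1) back onto [0, inf).
    return L * t / (1.0 - t)

xs = np.linspace(0.0, 50.0, 500)
ts = to_finite(xs)   # monotonically increasing, bounded below 1
```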
Multiphase-field model of small strain elasto-plasticity according to the mechanical jump conditions
NASA Astrophysics Data System (ADS)
Herrmann, Christoph; Schoof, Ephraim; Schneider, Daniel; Schwab, Felix; Reiter, Andreas; Selzer, Michael; Nestler, Britta
2018-04-01
We introduce a small strain elasto-plastic multiphase-field model according to the mechanical jump conditions. A rate-independent J_2-plasticity model with linear isotropic hardening and without kinematic hardening is applied as an example. In general, any physically nonlinear mechanical model is compatible with the subsequently presented procedure. In contrast to models with interpolated material parameters, the proposed model is able to apply different nonlinear mechanical constitutive equations for each phase separately. The Hadamard compatibility condition and the static force balance are employed as homogenization approaches to calculate the phase-inherent stresses and strains. Several verification cases are discussed. The applicability of the proposed model is demonstrated by simulations of the martensitic transformation and quantitative parameters.
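The constitutive building block, rate-independent plasticity with linear isotropic hardening, can be sketched in its simplest 1-D return-mapping form. This is the generic textbook algorithm, not the paper's multiphase-field implementation, and the moduli below are illustrative numbers.

```python
import numpy as np

def radial_return_1d(strain_path, E=200.0e3, H=10.0e3, sigma_y0=250.0):
    # Rate-independent 1-D plasticity with linear isotropic hardening:
    # elastic predictor / plastic corrector (return mapping) at each step.
    # (Moduli in MPa; illustrative values, not taken from the paper.)
    eps_p = 0.0     # plastic strain
    alpha = 0.0     # accumulated plastic strain (hardening variable)
    stresses = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)                # elastic predictor
        f = abs(sigma_trial) - (sigma_y0 + H * alpha)  # yield function
        if f > 0.0:
            dgamma = f / (E + H)                       # plastic corrector
            eps_p += dgamma * np.sign(sigma_trial)
            alpha += dgamma
            sigma = E * (eps - eps_p)                  # back on the surface
        else:
            sigma = sigma_trial
        stresses.append(sigma)
    return np.array(stresses)

s = radial_return_1d(np.linspace(0.0, 0.01, 200))
```

For a monotonic strain ramp the discrete algorithm is exact for linear hardening: the response is elastic up to the initial yield stress and then follows the reduced elasto-plastic tangent E·H/(E + H).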
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as the Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in HIM-operator-based phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
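A minimal example of interpolating on Fourier coefficients for single-tone frequency estimation is Jacobsen's three-bin estimator, shown below as a simple non-iterative stand-in (IFEIF itself iterates a related correction):

```python
import numpy as np

def estimate_tone_freq(x, fs):
    # Coarse search: largest-magnitude bin of the real FFT.
    X = np.fft.rfft(x)
    k = int(np.argmax(np.abs(X[1:-1]))) + 1   # skip DC and Nyquist bins
    # Fine search: fractional-bin offset from interpolation on the three
    # complex Fourier coefficients around the peak (Jacobsen's estimator).
    num = X[k - 1] - X[k + 1]
    den = 2.0 * X[k] - X[k - 1] - X[k + 1]
    delta = float(np.real(num / den))
    return (k + delta) * fs / len(x)

fs, n = 1000.0, 1024
t = np.arange(n) / fs
tone = np.sin(2.0 * np.pi * 50.3 * t)
f_hat = estimate_tone_freq(tone, fs)   # close to 50.3 Hz, well below bin width
```

The refinement matters because the bare FFT grid quantizes frequency to fs/N (about 0.98 Hz here), whereas the interpolated estimate resolves a small fraction of a bin.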