Sample records for point-source harmonic interpolation

  1. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
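    Poisson coordinates extend mean value coordinates (MVCs); a minimal NumPy sketch of the 2D MVCs they generalize (not the Poisson integral formula itself) illustrates the coordinates-based boundary interpolation idea, including the linear precision both schemes share:

```python
import numpy as np

def mean_value_coords(x, verts):
    """Mean value coordinates of interior point x w.r.t. polygon verts (CCW)."""
    d = verts - x                       # vectors from x to each vertex
    r = np.linalg.norm(d, axis=1)       # distances r_i
    n = len(verts)
    ang = np.empty(n)                   # angle at x between v_i and v_{i+1}
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        ang[i] = np.arctan2(cross, d[i] @ d[j])
    t = np.tan(ang / 2.0)
    w = (np.roll(t, 1) + t) / r         # w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / r_i
    return w / w.sum()                  # normalize to a partition of unity

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = np.array([0.3, 0.6])
lam = mean_value_coords(x, square)
f = square[:, 0] + 2 * square[:, 1]     # boundary values of f(x, y) = x + 2y
f_interp = lam @ f                      # transfinite interpolation at x
```

    For any linear boundary function the interpolant is exact (linear precision); Poisson coordinates keep this property and additionally reproduce harmonic functions on n-dimensional balls.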

  2. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
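    The 2-D cubic spline ingredient is easy to sketch with SciPy's `RectBivariateSpline` on a hypothetical bias grid; the device model and all values below are invented for illustration, not the paper's measured HEMT data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# hypothetical coarse bias grid: drain current Ids(Vgs, Vds)
vgs = np.linspace(-1.0, 0.0, 11)
vds = np.linspace(0.0, 2.0, 21)
ids = np.array([[(g + 1.2) ** 2 * np.tanh(3 * d) for d in vds] for g in vgs])

# interpolating 2-D cubic spline (s=0 passes through every sample)
spline = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)
ids_dense = spline(-0.35, 1.27)[0, 0]   # evaluate between grid points
```

    A harmonic balance loop would then query `spline` at the instantaneous terminal voltages instead of re-measuring the device at every bias point.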

  3. A novel power harmonic analysis method based on Nuttall-Kaiser combination window double spectrum interpolated FFT algorithm

    NASA Astrophysics Data System (ADS)

    Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.

    2017-11-01

    Harmonics pose a great threat to the safe and economical operation of power grids. It is therefore critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for electrical power harmonics analysis. However, the barrier effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often affect the harmonic analysis accuracy. This paper examines a new approach to harmonic analysis based on deducing modifier formulas for frequency, phase angle, and amplitude, utilizing the Nuttall-Kaiser window double spectrum line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
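    The double-spectrum-line idea can be illustrated with the simpler Hann window, whose two-line frequency correction has a closed form; the Nuttall-Kaiser combination window of the paper has its own deduced modifier formulas, which are not reproduced here. The tone frequency below is deliberately asynchronous (it falls between FFT bins):

```python
import numpy as np

fs, n = 5000.0, 1024                   # sample rate (Hz), record length
f_true, a_true = 152.7, 3.0            # asynchronously sampled tone
t = np.arange(n) / fs
x = a_true * np.sin(2 * np.pi * f_true * t)

w = np.hanning(n)                      # Hann window (stand-in for Nuttall-Kaiser)
X = np.abs(np.fft.rfft(x * w))

k = np.argmax(X[1:]) + 1               # highest spectral line (skip DC)
# the true peak lies between the highest line and its larger neighbour
j = k + 1 if X[k + 1] > X[k - 1] else k - 1
k1, k2 = min(k, j), max(k, j)
beta = X[k2] / X[k1]
delta = (2 * beta - 1) / (beta + 1)    # Hann two-line correction, 0 <= delta < 1
f_est = (k1 + delta) * fs / n          # corrected frequency estimate
```

    Without the correction the best estimate is the bin centre (about 151.4 Hz here); the two-line interpolation recovers the true 152.7 Hz to within a small fraction of a bin.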

  4. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n^3) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes generate repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. The two strategies are evaluated in terms of their computational efficiency in comparison with the standard ordinary Kriging algorithm. The results show that they can reduce the time taken to perform the interpolation by up to 90%, approaching an average-case time complexity of O(n^2) when most, but not all, source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, further efficiency gains could potentially be achieved. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms capable of real-time spatial interpolation with large streaming data sets.
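    A sketch of the setting these strategies exploit: when the source locations repeat over time, the O(n^3) factorization of the ordinary kriging system can be computed once and reused, leaving O(n^2) work per update. The Gaussian covariance model and all values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cov(h, sill=1.0, rng=1.0):
    """Gaussian covariance model (an assumed variogram choice)."""
    return sill * np.exp(-(h / rng) ** 2)

def factorize(pts):
    """O(n^3) setup, done once while the sensor locations stay fixed."""
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    n = len(pts)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(h)
    A[:n, n] = A[n, :n] = 1.0          # unbiasedness (Lagrange) row/column
    return lu_factor(A)

def krige(fac, pts, vals, x0):
    """O(n^2) per query once the factorization is cached."""
    b = np.append(cov(np.linalg.norm(pts - x0, axis=1)), 1.0)
    w = lu_solve(fac, b)[:-1]          # ordinary kriging weights
    return w @ vals

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.7]])
fac = factorize(pts)                   # reused across time steps
vals_t1 = np.array([1.0, 2.0, 2.5, 3.0, 1.8])   # observations at one time step
z = krige(fac, pts, vals_t1, np.array([0.5, 0.5]))
```

    Because kriging is an exact interpolator, querying at a data location returns that observation, which makes the caching easy to sanity-check.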

  5. Nearly arc-length tool path generation and tool radius compensation algorithm research in FTS turning

    NASA Astrophysics Data System (ADS)

    Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao

    2014-08-01

    In the generation of non-rotationally symmetric microstructured surfaces by turning with a Fast Tool Servo (FTS), a non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly arc-length spacing instead of the traditional equal-angle interpolation rule, and adds tool radius compensation. Because of the constant feed speed of the X slider, all interpolation points are equidistant in the radial direction; the high-frequency tool radius compensation components appear in both the X and Z directions, which makes it difficult for the X slider, with its large mass, to follow the commanded motion. Newton's iterative method is used to calculate the coordinates of the neighboring contour tangent point, taking the X position of the interpolation point as the initial value; in this way a new Z coordinate is obtained, and the high-frequency motion component in the X direction is decomposed into the Z direction. Taking as a test case a typical microstructure with a 4 μm PV value, composed of two sine waves of 70 μm wavelength, the maximum profile error at an angle of fifteen degrees is less than 0.01 μm when turning with a diamond tool with a large radius of 80 μm. A sinusoidal grid was successfully machined on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points generated by filtering out the first five harmonics.

  6. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    NASA Astrophysics Data System (ADS)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds, and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined, with four, on average, giving values within 4 kJ/mol (chemical accuracy) for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates, and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost beyond the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas in previous simulations a significant proportion had to be discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energies than on Veff, and classical simulations of kinetic energy release should therefore be viewed with caution.

  7. Unitary subsector of generalized minimal models

    NASA Astrophysics Data System (ADS)

    Behan, Connor

    2018-05-01

    We revisit the line of nonunitary theories that interpolate between the Virasoro minimal models. Numerical bootstrap applications have brought about interest in the four-point function involving the scalar primary of lowest dimension. Using recent progress in harmonic analysis on the conformal group, we prove the conjecture that global conformal blocks in this correlator appear with positive coefficients. We also compute many such coefficients in the simplest mixed correlator system. Finally, we comment on the status of using global conformal blocks to isolate the truly unitary points on this line.

  8. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, the harmonic distortion power rate index is proposed for harmonic source location, based on IEEE Std 1459-2010. A method based only on harmonic distortion power is not suitable when the background harmonic level is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
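    A toy version of the underlying quantity: per-harmonic active power computed from synchronized voltage and current records at the coupling point, whose sign indicates whether harmonic power flows from the utility or from the customer side. This is a simplified stand-in, not the full IEEE Std 1459-2010 distortion power rate index; the signals are synthetic:

```python
import numpy as np

def harmonic_active_power(v, i, cycles, h):
    """Active power of harmonic h, P_h = 0.5 * Re(V_h * conj(I_h)), from one
    record holding an integer number `cycles` of fundamental periods."""
    n = len(v)
    V = 2 * np.fft.rfft(v) / n          # complex amplitudes per bin
    I = 2 * np.fft.rfft(i) / n
    k = h * cycles                      # harmonic h falls exactly on bin h*cycles
    return 0.5 * np.real(V[k] * np.conj(I[k]))

cycles, n = 10, 2000                    # 10 fundamental cycles, 2000 samples
t = np.linspace(0.0, 1.0, n, endpoint=False)
f1 = cycles                             # fundamental frequency in record units
v = 311 * np.sin(2 * np.pi * f1 * t) + 15 * np.sin(2 * np.pi * 3 * f1 * t)
i = 10 * np.sin(2 * np.pi * f1 * t - 0.3) + 2 * np.sin(2 * np.pi * 3 * f1 * t + np.pi)

P1 = harmonic_active_power(v, i, cycles, 1)   # fundamental power (positive)
P3 = harmonic_active_power(v, i, cycles, 3)   # 3rd-harmonic power
customer_is_source = P3 < 0             # negative: harmonic power flows from load
```

    Here the 3rd-harmonic current opposes the 3rd-harmonic voltage, so P3 is negative and the customer side is flagged as the harmonic source; a threshold on the magnitude, as in the paper, would guard against large background harmonics.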

  9. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    PubMed

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
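    The fill order described above can be sketched on a toy grid: NaN marks non-indexed pixels, a band contrast (BC) map gates which pixels may ever be filled, and pixels with the most indexed neighbours are filled first. This is a much simplified, serial version of the EBSDinterp idea (the real program interpolates orientations, not scalar means, and runs in parallel):

```python
import numpy as np

def constrained_fill(data, bc, bc_min=0.3, min_neighbors=4):
    """Fill NaN pixels with the mean of their indexed 8-neighbours,
    most-constrained pixels first; pixels with band contrast below
    bc_min are never filled."""
    out = data.copy()
    for need in range(8, min_neighbors - 1, -1):   # most neighbours first
        changed = True
        while changed:
            changed = False
            for y, x in zip(*np.where(np.isnan(out))):
                if bc[y, x] < bc_min:
                    continue                        # low BC: leave unindexed
                nb = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                good = nb[~np.isnan(nb)]            # indexed neighbours only
                if good.size >= need:
                    out[y, x] = good.mean()
                    changed = True
    return out

data = np.full((5, 5), 2.0)
data[2, 2] = np.nan          # non-indexed interior pixel, high BC
data[0, 4] = np.nan          # non-indexed pixel sitting on low BC
bc = np.ones((5, 5))
bc[0, 4] = 0.1
res = constrained_fill(data, bc)
```

    The interior pixel is filled from its eight indexed neighbours, while the low-BC pixel is left untouched, mimicking the restriction that keeps grains from erroneously "growing" into low-quality regions.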

  10. An approach for spherical harmonic analysis of non-smooth data

    NASA Astrophysics Data System (ADS)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
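    For the zonal (m = 0) case the analysis step can be sketched with plain NumPy: each spherical harmonic coefficient is a quadrature of the gridded field against Y_l0 on the equiangular grid. A simple midpoint rule stands in here; the paper's recursion-based integration, which is exact for the bilinear interpolant within each cell, is not reproduced:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def y_l0(l, theta):
    """Real zonal spherical harmonic Y_l0(theta)."""
    c = np.zeros(l + 1)
    c[l] = 1.0                                  # select Legendre degree l
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * legval(np.cos(theta), c)

def zonal_coeff(f_grid, theta, phi, l):
    """c_l = integral of f * Y_l0 over the sphere, midpoint quadrature."""
    dth = theta[1] - theta[0]
    dph = phi[1] - phi[0]
    w = np.sin(theta)[:, None] * dth * dph      # area element sin(theta) dtheta dphi
    return np.sum(f_grid * y_l0(l, theta)[:, None] * w)

nth, nph = 180, 360                             # midpoint equiangular grid
theta = (np.arange(nth) + 0.5) * np.pi / nth
phi = (np.arange(nph) + 0.5) * 2 * np.pi / nph
f = 3.0 * y_l0(2, theta)[:, None] * np.ones((1, nph))   # test field = 3 * Y_20
c2 = zonal_coeff(f, theta, phi, 2)              # should recover 3
c4 = zonal_coeff(f, theta, phi, 4)              # should vanish by orthogonality
```

    Recovering the known coefficient of an axisymmetric test field mirrors the validation strategy in the abstract, where computed coefficients are compared with analytical expressions.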

  11. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for the reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces of this approach. To optimize the algorithm, different color spaces and light sources have been used to build the LUTs, and the effects of the applied color datasets as well as the employed color spaces are investigated. Results of the recovery are evaluated by the mean and maximum color difference values under other sets of standard light sources. The mean and maximum root mean square (RMS) errors between the reconstructed and actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point of the LUT algorithm, the processing time spent on interpolation of the spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach: because of the nature of the interpolation technique, the colorimetric position of the desired samples should lie inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement, in terms of RMS error between the actual and reconstructed reflectance spectra as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.

  12. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges: the sampled ELFs are usually very heterogeneous, can originate from various sources (so that "data gaps" can appear), and may contain significant discontinuities and multiple high outliers. As a result, an interpolation based on such data may not perform well at predicting physically reasonable results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. The proposed technique is found to give the most accurate results, with a short computational time. It is thus feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
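    The pipeline is easy to sketch: log-log transform the sampled ELF, fit a local monotonicity-preserving spline, and transform back. SciPy does not ship Steffen's spline, so PCHIP, another local monotonicity-preserving interpolant with the same no-overshoot guarantee, stands in; the sample values are invented:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# non-uniformly sampled positive data, e.g. an ELF versus energy (eV)
e = np.array([0.5, 1.0, 3.0, 10.0, 40.0, 200.0, 1000.0])
elf = np.array([0.02, 0.15, 1.8, 0.9, 0.12, 0.01, 0.0005])

# log-log transform evens out the sampling density before fitting
fit = PchipInterpolator(np.log(e), np.log(elf))

def elf_interp(energy):
    """Evaluate the fit in log-log space and map back to linear scale."""
    return np.exp(fit(np.log(energy)))
```

    Like the Steffen spline, the fitted curve passes through every sample and introduces no extrema between adjacent grid points, so the interpolant never oscillates through the data gaps.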

  13. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771

  14. Interpolating precipitation and its relation to runoff and non-point source pollution.

    PubMed

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics; however, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen polygon method, the traditional inverse distance method, and a modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated by any interpolation method will be quite close to the actual precipitation. When rainfall is heavy at high-elevation locations, the rainfall changes with elevation; in this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield the rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the correlation between the relative error of the predicted runoff and that of the predicted SS pollutant loading is strong. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and that of the predicted SS concentration may be unstable.
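    A sketch of the modified inverse distance idea, with an assumed form for how elevation differences enter the distance (the paper's exact weighting is not reproduced): gauges at very different elevations are down-weighted even when horizontally close:

```python
import numpy as np

def modified_idw(x0, z0, xs, zs, vals, p=2.0, c=1.0):
    """Inverse distance weighting whose squared distance mixes horizontal
    separation with elevation difference: d_i^2 = |x0 - x_i|^2 + c*(z0 - z_i)^2.
    The factor c (a hypothetical parameter) sets how strongly elevation
    differences penalize a station's influence; c=0 recovers plain IDW."""
    d2 = np.sum((xs - x0) ** 2, axis=1) + c * (zs - z0) ** 2
    if np.any(d2 == 0):
        return vals[np.argmin(d2)]      # query point coincides with a station
    w = 1.0 / d2 ** (p / 2)
    return w @ vals / w.sum()

# three rain gauges: (x, y) in km, elevation in km, rainfall in mm
xs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
zs = np.array([0.2, 0.2, 1.5])
rain = np.array([20.0, 24.0, 60.0])

# ungauged low-elevation site, horizontally equidistant from all gauges
site, site_z = np.array([5.0, 5.0]), 0.2
est_plain = modified_idw(site, site_z, xs, zs, rain, c=0.0)
est_mod = modified_idw(site, site_z, xs, zs, rain, c=50.0)
```

    Plain IDW averages the three gauges equally; the modified distance discounts the high-elevation gauge, pulling the estimate toward the gauges that share the site's elevation.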

  15. Resolving the structure of the Galactic foreground using Herschel measurements and the Kriging technique

    NASA Astrophysics Data System (ADS)

    Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.

    2018-05-01

    Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the embedded foreground and background point sources appearing in the given fields, especially in the infrared. We report our study of subtracting point sources from Herschel images with Kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the Kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied with much higher resolution than previously possible, leading to a better foreground subtraction.

  16. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    The ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; these intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are known only up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions, and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach agree well with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimations whose general TEC level is as smooth as ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the newly proposed method is better than ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those of the other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than that of Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China region.

  18. Acoustic reciprocity: An extension to spherical harmonics domain.

    PubMed

    Samarasinghe, Prasanga; Abhayapala, Thushara D; Kellermann, Walter

    2017-10-01

    Acoustic reciprocity is a fundamental property of acoustic wavefields that is commonly used to simplify the measurement process of many practical applications. Traditionally, the reciprocity theorem is defined between a monopole point source and a point receiver. Intuitively, it must apply to more complex transducers than monopoles. In this paper, the authors formulate the acoustic reciprocity theory in the spherical harmonics domain for directional sources and directional receivers with higher order directivity patterns.

  19. Heat Flow Contours and Well Data Around the Milford FORGE Site

    DOE Data Explorer

    Joe Moore

    2016-03-09

    This submission contains a shapefile of heat flow contour lines around the FORGE site located in Milford, Utah. The heat flow model was interpolated from 66 data points in the Milford_wells shapefile using the kriging method in the Geostatistical Analyst tool of ArcGIS, and the resulting model was smoothed 100%. The well dataset contains 59 wells from various sources, with lat/long coordinates, temperature, quality, basement depth, and heat flow. These data were used to make models of the specific characteristics.

  20. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    If g_{N_F} denotes the spline computed from the uniform partition of N_F + 1 frequency points, the splines converge as O(N_F^{-4}): ‖g_{N_F} − g‖∞ ≤ C_0 ‖g^{(4)}‖∞ N_F^{-4}. There is the possibility of estimating the error ‖g − g_{N_F}‖∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_{N_F}‖∞ to the computable estimates ‖g_{N_F} − g_{2N_F}‖∞; the latter is a strong predictor of the unknown error. The triple bar ‖·‖∞ denotes the sup-norm error over all the …
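    Both the fourth-order convergence and the computable surrogate for the unknown error are easy to check numerically with a smooth stand-in function (the antenna-pattern data themselves are not available here):

```python
import numpy as np
from scipy.interpolate import CubicSpline

g = np.cos                              # smooth stand-in for the sampled pattern
probe = np.linspace(0.0, 2 * np.pi, 10001)

def sup_error(nf):
    """Sup-norm gap between the spline on nf+1 uniform points and g."""
    x = np.linspace(0.0, 2 * np.pi, nf + 1)
    s = CubicSpline(x, g(x))
    return np.max(np.abs(s(probe) - g(probe)))

e16, e32 = sup_error(16), sup_error(32)
rate = e16 / e32                        # near 2^4 = 16 for O(nf^-4) convergence

# computable surrogate ||g_N - g_2N||_inf tracks the unknown error ||g_N - g||_inf
x16 = np.linspace(0.0, 2 * np.pi, 17)
x32 = np.linspace(0.0, 2 * np.pi, 33)
s16, s32 = CubicSpline(x16, g(x16)), CubicSpline(x32, g(x32))
surrogate = np.max(np.abs(s16(probe) - s32(probe)))
```

    Because the finer spline is roughly sixteen times more accurate, the spline-to-spline gap is dominated by the coarse spline's error, which is why the surrogate predicts the unknown error so well.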

  1. Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2016-08-01

    The aim of this study is to derive temperature, pressure and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of the precipitable water vapor (PWV) of TUSAGA-Active stations in a test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).

  2. A geostatistical approach to data harmonization - Application to radioactivity exposure data

    NASA Astrophysics Data System (ADS)

    Baume, O.; Skøien, J. O.; Heuvelink, G. B. M.; Pebesma, E. J.; Melles, S. J.

    2011-06-01

    Environmental issues such as air and groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures, biases may appear and cause unrealistic results: the resulting maps may show misleading patterns and lead to wrong interpretations, and errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model so that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented, and it is shown that these lead to harmonized maps from which estimated biases have been removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigation proposed.

  3. A robust interpolation procedure for producing tidal current ellipse inputs for regional and coastal ocean numerical models

    NASA Astrophysics Data System (ADS)

    Byun, Do-Seong; Hart, Deirdre E.

    2017-04-01

    Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps are followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable.
We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
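
    The Cartesian-coordinate workaround described above can be sketched in a few lines: convert each angle to its cosine and sine components, interpolate those components, and convert the result back to an angle. This is an illustrative sketch, not the authors' procedure; the function name and the use of `numpy.interp` are assumptions:

```python
import numpy as np

def interp_angle_deg(x, xp, angles_deg, period=360.0):
    """Interpolate angular data (e.g. tidal phases or inclinations)
    without wrap-around errors, by interpolating the Cartesian
    components of the unit vector rather than the raw angle.
    `period` is 360 for phases or 180 for ellipse inclinations."""
    # Map the angle onto a full circle so 179°/0° wrapping (period=180)
    # is handled the same way as 359°/0° wrapping (period=360).
    theta = np.deg2rad(np.asarray(angles_deg) * (360.0 / period))
    # Interpolate the cos/sin components, which vary smoothly across 0°.
    c = np.interp(x, xp, np.cos(theta))
    s = np.interp(x, xp, np.sin(theta))
    # Convert back to an angle in [0, period).
    return np.rad2deg(np.arctan2(s, c)) % 360.0 * (period / 360.0)

# Naive linear interpolation midway between 359° and 2° gives ~180.5°;
# interpolating the Cartesian components gives the smooth answer, 0.5°.
print(interp_angle_deg(0.5, [0.0, 1.0], [359.0, 2.0]))
```

The same routine covers the 179°/0° inclination boundary by passing `period=180.0`.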

  4. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both in theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is ordinary kriging. It is popular because it is a best linear unbiased estimator. The price of its statistical optimality is that the estimator is computationally very expensive: the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. The proper size for this neighborhood is determined by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering, which achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique had previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency, and the accuracy, of approximating ordinary kriging through covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
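
    The tapering idea described above can be illustrated by multiplying a dense covariance matrix by a compactly supported taper function, so that entries beyond the taper range become exactly zero and sparse solvers apply. The exponential covariance, Wendland-1 taper, and parameter values below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=1.0):
    """Exponential covariance model (an illustrative choice)."""
    return sill * np.exp(-h / rng)

def wendland1_taper(h, theta):
    """Wendland-1 taper: positive definite, identically zero for h > theta."""
    t = np.clip(h / theta, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

rng_state = np.random.default_rng(0)
pts = rng_state.random((400, 2))                      # scattered sample sites
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

dense = exp_cov(h, rng=0.3)                           # fully dense covariance
tapered = dense * wendland1_taper(h, theta=0.15)      # mostly zeros -> sparse system

print(np.count_nonzero(tapered) / dense.size)         # small surviving fraction
```

Because the elementwise product of two positive definite functions is again positive definite, the tapered matrix remains a valid covariance while being overwhelmingly sparse.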

  5. Artifacts Introduced in the Point Evaluation of Functions Expanded into a Degree 360 Spherical Harmonic Series

    NASA Technical Reports Server (NTRS)

    Rapp, R.

    1999-01-01

    An expansion of a function initially given in 1deg cells was carried out to degree 360 by using 30' cells, each assigned the value of the 1deg cell in which it fell. The evaluation of point values of the function from the degree 360 expansion revealed spurious patterns attributed to the coefficients from degree 181 to 360. Expansion of the original function in 1deg cells to degree 180 showed no problems in the point evaluation. Mean 1deg values computed from both the degree 180 and degree 360 expansions showed close agreement with the original function. The artifacts could be removed if the 30' values were interpolated by spline procedures from adjacent 1deg cells. These results led to an examination of the gravity anomalies and geoid undulations from EGM96 in areas where 1deg values were "split up" to form 30' cells. The area considered was 75degS to 85degS, 100degE to 120degE, where the split-up cells were essentially south of 81degS. A small, latitude-related, and possibly spurious effect might be detectable in the anomaly variations in the region. These results suggest that point values of a function computed from a high-degree expansion may have spurious signals unless the cell size is compatible with the maximum degree of the expansion. The spurious signals could be eliminated by using a spline interpolation procedure to obtain the 30' values from the 1deg values.

  6. Quantum effects and anharmonicity in the H2-Li+-benzene complex: A model for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Kolmann, Stephen J.; D'Arcy, Jordan H.; Jordan, Meredith J. T.

    2013-12-01

    Quantum and anharmonic effects are investigated in H2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials. Three- and 8-dimensional quantum diffusion Monte Carlo (QDMC) and rigid-body diffusion Monte Carlo (RBDMC) simulations are performed on potential energy surfaces interpolated from electronic structure calculations at the M05-2X/6-31+G(d,p) and M05-2X/6-311+G(2df,p) levels of theory using a three-dimensional spline or a modified Shepard interpolation. These calculations investigate the intermolecular interactions in this system, with three- and 8-dimensional 0 K H2 binding enthalpy estimates, ΔHbind (0 K), being 16.5 kJ mol-1 and 12.4 kJ mol-1, respectively: 0.1 and 0.6 kJ mol-1 higher than harmonic values. Zero-point energy effects are 35% of the value of ΔHbind (0 K) at M05-2X/6-311+G(2df,p) and cannot be neglected; uncorrected electronic binding energies overestimate ΔHbind (0 K) by at least 6 kJ mol-1. Harmonic intermolecular binding enthalpies can be corrected by treating the H2 "helicopter" and "ferris wheel" rotations as free and hindered rotations, respectively. These simple corrections yield results within 2% of the 8-dimensional anharmonic calculations. Nuclear ground state probability density histograms obtained from the QDMC and RBDMC simulations indicate the H2 molecule is delocalized above the Li+-benzene system at 0 K.

  8. Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.

  9. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  10. Method for determining size of inhomogeneity localization region based on analysis of secondary wave field of second harmonic

    NASA Astrophysics Data System (ADS)

    Chernov, N. N.; Zagray, N. P.; Laguta, M. V.; Varenikova, A. Yu

    2018-05-01

    The article describes a method for localizing and determining the size of an inhomogeneity in biological tissue. The equation for an acoustic harmonic wave propagating in the positive direction is taken as the starting point. A three-dimensional expression describing the field of secondary sources at the observation point is obtained. The change in the amplitude of the vibrational velocity of the second harmonic of the acoustic wave is simulated for different coordinates of the inhomogeneity in three-dimensional space. For convenience of the mathematical calculations, the inhomogeneity region is reduced to a point.

  11. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  12. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all distinct and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of distinct points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to solve the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric as its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis, or sinc, function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
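
    As a concrete illustration of the interpolation problem the dissertation studies, the sketch below fits a multiquadric RBF interpolant by solving the dense symmetric interpolation system directly (nonsingular for distinct points, by Micchelli's theorem). The function name, node placement, and shape parameter are illustrative:

```python
import numpy as np

def rbf_interpolate(centers, values, query, c=1.0):
    """Fit a multiquadric RBF interpolant s(x) = sum_j w_j * phi(|x - x_j|)
    through (centers, values), then evaluate it at the query points."""
    phi = lambda r: np.sqrt(r ** 2 + c ** 2)       # multiquadric basis
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    w = np.linalg.solve(phi(d), values)            # dense interpolation solve
    dq = np.linalg.norm(query[:, None] - centers[None, :], axis=-1)
    return phi(dq) @ w

x = np.linspace(0.0, 1.0, 9)[:, None]
y = np.sin(2 * np.pi * x[:, 0])
# The interpolant reproduces the data exactly at the nodes.
assert np.allclose(rbf_interpolate(x, y, x), y)
```

For n nodes this direct solve costs O(n^3); the dissertation's preconditioned conjugate-gradient results address exactly this cost on regular grids.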

  13. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.

  14. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
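
    The chi-square minimization over bin contents described above is, for a linear pixel model, a weighted linear least-squares problem. The sketch below illustrates this under the assumption pixels ≈ R @ spectrum; the response matrix `R`, noise level, and dimensions are stand-ins, not the SNAP pipeline:

```python
import numpy as np

def extract_spectrum(R, pixels, sigma):
    """Chi-square spectrum extraction for a linear pixel model
    pixels ≈ R @ spectrum: weighted least squares via normal equations."""
    W = R / sigma[:, None]                       # whiten rows by pixel noise
    return np.linalg.solve(W.T @ W, W.T @ (pixels / sigma))

rng = np.random.default_rng(1)
R = np.abs(rng.random((50, 8)))                  # stand-in pixel response matrix
true = np.linspace(1.0, 2.0, 8)                  # stand-in spectrum bin contents
sigma = np.full(50, 0.01)                        # stand-in pixel noise
pixels = R @ true + rng.normal(0.0, 0.01, 50)    # simulated observed pixels

print(np.round(extract_spectrum(R, pixels, sigma), 2))
```

Combining visible and infrared arms, as in the paper, amounts to stacking both sets of rows into a single `R` and `pixels` before the solve.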

  15. Areal and Temporal Analysis of Precipitation Patterns In Slovakia Using Spectral Analysis

    NASA Astrophysics Data System (ADS)

    Pishvaei, M. R.

    Harmonic analysis, as an objective method of studying precipitation seasonality, is applied to the 1901-2000 monthly precipitation averages at five stations in the lowland part of Slovakia with elevations below 800 m a.s.l. The significant harmonics of the long-term precipitation series have been computed separately for eight 30-year periods covering the 20th century, and their properties and variations are compared to the 100-year monthly precipitation averages. The selected results show that the first and second harmonics predominantly influence the annual distribution and climatic seasonal regimes of precipitation, contributing about 20% and 10% of the precipitation amplitude/pattern, respectively; these correspond to annual and half-year variations. The remaining harmonics each contribute less than 5% to the Fourier interpolation course. The maximum of the yearly precipitation course, which occurs approximately at the beginning of July, shifts to the middle of June because of phase changes. Some probable causes in terms of the Fourier components are discussed. In addition, a temporal analysis of the precipitation time series of the Hurbanovo Observatory, the longest observational series on the territory of Slovakia (with 130 years of precipitation records), has been performed individually, and possible meteorological factors responsible for the observed patterns are suggested. A comparison of the annual precipitation course obtained from analysis of daily precipitation totals and polynomial trends with the Fourier interpolation has also been made. Daily precipitation data for the latest period are compared for several stations in Slovakia as well. Only selected results are presented in the poster.
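
    The harmonic contributions quoted above (about 20% for the first harmonic and 10% for the second) can be computed from a 12-point series of monthly means with a discrete Fourier transform. The sketch below uses synthetic data; the numbers are illustrative, not the Slovak records:

```python
import numpy as np

def harmonic_contributions(monthly):
    """Amplitude of each harmonic of a 12-point annual cycle, and each
    harmonic's share of the total harmonic variance (the 'contribution'
    quoted in seasonality studies)."""
    coeffs = np.fft.rfft(monthly)              # harmonics 0..6 for n = 12
    amps = np.abs(coeffs) / len(monthly) * 2.0
    amps[0] /= 2.0                             # mean term is not doubled
    amps[-1] /= 2.0                            # neither is the Nyquist term
    var = amps[1:] ** 2
    return amps, var / var.sum()

# Synthetic monthly means (mm): a dominant annual wave plus a weaker
# semi-annual wave, mimicking first- and second-harmonic seasonality.
months = np.arange(12)
precip = (60.0 + 20.0 * np.cos(2 * np.pi * months / 12)
          + 6.0 * np.cos(4 * np.pi * months / 12))
amps, share = harmonic_contributions(precip)
print(share[0], share[1])    # first harmonic dominates, second is next
```

The phase angle of each `coeffs[k]` likewise gives the timing of that harmonic's maximum, which is how the July-to-June shift above would be quantified.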

  16. SGR 1822-1606: Constant Spin Period

    NASA Astrophysics Data System (ADS)

    Serim, M.; Baykal, A.; Inam, S. C.

    2011-08-01

    We have analyzed the light curve of the new source SGR 1822-1606 (Cummings et al., GCN 12159) using real-time data from RXTE observations. We extracted light curves for 11 pointings spanning about 20 days and performed pulse timing analysis using the harmonic representation of the pulses. Using cross correlation of the harmonic representations, we obtained pulse arrival times.

  17. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

    The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline curve fitting is then applied to the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient achieved by the presented algorithm was 0.972.
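
    The described pipeline (fiducial points, a 1.5 Hz high-pass reference, a cubic spline through the fiducial differences) can be sketched with SciPy as follows. This is a schematic reconstruction, not the authors' code; in particular, the fiducial points here are placed at a fixed stride rather than detected from the derivative extrema:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def remove_baseline(ecg, fs, fiducial_idx):
    """Baseline-drift removal in the spirit of the described method:
    high-pass the signal at 1.5 Hz, take original-minus-filtered at the
    fiducial points as baseline samples, fit a cubic spline through
    them, and subtract the fitted curve."""
    b, a = butter(2, 1.5 / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, ecg)
    baseline_samples = ecg[fiducial_idx] - filtered[fiducial_idx]
    baseline = CubicSpline(fiducial_idx, baseline_samples)(np.arange(len(ecg)))
    return ecg - baseline, baseline

# Toy signal: spiky 1 Hz "beats" riding on a slow 0.2 Hz drift.
fs = 250
t = np.arange(0, 10, 1.0 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)
ecg = np.sin(2 * np.pi * 1.0 * t) ** 20 + drift
fiducials = np.arange(0, len(ecg), fs // 2)   # stand-in fiducial positions
clean, baseline = remove_baseline(ecg, fs, fiducials)
```

On this toy signal the fitted spline tracks the slow drift while leaving the beat morphology intact, which is the property the fiducial-point selection is designed to preserve.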

  18. Erratum: Sources of Image Degradation in Fundamental and Harmonic Ultrasound Imaging: A Nonlinear, Full-Wave, Simulation Study

    PubMed Central

    Pinton, Gianmarco F.; Trahey, Gregg E.; Dahl, Jeremy J.

    2015-01-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain. This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSFs) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is due to reverberation from near-field structures. Compared with fundamental imaging, reverberation clutter in harmonic imaging is 27.1 dB lower. Simulated tissue with uniform velocity but unchanged impedance characteristics indicates that for harmonic imaging, the primary source of degradation is phase aberration. PMID:21693410

  19. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations using a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a search space, evaluates a cost function at each point by barycentric interpolation over the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. From this, the final optimal solution is obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the search space and converge to the global optimal solution.

  20. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. On a real interval it is well known that, even if f(x) is an analytic function, interpolating at equally spaced points can diverge, whereas interpolating at the zeros of the corresponding Chebyshev polynomial converges. When the Newton formula is used, however, this convergence result holds only at the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolation points are arranged in a suitable order, and (2) the size of the interval is 4.
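
    For reference, the Newton form itself (divided-difference coefficients followed by nested evaluation) can be sketched as below. Chebyshev nodes are used, per the abstract's convergence remark; the nodes are kept in their natural order, which is adequate for this small degree, whereas the abstract's stability result concerns reordering and interval scaling for larger problems:

```python
import numpy as np

def divided_differences(x, y):
    """Newton divided-difference coefficients: coef[k] = f[x_0,...,x_k]."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # Vectorized level-j update; the RHS uses pre-update values.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial by Horner-like nesting."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (t - x_nodes[k]) + coef[k]
    return p

# Chebyshev nodes on [-1, 1] keep the interpolant of an analytic f accurate.
n = 16
x = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
y = np.exp(x)
c = divided_differences(x, y)
print(abs(newton_eval(c, x, 0.3) - np.exp(0.3)))   # small interpolation error
```

Repeating the experiment with equally spaced nodes and a function like 1/(1 + 25x²) reproduces the Runge divergence the abstract alludes to.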

  1. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    PubMed

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol(-1) at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol(-1) dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol(-1) lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol(-1) lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol(-1) thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  3. Interpolation by fast Wigner transform for rapid calculations of magnetic resonance spectra from powders.

    PubMed

    Stevensson, Baltzar; Edén, Mattias

    2011-03-28

    We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations, which are exploited to determine the spectral frequencies and amplitudes from a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to ASG alone (besides greatly extending its scope of application), and by 1-2 orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled (13)C systems.

  4. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Extrapolation of the missing near-offset traces is therefore often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation is one class of methods developed for filling in trace gaps in shot gathers; it differs from conventional interpolation in that it utilizes information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results of conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolation results with those obtained using Radon transform (RT) based interpolation alone, and show that interferometry-type interpolation performs better at extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  5. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging interpolation algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in a directional Kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, making the technique feasible on almost any processor. Comparison between Kriging and other standard interpolation methods demonstrates more accurate estimation in less dense data files.
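
The record's key observation, that regular grid spacing reduces neighborhood search to integer arithmetic, can be sketched in a few lines. The following NumPy sketch is illustrative only (not the paper's directional kriging): the function names, the 2 × 2 neighborhood, and the inverse-distance weighting are assumptions for the example.

```python
import numpy as np

def grid_neighborhood(x, y, x0, y0, dx, dy, nx, ny, radius=1):
    # Regular spacing: the enclosing cell is found by integer division,
    # so no spatial search structure is needed.
    i = int((x - x0) // dx)
    j = int((y - y0) // dy)
    i0, i1 = max(i - radius + 1, 0), min(i + radius, nx - 1)
    j0, j1 = max(j - radius + 1, 0), min(j + radius, ny - 1)
    return [(ii, jj) for ii in range(i0, i1 + 1) for jj in range(j0, j1 + 1)]

def interpolate(grid, x, y, x0, y0, dx, dy):
    # Simple distance-weighted average over the O(1) neighborhood;
    # pairwise distances follow directly from the node indices.
    pts = grid_neighborhood(x, y, x0, y0, dx, dy, grid.shape[0], grid.shape[1])
    ws, vs = [], []
    for ii, jj in pts:
        px, py = x0 + ii * dx, y0 + jj * dy
        d = np.hypot(x - px, y - py)
        if d < 1e-12:          # query coincides with a grid node
            return grid[ii, jj]
        ws.append(1.0 / d**2)
        vs.append(grid[ii, jj])
    return np.dot(ws, vs) / np.sum(ws)
```

Because the neighborhood is a fixed-size window of the array, only that window needs to be resident in memory, which is the memory-saving idea the paper exploits.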

  6. Comparison of sEMG processing methods during whole-body vibration exercise.

    PubMed

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
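
The spectral linear interpolation recommended above (replace the spectrum around the vibration frequency and its harmonics by values interpolated from the band edges) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the vibration frequency, bandwidth, and test signal are invented, and magnitudes are interpolated while phases are kept.

```python
import numpy as np

def spectral_interpolation(signal, fs, f_vib, n_harmonics=5, half_width=1.0):
    # Replace spectral magnitudes within +/- half_width Hz of each vibration
    # harmonic by a straight line between the band-edge magnitudes.
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for k in range(1, n_harmonics + 1):
        fc = k * f_vib
        band = np.where((freqs >= fc - half_width) & (freqs <= fc + half_width))[0]
        if band.size == 0:
            continue
        lo = max(band[0] - 1, 0)
        hi = min(band[-1] + 1, len(spec) - 1)
        mags = np.interp(freqs[band], [freqs[lo], freqs[hi]],
                         [abs(spec[lo]), abs(spec[hi])])
        spec[band] = mags * np.exp(1j * np.angle(spec[band]))  # keep phase
    return np.fft.irfft(spec, n=len(signal))
```

A band-stop filter centered on the same harmonics would be the alternative the study finds comparable after bias correction.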

  7. Non-fluorescent nanoscopic monitoring of a single trapped nanoparticle via nonlinear point sources.

    PubMed

    Yoon, Seung Ju; Lee, Jungmin; Han, Sangyoon; Kim, Chang-Kyu; Ahn, Chi Won; Kim, Myung-Ki; Lee, Yong-Hee

    2018-06-07

    Detection of single nanoparticles or molecules has often relied on fluorescent schemes. However, fluorescence detection approaches limit the range of investigable nanoparticles or molecules. Here, we propose and demonstrate a non-fluorescent nanoscopic trapping and monitoring platform that can trap a single sub-5-nm particle and monitor it with a pair of floating nonlinear point sources. The resonant photon funnelling into an extremely small volume of ~5 × 5 × 7 nm³ through the three-dimensionally tapered 5-nm-gap plasmonic nanoantenna enables the trapping of a 4-nm CdSe/ZnS quantum dot with low intensity of a 1560-nm continuous-wave laser, and the pumping of 1560-nm femtosecond laser pulses creates strong background-free second-harmonic point illumination sources at the two vertices of the nanoantenna. Under the stable trapping conditions, intermittent but intense nonlinear optical spikes are observed on top of the second-harmonic signal plateau, which is identified as the 3.0-Hz Kramers hopping of the quantum dot trapped in the 5-nm gap.

  8. A bivariate rational interpolation with a bi-quadratic denominator

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu

    2006-10-01

    In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetric property. The concept of integral weight coefficients of the interpolation is introduced; these describe the "weight" of the interpolation points in the local interpolating region.

  9. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  10. Double simple-harmonic-oscillator formulation of the thermal equilibrium of a fluid interacting with a coherent source of phonons

    NASA Technical Reports Server (NTRS)

    Defacio, B.; Vannevel, Alan; Brander, O.

    1993-01-01

    A formulation is given for a collection of phonons (sound) in a fluid at a non-zero temperature which uses the simple harmonic oscillator twice; one to give a stochastic thermal 'noise' process and the other which generates a coherent Glauber state of phonons. Simple thermodynamic observables are calculated and the acoustic two point function, 'contrast' is presented. The role of 'coherence' in an equilibrium system is clarified by these results and the simple harmonic oscillator is a key structure in both the formulation and the calculations.

  11. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
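
The fractal size distribution above, N(>R) ∝ R⁻², can be drawn by inverse-transform sampling of the corresponding truncated power law. The sketch below is an illustration under assumed minimum and maximum subevent radii (the record does not give the truncation limits, so `r_min` and `r_max` are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_subevent_radii(n, r_min, r_max):
    # N(>R) proportional to R^-2 on [r_min, r_max]:
    # CDF F(R) = (r_min^-2 - R^-2) / (r_min^-2 - r_max^-2),
    # inverted in closed form for inverse-transform sampling.
    u = rng.uniform(size=n)
    a, b = r_min ** -2.0, r_max ** -2.0
    return (a - u * (a - b)) ** -0.5
```

Summing the subevent areas until they fill the target-event area, as the model requires, would be a stopping rule layered on top of this sampler.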

  12. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy, but shows some artifacts in shaded-relief images. (b) Simplicial interpolator - uses the plane through the three points surrounding the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and does a weighted least-squares surface fit.
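
Option (b), the simplicial interpolator, amounts to fitting the plane through three surrounding samples and evaluating it at the query point. A minimal sketch (function name and argument layout are assumptions for the example):

```python
import numpy as np

def simplicial_interpolate(p1, p2, p3, q):
    # Fit the plane z = a*x + b*y + c through three (x, y, z) samples,
    # then evaluate it at the query point q = (x, y).
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    z = np.array([p[2] for p in (p1, p2, p3)])
    a, b, c = np.linalg.solve(A, z)
    return a * q[0] + b * q[1] + c
```

In a real regridder the three points would come from a triangulation of the scattered samples, with the triangle containing `q` selected first.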

  13. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  14. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
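
The mean value coordinates analyzed in the two records above have a closed form (Floater's construction): for a point x inside a polygon with vertices v_i, the weight of v_i is (tan(α_{i-1}/2) + tan(α_i/2)) / |v_i − x|, where α_i is the angle at x in the triangle (x, v_i, v_{i+1}). A NumPy sketch for a point strictly inside a counter-clockwise convex polygon (degenerate boundary cases are not handled):

```python
import numpy as np

def mean_value_coordinates(verts, x):
    # Floater's mean value coordinates; verts is (n, 2), CCW order,
    # x strictly inside the polygon.
    verts = np.asarray(verts, float)
    d = verts - np.asarray(x, float)
    r = np.linalg.norm(d, axis=1)
    n = len(verts)
    t = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        # signed angle at x between v_i and v_{i+1}
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        ang = np.arctan2(cross, d[i] @ d[j])
        t[i] = np.tan(ang / 2.0)
    w = np.array([(t[i - 1] + t[i]) / r[i] for i in range(n)])
    return w / w.sum()   # normalize to a partition of unity
```

Linear precision (the property inherited by Poisson coordinates in the head record) means the coordinates reproduce the point itself: Σ λ_i v_i = x.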

  15. Singularity-driven second- and third-harmonic generation at ε-near-zero crossing points

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, M. A.; Ceglia, D. de; Ciattoni, A.

    We show an alternative path to efficient second- and third-harmonic generation in proximity of the zero crossing points of the dielectric permittivity in conjunction with low absorption. Under these circumstances, any material, either natural or artificial, will show similar degrees of field enhancement followed by strong harmonic generation, without resorting to any resonant mechanism. The results presented in this paper provide a general demonstration of the potential that the zero-crossing-point condition holds for nonlinear optical phenomena. We investigate a generic Lorentz medium and demonstrate that a singularity-driven enhancement of the electric field may be achieved even in extremely thin layers of material. We also discuss the role of nonlinear surface sources in a realistic scenario where a 20-nm layer of CaF₂ is excited at 21 µm, where ε ≈ 0. Finally, we show similar behavior in an artificial composite material that includes absorbing dyes in the visible range, provide a general tool for the improvement of harmonic generation using the ε ≈ 0 condition, and illustrate that this singularity-driven enhancement of the field lowers the thresholds for a plethora of nonlinear optical phenomena.

  16. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s⁻¹ for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
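
The per-grid-point Monte Carlo step described above can be sketched as follows. The PET formula here is a made-up stand-in (the study uses a real evapotranspiration model), the error terms are drawn independently for brevity (the study additionally models correlations among the three input errors), and all names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pet_model(temp_c, rh_pct, wind_ms):
    # Hypothetical stand-in for the PET model, illustration only.
    return 0.3 * temp_c + 0.05 * wind_ms * (100.0 - rh_pct)

def propagate(temp, rh, wind, sd_temp, sd_rh, sd_wind, n=100):
    # Perturb each kriged input by its kriging SD and collect the
    # distribution of model outputs at one grid point.
    t = temp + rng.normal(0.0, sd_temp, n)
    h = np.clip(rh + rng.normal(0.0, sd_rh, n), 0.0, 100.0)
    w = np.maximum(wind + rng.normal(0.0, sd_wind, n), 0.0)
    out = pet_model(t, h, w)
    return out.mean(), out.std() / out.mean()   # mean and CV
```

Running this at every grid point, with the local kriging SDs, yields the maps of PET means and CVs the study reports.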

  17. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

    Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. Formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to the parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach for quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ₀c₀², is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case, which has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.

  18. Parametric Identification of Nonlinear Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Feeny, Brian

    2002-01-01

    In this project, we looked at the application of harmonic balancing as a tool for identifying parameters (HBID) in nonlinear dynamical systems with chaotic responses. The main idea is to balance the harmonics of periodic orbits extracted from measurements of each coordinate during a chaotic response. The periodic orbits are taken to be approximate solutions to the differential equations that model the system, the form of the differential equations being known but with unknown parameters to be identified. Below we summarize the main points addressed in this work. The details of the work are attached as drafts of papers, and a thesis, in the appendix. Our study involved the following three parts: (1) Application of the harmonic balance to a simulation case in which the differential equation model has a known form for its nonlinear terms, in contrast to a model that uses power series or interpolating functions to represent the nonlinear terms. We chose a pendulum, which has sinusoidal nonlinearities. (2) Application of the harmonic balance to an experimental system with known nonlinear forms. We chose a double pendulum, for which chaotic responses were easily generated; this confronted us with a two-degree-of-freedom system, which brought forth challenging issues. (3) A study of alternative reconstruction methods. Reconstruction of the phase space is necessary for the extraction of periodic orbits from the chaotic responses, which is needed in this work. Characterization of a nonlinear system is also done in the reconstructed phase space; such characterizations are needed to compare models with experiments. Finally, some nonlinear prediction methods can be applied in the reconstructed phase space. We developed two reconstruction methods that may be considered if the common method (the method of delays) is not applicable.

  19. The NASA/MSFC global reference atmospheric model: MOD 3 (with spherical harmonic wind model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Fletcher, G. R.; Gramling, F. E.; Pace, W. B.

    1980-01-01

    Improvements to the global reference atmospheric model (GRAM) are described. The basic model includes monthly mean values of pressure, density, temperature, and geostrophic winds, as well as quasi-biennial and small- and large-scale random perturbations. A spherical harmonic wind model for the 25 to 90 km height range is included. Below 25 km and above 90 km, the GRAM program uses the geostrophic wind equations and pressure data to compute the mean wind. At the altitudes where the geostrophic wind relations are used, an interpolation scheme is employed for estimating winds at low latitudes, where the geostrophic wind relations begin to break down. Several sample wind profiles, as computed by the spherical harmonic model, are given. User and programmer manuals are presented.

  20. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for the full understanding of their performance and safety. Missing parts of the monitoring data record affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data of correlated measuring points are selected from the 3 months of the season in which the missing data occur. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than 6 correlated points are used. The stress baseline value of the construction step should be calculated before interpolating missing data in the construction stage, in which case the average error is within 10%. The interpolation error for continuously missing data is slightly larger than that for discretely missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data is restored to verify the validity of the method.
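
The simple-regression case above (fill a gap in one channel from a strongly correlated channel) can be sketched in a few lines. This is an illustrative NumPy sketch under the record's stated condition of a correlation coefficient of at least 0.9; the function name and single-reference-channel setup are assumptions for the example.

```python
import numpy as np

def restore_missing(ref, target, missing_mask):
    # Fit target ~ a*ref + b on the intact samples, then fill the gaps
    # from the correlated reference channel (use only when the channels'
    # correlation coefficient is high, e.g. |r| >= 0.9 per the record).
    ok = ~missing_mask
    a, b = np.polyfit(ref[ok], target[ok], 1)
    filled = target.copy()
    filled[missing_mask] = a * ref[missing_mask] + b
    return filled
```

The multiple-regression variant would stack several reference channels as columns and solve a least-squares system instead of `polyfit`.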

  1. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures including kriging, moving average or Inverse Distance Weighting (IDW) and nearest point without the necessary recourse to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range), using the PAleontological STatistics (PAST3) software, before the mean values were interpolated in the selected GIS software for each variable using each of kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas the value of conductivity was interpolated to vary from 120.1 to 219.5 µS cm⁻¹ with kriging interpolation, it varied from 105.6 to 220.0 µS cm⁻¹ and from 135.0 to 173.9 µS cm⁻¹ with nearest point and moving average interpolations, respectively (Figure 2). It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the Spherical model was assumed by default for all the distributions in the software, such that the nugget value was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions based on modelling inferences.

  2. Temporal and frequency characteristics of a narrow light beam in sea water.

    PubMed

    Luchinin, Alexander G; Kirillin, Mikhail Yu

    2016-09-20

    The structure of a light field in sea water excited by a unidirectional point-sized pulsed source is studied by Monte Carlo technique. The pulse shape registered at the distances up to 120 m from the source on the beam axis and in its axial region is calculated with a time resolution of 1 ps. It is shown that with the increase of the distance from the source the pulse splits into two parts formed by components of various scattering orders. Frequency and phase responses of the beam are calculated by means of the fast Fourier transform. It is also shown that for higher frequencies, the attenuation of harmonic components of the field is larger. In the range of parameters corresponding to pulse splitting on the beam axis, the attenuation of harmonic components in particular spectral ranges exceeds the attenuation predicted by Bouguer law. In this case, the transverse distribution of the amplitudes of these harmonics is minimal on the beam axis.

  3. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
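
To make the masked-sample infill problem concrete, here is a deliberately simple NumPy stand-in: Jacobi diffusion infill seeded with the global mean (the baseline the record compares against). This is not the paper's wavelet-smoothing scheme, only an illustration of filling masked grid samples smoothly from the defined boundary values.

```python
import numpy as np

def diffusion_infill(data, mask, n_iter=500):
    # mask is True where samples are undefined. Seed masked cells with the
    # global mean of the defined samples (the record's comparison baseline),
    # then relax: each sweep replaces masked samples by the mean of their
    # 4-neighbours while defined samples stay fixed.
    f = np.where(mask, data[~mask].mean(), data).astype(float)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = avg[mask]
    return f
```

The wavelet approach reaches a comparable smooth infill with linear cost in the sample count, whereas this relaxation needs many sweeps for large holes; that scalability gap is the paper's point.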

  4. On the interpolation of volumetric water content in research catchments

    NASA Astrophysics Data System (ADS)

    Dlamini, Phesheya; Chaplot, Vincent

    Digital Soil Mapping (DSM) is widely used in the environmental sciences because of its accuracy and efficiency in producing soil maps compared to traditional soil mapping. Numerous studies have investigated how the sampling density and the interpolation process of data points affect prediction quality. While the interpolation process is straightforward for primary attributes such as soil gravimetric water content (θg) and soil bulk density (ρb), the DSM of volumetric water content (θv), the product of θg and ρb, may either involve direct interpolation of θv (approach 1) or independent interpolation of ρb and θg data points and subsequent multiplication of the ρb and θg maps (approach 2). The main objective of this study was to compare the accuracy of these two mapping approaches for θv. A 23 ha grassland catchment in KwaZulu-Natal, South Africa was selected for this study. A total of 317 data points were randomly selected and sampled during the dry season in the topsoil (0-0.05 m) for θg and ρb estimation. Data points were interpolated following approaches 1 and 2, using inverse distance weighting with 3 or 12 neighboring points (IDW3; IDW12), regular spline with tension (RST) and ordinary kriging (OK). Based on an independent validation set of 70 data points, OK was the best interpolator for ρb (mean absolute error, MAE, of 0.081 g cm-3), while θg was best estimated using IDW12 (MAE = 1.697%) and θv by IDW3 (MAE = 1.814%). It was found that approach 1 underestimated θv. Approach 2 tended to overestimate θv, but reduced the prediction bias by an average of 37% and improved the prediction accuracy by only 1.3% compared to approach 1. Such a benefit of approach 2 was unexpected considering that a higher sampling density (∼14 data points ha-1 in the present study) tends to minimize the differences between interpolation techniques and approaches. In the context of much lower sampling densities, as generally encountered in environmental studies, one can thus expect approach 2 to yield significantly greater accuracy than approach 1. Approach 2 therefore seems promising and can be further tested for DSM of other secondary variables.
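
The two mapping approaches compared above can be sketched with a basic k-neighbor IDW interpolator. This NumPy sketch is illustrative only (the study also uses spline and kriging interpolators); the sample values below are invented constants, and all names are assumptions for the example.

```python
import numpy as np

def idw(xy_known, values, xy_query, k=3, power=2.0):
    # Inverse distance weighting with the k nearest data points (IDW3 here).
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for n, q in enumerate(np.asarray(xy_query, float)):
        dist = np.linalg.norm(xy_known - q, axis=1)
        idx = np.argsort(dist)[:k]
        if dist[idx[0]] < 1e-12:          # query hits a data point
            out[n] = values[idx[0]]
            continue
        w = dist[idx] ** -power
        out[n] = (w @ values[idx]) / w.sum()
    return out

# Approach 1: interpolate theta_v = theta_g * rho_b directly at the points.
# Approach 2: interpolate theta_g and rho_b separately, multiply the maps.
```

On non-constant fields the two approaches differ because the weighted average of a product is not the product of the weighted averages, which is the source of the bias difference the study measures.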

  5. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of the sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, and the boundary points are treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without being overly convex. The new method is simple to compute and has good local properties, making it applicable to shape fitting of mines, exploratory wells, and so on. The resulting surface is demonstrated in experiments.

  6. Simulation of 100-300 GHz solid-state harmonic sources

    NASA Technical Reports Server (NTRS)

    Zybura, Michael F.; Jones, J. Robert; Jones, Stephen H.; Tait, Gregory B.

    1995-01-01

    Accurate and efficient simulations of the large-signal time-dependent characteristics of second-harmonic Transferred Electron Oscillators (TEO's) and Heterostructure Barrier Varactor (HBV) frequency triplers have been obtained. This is accomplished by using a novel and efficient harmonic-balance circuit analysis technique which facilitates the integration of physics-based hydrodynamic device simulators. The integrated hydrodynamic device/harmonic-balance circuit simulators allow TEO and HBV circuits to be co-designed from both a device and a circuit point of view. Comparisons have been made with published experimental data for both TEO's and HBV's. For TEO's, excellent correlation has been obtained at 140 GHz and 188 GHz in second-harmonic operation. Excellent correlation has also been obtained for HBV frequency triplers operating near 200 GHz. For HBV's, both a lumped quasi-static equivalent circuit model and the hydrodynamic device simulator have been linked to the harmonic-balance circuit simulator. This comparison illustrates the importance of representing active devices with physics-based numerical device models rather than analytical device models.

  7. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  8. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm for modern Graphics Processing Units (GPUs). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by distance-weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even (uniform) grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
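    The AIDW idea, choosing the IDW power parameter from the distances to the k nearest neighbors, can be sketched in a few lines of serial Python. The density-to-power rule below is a hypothetical placeholder, not the mapping from the paper, and no GPU parallelism is shown.

```python
import math

def aidw(points, values, q, k=3, eps=1e-12):
    """Adaptive IDW sketch: the power parameter grows with local sparsity.
    points are (x, y) sample sites; q is the query location. The
    density-to-power mapping below is an assumed placeholder, not the
    rule from the paper."""
    d = sorted(math.dist(p, q) for p in points)
    mean_knn = sum(d[:k]) / k            # mean kNN distance = local sparsity
    power = 1.0 + min(mean_knn, 2.0)     # assumed adaptive rule
    num = den = 0.0
    for p, v in zip(points, values):
        w = 1.0 / (math.dist(p, q) ** power + eps)
        num += w * v
        den += w
    return num / den

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [1.0, 2.0, 3.0, 4.0]
print(aidw(pts, vals, (0.5, 0.5)))  # centre of symmetric data -> mean, 2.5
```

    The GPU version assigns one interpolated point per thread; the kNN search over the uniform grid replaces the brute-force distance scan above.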

  9. Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki

    We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, which was one order of magnitude finer than the harmonic tone interval of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.

  10. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...

  11. Tracking Helicopters with a Seismic Array

    NASA Astrophysics Data System (ADS)

    Eibl, Eva P. S.; Lokmer, Ivan; Bean, Christopher J.; Akerlie, Eggert

    2015-04-01

    We observed that the pressure or acoustic wave created by the rotor blades of a helicopter can couple to the ground even at 30 km distance, where it creates a signal strong enough to be detected by a seismometer. The signal is harmonic tremor with a downgliding fundamental frequency, with an inflection point at, e.g., 14 Hz, and two equally spaced overtones up to the Nyquist frequency of 50 Hz. No difference in amplitude between the fundamental frequency and the higher harmonics was observed. Such a signature is a consequence of the regularly repeating pressure pulses generated by the helicopter's rotor blades. The signal was recorded by a seven-station broadband array with an aperture of 1.6 km. Our spacing is close enough to record the signal at all stations and far enough to observe traveltime differences. The separation of the spectral lines corresponds to the time interval between the repeating sources. The highlighted harmonics contain information about the spectral content of the single source, as our signal corresponds to the convolution of an infinite comb function and a single pulse. Since all harmonics are visible and have the same amplitude up to the Nyquist frequency, we can deduce that the frequency content of the single pulse is flat, i.e., it is effectively a delta function up to the Nyquist frequency. We perform a detailed spectral and location analysis of the signal, and compare our results with the known information on the helicopter's speed, location, blade rotation frequency, and number of blades. This analysis is based on the characteristic shape of the curve, i.e., the speed of the gliding, the minimum and maximum fundamental frequency, the amplitudes at the inflection points at different stations, and the traveltimes deduced from the inflection points at different stations. This observation has educational value, because the same principle could be used for the analysis of volcanic harmonic tremor. Harmonic volcanic tremor usually has fundamental frequencies below 10 Hz, but frequency downgliding and upgliding up to 30 Hz have been observed, e.g., on Redoubt volcano. Due to the characteristic shape of the helicopter signal it is nevertheless rather unlikely that this signal would be mistaken for volcanic tremor. The helicopter gives us a robust way of testing the method and a possible application of the method to volcanic harmonic tremor.
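    The convolution argument above (infinite comb times single pulse) can be checked numerically: a train of delta-like pulses yields harmonics of equal amplitude. A minimal DFT sketch, with toy sample counts chosen for illustration:

```python
import cmath

# A train of identical short pulses: its spectrum is a comb weighted by
# the single-pulse spectrum. For a delta-like pulse, all harmonics share
# the same amplitude, as in the helicopter tremor signal.
N, period = 64, 8
signal = [1.0 if n % period == 0 else 0.0 for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of sequence x."""
    return abs(sum(v * cmath.exp(-2j * cmath.pi * k * n / len(x))
                   for n, v in enumerate(x)))

# harmonics sit at multiples of N/period; all have the same magnitude
harmonics = [dft_mag(signal, k * N // period) for k in range(1, 4)]
print(harmonics)
```

    A broader pulse (e.g., a boxcar several samples wide) would instead impose a sinc-shaped envelope on the same comb, which is exactly how the harmonic amplitudes encode the single-pulse spectrum.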

  12. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
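    The bilinear interpolation of head between four cell centers proposed above can be sketched as follows (pure Python, illustrative head values only):

```python
def bilinear(h00, h10, h01, h11, fx, fy):
    """Bilinear interpolation between four cell-centre head values.
    (fx, fy) are fractional offsets in [0, 1] within the cell formed
    by the four centres; h00 is lower-left, h11 upper-right."""
    bottom = h00 * (1 - fx) + h10 * fx   # interpolate along the lower edge
    top = h01 * (1 - fx) + h11 * fx      # interpolate along the upper edge
    return bottom * (1 - fy) + top * fy  # blend the two edges vertically

# head at the centre of a cell whose corner heads are 10, 12, 14, 16 m
print(bilinear(10.0, 12.0, 14.0, 16.0, 0.5, 0.5))  # 13.0
```

    The flow components along rows and columns use the simpler 1D linear form, since each component is interpolated only along the perimeter segment it crosses.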

  13. A rational interpolation method to compute frequency response

    NASA Technical Reports Server (NTRS)

    Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

    1993-01-01

    A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.
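    The paper's product formulation of finite differences is not reproduced here, but the general flavor of rational interpolation that avoids near-equal-valued subtraction can be illustrated with a generic barycentric evaluation scheme (an assumed stand-in, not the authors' algorithm):

```python
def barycentric_eval(x_nodes, y_nodes, weights, x):
    """Barycentric evaluation of an interpolant: no divided differences
    are formed, so no subtraction of near-equal quantities occurs.
    Illustrative only; the paper's product-of-finite-differences
    formulation is different."""
    num = den = 0.0
    for xi, yi, wi in zip(x_nodes, y_nodes, weights):
        if x == xi:                  # exact node hit: return the data value
            return yi
        c = wi / (x - xi)
        num += c * yi
        den += c
    return num / den

# weights (-1)^i * C(2, i) reproduce the degree-2 interpolating polynomial
nodes, data, w = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0], [1.0, -2.0, 1.0]
print(barycentric_eval(nodes, data, w, 1.5))  # x^2 data -> 2.25
```

    For frequency-response work the nodes would be sample frequencies and the data complex response values; the division-free-of-cancellation structure is what matters here.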

  14. An objective isobaric/isentropic technique for upper air analysis

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Endlich, R. M.; Ehernberger, L. J.

    1981-01-01

    An objective meteorological analysis technique is presented whereby both horizontal and vertical upper air analyses are performed. The process used to interpolate grid-point values from the upper-air station data is the same as for grid points on both an isobaric surface and a vertical cross-sectional plane. The nearby data surrounding each grid point are used in the interpolation by means of an anisotropic weighting scheme, which is described. The interpolation for a grid-point potential temperature is performed isobarically; whereas wind, mixing-ratio, and pressure height values are interpolated from data that lie on the isentropic surface that passes through the grid point. Two versions (A and B) of the technique are evaluated by qualitatively comparing computer analyses with subjective handdrawn analyses. The objective products of version A generally have fair correspondence with the subjective analyses and with the station data, and depicted the structure of the upper fronts, tropopauses, and jet streams fairly well. The version B objective products correspond more closely to the subjective analyses, and show the same strong gradients across the upper front with only minor smoothing.

  15. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on real estate index. This method combines radial and polynomial basis functions, which can guarantee the interpolation scheme with Kronecker property and effectively improve accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.

  16. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighting (IDW) and Radial Basis Functions (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
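    The RMSE evaluation described above is typically done by leave-one-out cross-validation: predict each station from the others and score the residuals. A minimal sketch, using a plain IDW estimator and hypothetical station data (the study's RBF models are not reproduced):

```python
import math

def idw(points, values, q, power=2.0, eps=1e-12):
    """Plain inverse-distance-weighted estimate at query location q."""
    num = den = 0.0
    for p, v in zip(points, values):
        w = 1.0 / (math.dist(p, q) ** power + eps)
        num += w * v
        den += w
    return num / den

def loo_rmse(points, values):
    """Leave-one-out cross-validation: predict each station from the
    others and report the RMSE of the residuals."""
    errs = []
    for i in range(len(points)):
        rest_p = points[:i] + points[i + 1:]
        rest_v = values[:i] + values[i + 1:]
        errs.append(idw(rest_p, rest_v, points[i]) - values[i])
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# hypothetical monthly-mean temperatures (deg C) at four station locations
stations = [(0, 0), (1, 0), (0, 1), (1, 1)]
temps = [27.0, 27.5, 26.8, 27.2]
print(round(loo_rmse(stations, temps), 3))
```

    Swapping `idw` for a thin plate spline or multiquadric RBF fit gives the month-by-month comparison reported in the paper.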

  17. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  18. Elementary Theoretical Forms for the Spatial Power Spectrum of Earth's Crustal Magnetic Field

    NASA Technical Reports Server (NTRS)

    Voorhies, C.

    1998-01-01

    The magnetic field produced by magnetization in Earth's crust and lithosphere can be distinguished from the field produced by electric currents in Earth's core because the spatial magnetic power spectrum of the crustal field differs from that of the core field. Theoretical forms for the spectrum of the crustal field are derived by treating each magnetic domain in the crust as the point source of a dipole field. The geologic null-hypothesis that such moments are uncorrelated is used to obtain the magnetic spectrum expected from a randomly magnetized, or unstructured, spherical crust of negligible thickness. This simplest spectral form is modified to allow for uniform crustal thickness, ellipsoidality, and the polarization of domains by a periodically reversing, geocentric axial dipole field from Earth's core. Such spectra are intended to describe the background crustal field. Magnetic anomalies due to correlated magnetization within coherent geologic structures may well be superimposed upon this background; yet representing each such anomaly with a single point dipole may lead to similar spectral forms. Results from attempts to fit these forms to observational spectra, determined via spherical harmonic analysis of MAGSAT data, are summarized in terms of amplitude, source depth, and misfit. Each theoretical spectrum reduces to a source factor multiplied by the usual exponential function of spherical harmonic degree n due to geometric attenuation with altitude above the source layer. The source factors always vary with n and are approximately proportional to n(exp 3) for degrees 12 through 120. The theoretical spectra are therefore not directly proportional to an exponential function of spherical harmonic degree n. There is no radius at which these spectra are flat, level, or otherwise independent of n.
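    For reference, the observational spectra mentioned above are spatial (Lowes-Mauersberger) power spectra; the standard form (a textbook result, not stated in the abstract) makes the geometric attenuation factor explicit:

```latex
R_n(r) = \left(\frac{a}{r}\right)^{2n+4}(n+1)\sum_{m=0}^{n}\left[\left(g_n^m\right)^2 + \left(h_n^m\right)^2\right]
```

    where a is Earth's reference radius, r >= a the evaluation radius, and g_n^m, h_n^m the Gauss coefficients of degree n and order m. The (a/r)^(2n+4) factor is the "usual exponential function of spherical harmonic degree n" from geometric attenuation; the crustal source factors multiply it.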

  19. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.

  20. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven year period (2000-2008). Two univariate methods (inverse distance weighting and splines with regularization and tension) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between the rainfall and covariates at daily scales in this region. Inverse distance weighting produced better results than the splines. For the days with extreme or high rainfall, spatially and quantitatively, the correlation between observed and interpolated estimates appeared to be high (r2 ~ 0.6, RMSE ~ 10 mm), although for low rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis.
Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.

  1. Separation of Main and Tail Rotor Noise Sources from Ground-Based Acoustic Measurements Using Time-Domain De-Dopplerization

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric II; Schmitz, Fredric H.

    2009-01-01

    A new method of separating the contributions of helicopter main and tail rotor noise sources is presented, making use of ground-based acoustic measurements. The method employs time-domain de-Dopplerization to transform the acoustic pressure time-history data collected from an array of ground-based microphones to the equivalent time-history signals observed by an array of virtual inflight microphones traveling with the helicopter. The now-stationary signals observed by the virtual microphones are then periodically averaged with the main and tail rotor once per revolution triggers. The averaging process suppresses noise which is not periodic with the respective rotor, allowing for the separation of main and tail rotor pressure time-histories. The averaged measurements are then interpolated across the range of directivity angles captured by the microphone array in order to generate separate acoustic hemispheres for the main and tail rotor noise sources. The new method is successfully applied to ground-based microphone measurements of a Bell 206B3 helicopter and demonstrates the strong directivity characteristics of harmonic noise radiation from both the main and tail rotors of that helicopter.
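    The once-per-revolution averaging step can be illustrated with a toy synchronous-averaging sketch: components periodic with the rotor survive, while uncorrelated noise is suppressed by roughly 1/sqrt(N_revs). Sample counts and noise level below are assumptions, not the authors' processing chain:

```python
import math
import random

# Toy rotor signal: one sinusoidal harmonic per revolution plus white noise.
random.seed(0)
samples_per_rev, revs = 32, 200
signal = [math.sin(2 * math.pi * n / samples_per_rev) + random.gauss(0, 1.0)
          for n in range(samples_per_rev * revs)]

# Synchronous (once-per-rev triggered) average over all revolutions.
avg = [sum(signal[r * samples_per_rev + k] for r in range(revs)) / revs
       for k in range(samples_per_rev)]

# Residual noise power after averaging; roughly 1/revs of the unit input.
noise_power = sum((a - math.sin(2 * math.pi * k / samples_per_rev)) ** 2
                  for k, a in enumerate(avg)) / samples_per_rev
print(noise_power)
```

    Separating main and tail rotors works because each rotor's trigger aligns only its own periodic content; the other rotor's contribution is incoherent over the average and decays like the noise.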

  2. Mode matching in multiresonant plasmonic nanoantennas for enhanced second harmonic generation

    NASA Astrophysics Data System (ADS)

    Celebrano, Michele; Wu, Xiaofei; Baselli, Milena; Großmann, Swen; Biagioni, Paolo; Locatelli, Andrea; de Angelis, Costantino; Cerullo, Giulio; Osellame, Roberto; Hecht, Bert; Duò, Lamberto; Ciccacci, Franco; Finazzi, Marco

    2015-05-01

    Boosting nonlinear frequency conversion in extremely confined volumes remains a challenge in nano-optics research, but can enable applications in nanomedicine, photocatalysis and background-free biosensing. To obtain brighter nonlinear nanoscale sources, approaches that enhance the electromagnetic field intensity and counter the lack of phase matching in nanoplasmonic systems are often employed. However, the high degree of symmetry in the crystalline structure of plasmonic materials (metals in particular) and in nanoantenna designs strongly quenches second harmonic generation. Here, we describe doubly-resonant single-crystalline gold nanostructures with no axial symmetry displaying spatial mode overlap at both the excitation and second harmonic wavelengths. The combination of these features allows the attainment of a nonlinear coefficient for second harmonic generation of ~5 × 10^-10 W^-1, enabling a second harmonic photon yield higher than 3 × 10^6 photons per second. Theoretical estimations point toward the use of our nonlinear plasmonic nanoantennas as efficient platforms for label-free molecular sensing.

  3. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal area at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.

  4. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation to be applied in FLEXPART later between them. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
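    The core idea, inserting subgrid points so that piecewise-linear interpolation reproduces each interval's integral, can be sketched for a single cell. This is a simplified one-cell illustration under assumed boundary values, not the full FLEXPART remapping algorithm:

```python
def remap_cell(P, h, f_left, f_right):
    """One cell of an integral-preserving remap: insert one subgrid point
    at the cell midpoint so that the two linear segments reproduce the
    cell-integrated precipitation P over interval length h exactly.
    Trapezoid integral of the two halves: h * (f_left + 2m + f_right) / 4.
    The clip keeps non-negativity (exact conservation is then sacrificed
    in the rare clipped case; the full algorithm handles this jointly)."""
    m = (4.0 * P / h - f_left - f_right) / 2.0
    return max(0.0, m)

# cell with integral P = 2.0 over h = 1.0 and assumed boundary values
m = remap_cell(2.0, 1.0, 1.0, 1.5)
# verify: the two-segment trapezoid integral recovers P
integral = 1.0 * (1.0 + 2.0 * m + 1.5) / 4.0
print(m, integral)  # integral == 2.0
```

    The real algorithm additionally couples neighbouring cells so the boundary values themselves are degrees of freedom, giving continuity of the time series as well as conservation.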

  5. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function g_L(r) and 2D anisotropy function F(r,θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources on the same grid. Linear interpolation of g_L(r) gave differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
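    The logarithmic-linear interpolation of g_L(r) evaluated above (linear in r, logarithmic in g_L) can be sketched as follows; the mesh values here are hypothetical placeholders, not the Monte Carlo data:

```python
import math

def loglin_interp(r_grid, g_grid, r):
    """Logarithmic-linear interpolation of the radial dose function:
    linear in r, linear in log(g_L). One of the two schemes evaluated
    in the study (the other is plain linear-linear)."""
    for i in range(len(r_grid) - 1):
        if r_grid[i] <= r <= r_grid[i + 1]:
            t = (r - r_grid[i]) / (r_grid[i + 1] - r_grid[i])
            return math.exp((1 - t) * math.log(g_grid[i])
                            + t * math.log(g_grid[i + 1]))
    raise ValueError("r outside tabulated mesh")

# hypothetical g_L(r) values on part of the radial mesh from the study
r_mesh = [0.25, 0.5, 0.75, 1.0, 1.5]
g_mesh = [1.02, 1.01, 1.005, 1.0, 0.99]
print(loglin_interp(r_mesh, g_mesh, 0.6))
```

    Bilinear interpolation of F(r,θ) follows the same pattern in two variables, locating the bracketing (r, θ) cell and blending its four tabulated corners.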

  6. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  7. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. 
    The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.

  8. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management

    NASA Astrophysics Data System (ADS)

    Yuval; Rimon, Y.; Graber, E. R.; Furman, A.

    2013-07-01

A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets, using them to produce maps that present the pollution plumes while also delineating the clean areas that are fit for production. A method for assessing the quality of the mapping in a way suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. That inherent trade-off between interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations.
The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli shoreline.
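The core idea of inverse distance weighting with a circular inclusion zone can be sketched as follows. The radius and power parameters here are illustrative assumptions, not the paper's calibrated values; grid points with no observation inside the zone are left as NaN, reflecting the accuracy/coverage trade-off described above:

```python
import numpy as np

# Sketch of local inverse distance weighting (IDW) with a circular inclusion
# zone. Parameters (radius, power) are assumed for illustration only.
def idw_local(obs_xy, obs_val, grid_xy, radius=100.0, power=2.0):
    obs_xy = np.asarray(obs_xy, float)
    obs_val = np.asarray(obs_val, float)
    out = np.full(len(grid_xy), np.nan)
    for i, g in enumerate(np.asarray(grid_xy, float)):
        d = np.sqrt(((obs_xy - g) ** 2).sum(axis=1))
        inside = d < radius
        if not inside.any():
            continue                          # no coverage: leave NaN
        if (d[inside] == 0).any():            # grid point coincides with an observation
            out[i] = obs_val[inside][d[inside] == 0][0]
            continue
        w = d[inside] ** -power
        out[i] = np.sum(w * obs_val[inside]) / np.sum(w)
    return out

obs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
vals = [0.0, 100.0, 100.0]                    # e.g. concentrations: one clean well
grid = [(1.0, 0.0), (500.0, 500.0)]           # second point far outside any zone
z = idw_local(obs, vals, grid, radius=50.0)
```

An elliptical inclusion zone follows the same pattern with the Euclidean distance replaced by an anisotropic (scaled, rotated) distance.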

  9. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) and point kriging appeared to provide the best interpolators for the bar surface. The lowest average error was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging were used as the interpolator. The magnitude of the errors between survey strategies exceeded that found between interpolation techniques for a given survey strategy. 
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors were also found, with much greater errors at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
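The roughness measure used above, the standard deviation of elevations in a moving window, is straightforward to compute. The sketch below uses a square window on a gridded DEM rather than the paper's 0.2-m circular window, and shows that the measure peaks at a sharp slope break (a "bank edge"):

```python
import numpy as np

# Illustrative moving-window standard deviation over a gridded DEM
# (square window; the paper uses a 0.2-m diameter circular window).
def local_std(dem, half_width=1):
    rows, cols = dem.shape
    out = np.zeros_like(dem, dtype=float)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half_width), min(rows, r + half_width + 1)
            c0, c1 = max(0, c - half_width), min(cols, c + half_width + 1)
            out[r, c] = dem[r0:r1, c0:c1].std()
    return out

# A flat surface with one sharp step: roughness is zero on the flats and
# positive only at the break, where DEM errors were found to be largest.
dem = np.zeros((5, 5))
dem[:, 3:] = 1.0
rough = local_std(dem)
```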

  10. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the Radial Basis Function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the scopes of this work is to present the validation of the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
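A minimal one-dimensional sketch of a radial point interpolator with a multiquadric RBF and no polynomial basis illustrates the Kronecker delta property mentioned above (the shape parameters c and q are assumptions, not the authors' tuned values, and the full method is of course multi-dimensional):

```python
import numpy as np

# Radial point interpolation with a multiquadric basis (r^2 + c^2)^q.
# The shape functions phi(x) satisfy u(x) = phi(x) @ u_nodes and, at a node,
# reduce to the Kronecker delta, which is what makes essential boundary
# conditions easy to impose.
def rpi_shape_functions(nodes, x, c=1.0, q=0.5):
    nodes = np.asarray(nodes, float)
    mq = lambda r2: (r2 + c * c) ** q           # multiquadric RBF
    R = mq((nodes[:, None] - nodes[None, :]) ** 2)   # moment matrix R_ij
    b = mq((nodes - x) ** 2)                    # basis evaluated at x
    # u(x) = b . R^{-1} u_nodes  =>  phi = R^{-T} b
    return np.linalg.solve(R.T, b)

nodes = np.array([0.0, 0.5, 1.0, 1.8])          # irregular node distribution
u = nodes ** 2                                  # nodal values of u(x) = x^2
phi = rpi_shape_functions(nodes, 0.5)           # evaluate at the second node
```

At a node, phi is (numerically) the corresponding unit vector, so the interpolant passes exactly through the nodal values.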

  11. Gravity fields of the solar system

    NASA Technical Reports Server (NTRS)

    Zendell, A.; Brown, R. D.; Vincent, S.

    1975-01-01

    The most frequently used formulations of the gravitational field are discussed and a standard set of models for the gravity fields of the earth, moon, sun, and other massive bodies in the solar system are defined. The formulas are presented in standard forms, some with instructions for conversion. A point-source or inverse-square model, which represents the external potential of a spherically symmetrical mass distribution by a mathematical point mass without physical dimensions, is considered. An oblate spheroid model is presented, accompanied by an introduction to zonal harmonics. This spheroid model is generalized and forms the basis for a number of the spherical harmonic models which were developed for the earth and moon. The triaxial ellipsoid model is also presented. These models and their application to space missions are discussed.
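The two simplest formulations above, the point-source (inverse-square) model and its first zonal-harmonic correction for an oblate spheroid, can be written down directly. The sketch below uses standard Earth values for GM, the equatorial radius, and J2, and keeps only the J2 term (function names are illustrative, not from the report):

```python
import math

# Standard Earth constants (assumed values for illustration)
GM = 3.986004418e14        # gravitational parameter, m^3/s^2
R_E = 6.378137e6           # equatorial radius, m
J2 = 1.08263e-3            # first zonal harmonic coefficient

def potential_point(r):
    """Point-source (inverse-square) model: external potential of a sphere."""
    return -GM / r

def potential_j2(r, lat):
    """Point-mass potential plus the J2 zonal term; lat is geocentric latitude (rad)."""
    s = math.sin(lat)
    p2 = 0.5 * (3.0 * s * s - 1.0)             # Legendre polynomial P2(sin lat)
    return -GM / r * (1.0 - J2 * (R_E / r) ** 2 * p2)
```

At the equator P2 is negative, so the oblateness term deepens the potential relative to the point-mass model; the two models agree on the cone where P2 vanishes.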

  12. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline and linear interpolation, by contrast, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478
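The underlying idea, joining non-uniformly spaced isoelectric points with piecewise linear segments to estimate baseline wander, can be illustrated generically (this is not the authors' exact segmented algorithm; the anchor spacing of three points per 1-s heartbeat and the sinusoidal drift are assumptions matching the test setup described above):

```python
import numpy as np

# Generic piecewise linear baseline estimate through isoelectric anchor points.
def piecewise_linear_baseline(t_iso, v_iso, t_signal):
    """Linear interpolation through non-uniform (time, voltage) anchors."""
    return np.interp(t_signal, t_iso, v_iso)

t = np.linspace(0.0, 10.0, 2001)                    # 10 s of signal
drift = 0.5e-3 * np.sin(2 * np.pi * 0.1 * t)        # 1 mVp-p, 0.1 Hz wander
# Three assumed isoelectric anchors per (1 s) heartbeat:
t_iso = np.linspace(0.0, 10.0, 31)
baseline = piecewise_linear_baseline(t_iso, 0.5e-3 * np.sin(2 * np.pi * 0.1 * t_iso), t)
rms_error = np.sqrt(np.mean((baseline - drift) ** 2))   # residual wander, volts
```

With anchors every 1/3 s, the piecewise linear estimate tracks a 0.1 Hz drift to within a few microvolts RMS, consistent with the order of magnitude reported for the synthetic tests.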

  13. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline and linear interpolation, by contrast, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat, respectively.

  14. Interpolation of the Extended Boolean Retrieval Model.

    ERIC Educational Resources Information Center

    Zanger, Daniel Z.

    2002-01-01

    Presents an interpolation theorem for an extended Boolean information retrieval model. Results show that whenever two or more documents are similarly ranked at any two points for a query containing exactly two terms, then they are similarly ranked at all points in between; and that results can fail for queries with more than two terms. (Author/LRW)

  15. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  16. Generation of GHS Scores from TEST and online sources ...

    EPA Pesticide Factsheets

Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat toxicity, developmental toxicity, endocrine activity, and mutagenicity. It can be used to evaluate ecotoxicity (in terms of acute fathead minnow toxicity) and fate (in terms of bioconcentration factor). It can also be used to estimate a variety of key physicochemical properties such as melting point, boiling point, vapor pressure, water solubility, and bioconcentration factor. A web-based version of T.E.S.T. is currently being developed to allow predictions to be made from other web tools. Online data sources such as NCCT's Chemistry Dashboard, REACH dossiers, or ChemHat.org can also be utilized to obtain GHS (Globally Harmonized System) scores for comparing alternatives. The purpose of this talk is to show how GHS data can be obtained from literature sources and from T.E.S.T. These data will be used to compare chemical alternatives in the alternatives assessment dashboard (a 2018 CSS product).

17. Solutions to inverse plume in a crosswind problem using a predictor-corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume using a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
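The predictor step can be sketched under a simplifying assumption of a linear response between source strength and sampled concentration (the paper builds its interpolation functions from full numerical simulations; function and variable names here are illustrative):

```python
import numpy as np

# Conceptual predictor step: two forward simulations at known source strengths
# q1 and q2 give concentrations c1 and c2 at the sampling points; a measured
# concentration vector is then inverted for the unknown strength.
def predict_strength(c1, c2, q1, q2, c_measured):
    # Per-sample inverse interpolation q(c), averaged over the sampling points
    q_est = q1 + (np.asarray(c_measured, float) - c1) * (q2 - q1) / (c2 - c1)
    return float(np.mean(q_est))

q1, q2 = 1.0, 2.0
c1 = np.array([0.10, 0.30])        # simulated concentrations at 2 samples, strength q1
c2 = np.array([0.20, 0.60])        # same sampling points, strength q2
c_meas = np.array([0.15, 0.45])    # measurements from the unknown source
q_hat = predict_strength(c1, c2, q1, q2, c_meas)
```

The corrector step would then reuse the interpolation functions, rescaled by the predicted strength, to solve for the plume location.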

  18. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
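The lookup-and-interpolate step described above amounts to linear interpolation via barycentric coordinates; a sketch for a single triangle (illustrative, not the package's implementation):

```python
import numpy as np

# Linear interpolation over one triangle: barycentric coordinates of the
# evaluation point weight the values stored at the three nodes.
def barycentric_interp(tri_xy, tri_vals, p):
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    T = np.array([[x1 - x3, x2 - x3],
                  [y1 - y3, y2 - y3]])
    l12 = np.linalg.solve(T, np.asarray(p, float) - [x3, y3])
    lam = np.array([l12[0], l12[1], 1.0 - l12.sum()])
    if (lam < -1e-12).any():
        raise ValueError("point lies outside this triangle")
    return float(lam @ np.asarray(tri_vals, float))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]                 # one component of the nodal n-tuples
v = barycentric_interp(tri, vals, (0.25, 0.25))
```

In the full package this is preceded by a point-location search to find the containing triangle, and each component of the nodal n-tuple is interpolated with the same weights.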

  19. The MV model of the color glass condensate for a finite number of sources including Coulomb interactions

    DOE PAGES

    McLerran, Larry; Skokov, Vladimir V.

    2016-09-19

We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system with a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from the well-known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.

  20. A general method for generating bathymetric data for hydrodynamic computer models

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using linear or cubic shape functions as used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)

  1. [Gas pipeline leak detection based on tunable diode laser absorption spectroscopy].

    PubMed

    Zhang, Qi-Xing; Wang, Jin-Jun; Liu, Bing-Hai; Cai, Ting-Li; Qiao, Li-Feng; Zhang, Yong-Ming

    2009-08-01

The principle of tunable diode laser absorption spectroscopy and the harmonic detection technique was introduced. An experimental device was developed by point sampling through a small multi-reflection gas cell. A specific line near 1653.7 nm was targeted for methane measurement using a distributed feedback diode laser as the tunable light source. The linearity between the intensity of the second harmonic signal and the concentration of methane was determined. The background content of methane in air was measured. The results show that gas sensors using tunable diode lasers provide a high-sensitivity and high-selectivity method for city gas pipeline leak detection.
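The harmonic-detection principle above can be illustrated with an idealized digital lock-in: the detector signal is demodulated at twice the modulation frequency (2f), and in the weak-absorption limit the recovered second-harmonic amplitude grows linearly with absorber concentration. The signal model and frequencies below are assumptions, not the instrument's actual electronics:

```python
import numpy as np

fm = 5e3                                   # assumed laser modulation frequency, Hz
fs = 1e6                                   # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)             # 20 ms record (integer number of periods)

def second_harmonic_amplitude(signal):
    """Digital lock-in at 2f: mix with cos(2*wm*t) and low-pass by averaging."""
    ref = np.cos(2 * np.pi * (2 * fm) * t)
    return 2.0 * np.mean(signal * ref)

# Synthetic detector signals whose 2f content scales with concentration C,
# plus a large residual-amplitude-modulation term at 1f that the lock-in rejects.
amps = [
    second_harmonic_amplitude(0.5 * np.cos(2 * np.pi * fm * t)
                              + 0.01 * C * np.cos(2 * np.pi * 2 * fm * t))
    for C in (1.0, 3.0)
]
```

Tripling the concentration triples the recovered 2f amplitude, which is the linearity the abstract reports.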

  2. Exact Harmonic Metric for a Uniformly Moving Schwarzschild Black Hole

    NASA Astrophysics Data System (ADS)

    He, Guan-Sheng; Lin, Wen-Bin

    2014-02-01

The harmonic metric for a Schwarzschild black hole with a uniform velocity is presented. In the limit of weak field and low velocity, this metric reduces to the post-Newtonian approximation for one moving point mass. As an application, we derive the dynamics of particles and photons in the weak-field limit for a moving Schwarzschild black hole with an arbitrary velocity. It is found that the relativistic motion of the gravitational source can induce an additional centripetal force on the test particle, which may be comparable to or even larger than the conventional Newtonian gravitational force.

  3. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
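The kernel-transform metric is easy to evaluate for the simplest cases. The linear-interpolation kernel is the unit triangle function, whose Fourier transform is sinc^2; the cubic B-spline basis function (the (3+1)-fold convolution of a box) transforms to sinc^4, which decays much faster away from the passband and hence aliases less (the pre-filter mentioned above compensates its passband droop). A minimal numeric check:

```python
import numpy as np

# numpy.sinc(x) is the normalized sinc, sin(pi*x)/(pi*x).
f = 1.5                             # frequency in units of the sample rate
H_linear = np.sinc(f) ** 2          # |FT| of the triangle (linear) kernel
H_bspline3 = np.sinc(f) ** 4        # |FT| of the cubic B-spline basis function
```

At any frequency beyond the passband the sinc^4 response is far below sinc^2, which is the behaviour favouring the B-spline family in the comparison above.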

  4. Elliptic Cylinder Airborne Sampling and Geostatistical Mass Balance Approach for Quantifying Local Greenhouse Gas Emissions.

    PubMed

    Tadić, Jovan M; Michalak, Anna M; Iraci, Laura; Ilić, Velibor; Biraud, Sébastien C; Feldman, Daniel R; Bui, Thaopaul; Johnson, Matthew S; Loewenstein, Max; Jeong, Seongeun; Fischer, Marc L; Yates, Emma L; Ryoo, Ju-Mee

    2017-09-05

In this study, we explore observational, experimental, methodological, and practical aspects of the flux quantification of greenhouse gases from local point sources by using in situ airborne observations, and suggest a series of conceptual changes to improve flux estimates. We address the major sources of uncertainty reported in previous studies by modifying (1) the shape of the typical flight path, (2) the modeling of covariance and anisotropy, and (3) the type of interpolation tools used. We show that a cylindrical flight profile offers considerable advantages compared to traditional profiles collected as curtains, although this new approach brings with it the need for a more comprehensive subsequent analysis. The proposed flight pattern design does not require prior knowledge of wind direction and allows for the derivation of an ad hoc empirical correction factor to partially alleviate errors resulting from interpolation and measurement inaccuracies. The modified approach is applied to a use-case for quantifying CH4 emission from an oil field south of San Ardo, CA, and compared to a bottom-up CH4 emission estimate.

  5. Experimental Nonlinear Dynamics and Snap-Through of Post-Buckled Thin Laminated Composite Plates

    NASA Astrophysics Data System (ADS)

    Kim, Han-Gyu

    Modern aerospace systems are increasingly being designed with composite panels and plates to achieve light weight and high specific strength and stiffness. For constrained panels, thermally-induced axial loading may cause buckling of the structure, which can lead to nonlinear and potentially chaotic behavior. When post-buckled composite plates experience snap-through, they are subjected to large-amplitude deformations and in-plane compressive loading. These phenomena pose a potential threat to the structural integrity of composite structures. In this work, the nonlinear dynamic behavior of post-buckled composite plates was investigated experimentally and computationally. For the experimental work, an electrodynamic shaker was used to apply harmonic loads and the dynamic response of plate specimens was measured using a single-point displacement-sensing laser, a double-point laser vibrometer (velocity-sensing), and a set of digital image correlation cameras. Both chaotic and periodic steady-state snap-through behaviors were investigated. The experimental data were used to characterize snap-through behaviors of the post-buckled specimens and their boundaries in the harmonic forcing parameter space. The nonlinear behavior of post-buckled plates was modeled using the classical laminated plate theory (CLPT) and the von Karman strain-displacement relations. The static equilibrium paths of the post-buckled plates were analyzed using an arc-length method with a branch-switching technique. For the dynamic analysis, the nonlinear equations of motion were derived based on CLPT and the nonlinear finite element model of the equations was constructed using the Hermite cubic interpolation functions for both conforming and nonconforming elements. The numerical analyses were conducted using the model and were compared with the experimental data.

  6. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPTs) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently the multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results of the Gaussian sequential stochastic simulation and Bayesian methods were compared, and the differences between normally distributed single CPT samplings and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. Calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. 
These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  7. Modified Perfect Harmonics Cancellation Control of a Grid Interfaced SPV Power Generation

    NASA Astrophysics Data System (ADS)

    Singh, B.; Shahani, D. T.; Verma, A. K.

    2015-03-01

This paper deals with a grid-interfaced solar photovoltaic (SPV) power generating system with modified perfect harmonic cancellation (MPHC) control for power quality improvement in terms of mitigation of the current harmonics, power factor correction, control of point of common coupling (PCC) voltage with reactive power compensation and load balancing in a three phase distribution system. The proposed grid-interfaced SPV system consists of a SPV array, a dc-dc boost converter and a voltage source converter (VSC) used for the compensation of other connected linear and nonlinear loads at PCC. The reference grid currents are estimated using the MPHC method and control signals are derived by using a pulse width modulation (PWM) current controller of the VSC. The SPV power is fed to the common dc bus of the VSC and dc-dc boost converter using maximum power point tracking (MPPT). The dc link voltage of the VSC is regulated by using a dc voltage proportional integral (PI) controller. The analysis of the proposed SPV power generating system is carried out under dc/ac short circuit and severe SPV-SX and SPV-TX intrusion.

  8. Large-scale kinetic energy spectra from Eulerian analysis of EOLE wind data

    NASA Technical Reports Server (NTRS)

    Desbois, M.

    1975-01-01

A data set of 56,000 winds determined from the horizontal displacements of EOLE balloons at the 200 mb level in the Southern Hemisphere during the period October 1971-February 1972 is utilized for the computation of planetary- and synoptic-scale kinetic energy space spectra. However, the random distribution of measurements in space and time presents some problems for the spectral analysis. Two different approaches are used, i.e., a harmonic analysis of daily wind values at equidistant points obtained by space-time interpolation of the data, and a correlation method using the direct measurements. Both methods give similar results for small wavenumbers, but the second is more accurate for higher wavenumbers (k >= 10). The spectra show a maximum at wavenumbers 5 and 6 due to baroclinic instability and then decrease for high wavenumbers up to wavenumber 35 (the limit of the analysis) according to an inverse power law k^(-p), with p close to 3.

  9. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. 
With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
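The log-domain resampling step the abstract describes can be sketched in a few lines. This is an illustration in Python rather than the DATASPACE FORTRAN source; the sample data are hypothetical, and NumPy/SciPy are assumed:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def log_respace(t, y, n):
    """Resample (t, y) onto n points evenly spaced in log10(t),
    i.e. increasingly spaced in t."""
    logt = np.log10(t)
    spline = CubicSpline(logt, y)               # interpolate in the log-abscissa domain
    logt_new = np.linspace(logt[0], logt[-1], n)
    return 10.0 ** logt_new, spline(logt_new)   # return to the original abscissa domain

# Variably spaced relaxation-style data: a modulus E(t) decaying with time
t = np.array([0.013, 0.09, 0.7, 4.2, 31.0, 180.0, 1000.0])   # seconds
E = 100.0 / (1.0 + t) ** 0.2                                 # hypothetical E(t)
t_new, E_new = log_respace(t, E, 50)
```

Because the new abscissas are evenly spaced in the log domain, they form a geometric sequence in time: closely spaced at short times and widely spaced at long times, as the abstract describes.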

  10. Specific features of the flow structure in a reactive type turbine stage

    NASA Astrophysics Data System (ADS)

    Chernikov, V. A.; Semakina, E. Yu.

    2017-04-01

    The results of experimental studies of the gas dynamics for a reactive type turbine stage are presented. The objective of the studies is the measurement of the 3D flow fields in reference cross sections, experimental determination of the stage characteristics, and analysis of the flow structure for detecting the sources of kinetic energy losses. The integral characteristics of the studied stage are obtained by averaging the results of traversing the 3D flow over the area of the reference cross sections before and behind the stage. The averaging is performed using the conservation equations for mass, total energy flux, angular momentum with respect to the axis z of the turbine, entropy flow, and the radial projection of the momentum flux equation. The flow parameter distributions along the channel height behind the stage are obtained in the same way. More thorough analysis of the flow structure is performed after interpolation of the experimentally measured point parameter values and 3D flow velocities behind the stage. The obtained continuous velocity distributions in the absolute and relative coordinate systems are presented in the form of vector fields. The coordinates of the centers and the vectors of secondary vortices are determined using the results of point measurements of velocity vectors in the cross section behind the turbine stage and their subsequent interpolation. The approach to analysis of experimental data on aerodynamics of the turbine stage applied in this study allows one to find the detailed space structure of the working medium flow, including secondary coherent vortices at the root and peripheral regions of the air-gas part of the stage. The measured 3D flow parameter fields and their interpolation, on the one hand, point to possible sources of increased power losses, and, on the other hand, may serve as the basis for detailed testing of CFD models of the flow using both integral and local characteristics. 
The comparison of the numerical and experimental results, as regards local characteristics, using statistical methods yields the quantitative estimate of their agreement.
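The step of turning point traverse measurements into continuous velocity distributions can be illustrated with scattered-data interpolation. The probe positions and velocity components below are synthetic, and SciPy's `griddata` stands in for whatever interpolant the authors used:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical traverse data: scattered probe positions in the exit plane
# and two measured velocity components at each point.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.vstack([pts, [[-1, -1], [-1, 1], [1, -1], [1, 1]]])  # cover the corners
u = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])     # axial component
v = np.cos(np.pi * pts[:, 0]) * np.sin(np.pi * pts[:, 1])     # tangential component

# Continuous distributions on a regular grid, ready to plot as a vector field
yi, zi = np.meshgrid(np.linspace(-0.9, 0.9, 40), np.linspace(-0.9, 0.9, 40))
ui = griddata(pts, u, (yi, zi), method='cubic')
vi = griddata(pts, v, (yi, zi), method='cubic')
```

The gridded `(ui, vi)` arrays can then be rendered as the vector fields the abstract mentions, or post-processed to locate vortex centers.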

  11. Comparison of simulated and measured nonlinear ultrasound fields

    NASA Astrophysics Data System (ADS)

    Du, Yigang; Jensen, Henrik; Jensen, Jørgen Arendt

    2011-03-01

In this paper results from a non-linear AS (angular spectrum) based ultrasound simulation program are compared to water-tank measurements. A circular concave transducer with a diameter of 1 inch (25.4 mm) is used as the emitting source. The measured pulses are first compared with the linear simulation program Field II, which is used to generate the source for the AS simulation. The generated non-linear ultrasound field is measured by a hydrophone in the focal plane. The second harmonic component from the measurement is compared with the AS simulation, which is used to calculate both the fundamental and second harmonic fields. The focused piston transducer with a center frequency of 5 MHz is excited by a waveform generator emitting a 6-cycle sine wave. The hydrophone is mounted in the focal plane 118 mm from the transducer. The point spread functions at the focal depth from Field II and from the measurements are illustrated. The FWHM (full width at half maximum) values are 1.96 mm for the measurement and 1.84 mm for the Field II simulation. The fundamental and second harmonic components of the experimental results are plotted and compared with the AS simulations. The RMS (root mean square) errors of the AS simulations are 7.19% and 10.3% relative to the fundamental and second harmonic components of the measurements, respectively.
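The FWHM metric quoted above is straightforward to extract from a sampled beam profile. A minimal sketch, using a synthetic Gaussian profile whose width is chosen to match the measured 1.96 mm (not the authors' data):

```python
import numpy as np

def fwhm(x, p):
    """Full width at half maximum of a sampled profile p(x), using linear
    interpolation between samples to locate the half-power crossings."""
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i, j = above[0], above[-1]
    xl = np.interp(half, [p[i - 1], p[i]], [x[i - 1], x[i]])   # left crossing
    xr = np.interp(half, [p[j + 1], p[j]], [x[j + 1], x[j]])   # right crossing
    return xr - xl

# Gaussian with sigma set so that FWHM = 2*sqrt(2 ln 2)*sigma = 1.96 mm
sigma = 1.96 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
x = np.linspace(-5.0, 5.0, 1001)            # lateral position [mm]
p = np.exp(-x**2 / (2.0 * sigma**2))
width = fwhm(x, p)
```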

  12. Research progress and hotspot analysis of spatial interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

In this paper, the literature related to spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has passed through three stages: slow development, steady development, and rapid development. Cross effects exist among 11 clustering groups, and the research converges on three main themes: the theory of spatial interpolation, its practical applications and case studies, and the accuracy and efficiency of spatial interpolation methods. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework; it is strongly interdisciplinary and is widely applied in various fields.

  13. Fusing Satellite-Derived Irradiance and Point Measurements through Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Lorenzo, A.; Morzfeld, M.; Holmgren, W.; Cronin, A.

    2016-12-01

    Satellite-derived irradiance is widely used throughout the design and operation of a solar power plant. While satellite-derived estimates cover a large area, they also have large errors compared to point measurements from sensors on the ground. We describe an optimal interpolation routine that fuses the broad spatial coverage of satellite-derived irradiance with the high accuracy of point measurements. The routine can be applied to any satellite-derived irradiance and point measurement datasets. Unique aspects of this work include the fact that information is spread using cloud location and thickness and that a number of point measurements are collected from rooftop PV systems. The routine is sensitive to errors in the satellite image geolocation, so care must be taken to adjust the cloud locations based on the solar and satellite geometries. Analysis of the optimal interpolation routine over Tucson, AZ, with 20 point measurements shows a significant improvement in the irradiance estimate for two distinct satellite image to irradiance algorithms. Improved irradiance estimates can be used for resource assessment, distributed generation production estimates, and irradiance forecasts.
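The analysis step of an optimal interpolation routine can be sketched with the standard gain-matrix update from data assimilation. The notation (background `xb`, covariances `B` and `R`, observation operator `H`) is the textbook form, not necessarily the authors'; the irradiance numbers are fabricated:

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """One optimal-interpolation analysis step: correct the background field
    xb (e.g. satellite-derived irradiance) with point observations y."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

# Toy 1-D "field" of 5 pixels with two point sensors at pixels 1 and 3.
xb = np.full(5, 500.0)                             # background irradiance [W/m^2]
y_obs = np.array([460.0, 480.0])                   # ground sensor readings
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0
# Correlated background errors spread each correction to neighbouring pixels
d = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
B = 40.0**2 * np.exp(-d / 2.0)
R = 10.0**2 * np.eye(2)
xa = optimal_interpolation(xb, B, y_obs, H, R)
```

In the paper's setting the off-diagonal structure of `B` would come from cloud location and thickness, which is what lets a rooftop sensor correct nearby cloudy pixels.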

  14. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamic solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver and previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data is used to assess the predictive capability of the ADSI code.
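Interpolating a precomputed database on a regular (mass flow, current) grid can be sketched as follows. The table values and the trend function are fabricated for illustration and do not represent ADSI or its databases:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical database: stagnation-point heating rate tabulated on a grid
# of arc jet mass flow rate [kg/s] x arc current [A] (all values fabricated).
mdot = np.array([0.1, 0.2, 0.4, 0.8])
current = np.array([1000.0, 2000.0, 4000.0, 6000.0])
M, I = np.meshgrid(mdot, current, indexing='ij')
qdot = 50.0 * np.sqrt(M) * (I / 1000.0)        # fabricated trend [W/cm^2]

interp = RegularGridInterpolator((mdot, current), qdot)
q = float(interp([[0.3, 3000.0]])[0])          # estimate at an off-grid condition
```

The inverse problem the abstract mentions (mass flow and current for a target pressure and heating rate) would amount to a root solve over this same response surface.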

  15. A fast simulation method for radiation maps using interpolation in a virtual environment.

    PubMed

    Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun

    2018-05-10

In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of the exposure of a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map, including the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
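A radiation map built by interpolating a handful of sampled dose rates can be sketched with inverse-distance weighting. This is a generic interpolation sketch, not the paper's UNITY/MATLAB pipeline or its exact interpolant, and the dose rates are invented:

```python
import numpy as np

def idw_map(samples, rates, grid_xy, power=2.0):
    """Inverse-distance-weighted dose-rate map from a few sampled points."""
    d = np.linalg.norm(grid_xy[:, None, :] - samples[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power      # guard against zero distance
    return (w * rates).sum(axis=1) / w.sum(axis=1)

# Sampled dose rates [uSv/h] at four known points in a 10 m x 10 m room
samples = np.array([[2.0, 2.0], [8.0, 2.0], [2.0, 8.0], [8.0, 8.0]])
rates = np.array([120.0, 15.0, 20.0, 10.0])
xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
dose = idw_map(samples, rates, grid).reshape(50, 50)
```

Because IDW is a convex combination of the samples, the interpolated map stays within the measured dose-rate range, which is one reason it is a cheap stand-in for point kernel or Monte Carlo transport.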

  16. Adaptive Harmonic Detection Control of Grid Interfaced Solar Photovoltaic Energy System with Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Singh, B.; Goel, S.

    2015-03-01

This paper presents a grid-interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at the AC mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensating linear and nonlinear loads with the objectives of load balancing, harmonic elimination, power factor correction and terminal voltage regulation. The proposed control increases the utilization of the PV infrastructure and brings down its effective cost by providing these additional power quality benefits. The adaptive harmonic detection control algorithm is used to detect the fundamental active power components of the load currents, which are subsequently used to estimate the reference source currents. An instantaneous symmetrical component theory is used to obtain instantaneous positive sequence point of common coupling (PCC) voltages, which are used to derive in-phase and quadrature-phase voltage templates. The proposed grid-interfaced PV energy system is modelled and simulated in MATLAB/Simulink and its performance is verified under various operating conditions.
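The idea of adaptively extracting the fundamental active component of a distorted load current can be illustrated with an LMS-style weight update against an in-phase voltage template. This is a generic sketch under assumed gains and waveforms, not the paper's exact algorithm:

```python
import numpy as np

# Distorted load current: 10 A fundamental plus 5th and 7th harmonics
f, fs, mu = 50.0, 10000.0, 0.005
t = np.arange(0, 0.4, 1.0 / fs)
unit = np.sin(2 * np.pi * f * t)                # in-phase PCC voltage template
i_load = 10.0 * unit + 3.0 * np.sin(2*np.pi*5*f*t) + 2.0 * np.sin(2*np.pi*7*f*t)

# LMS update of a single weight tracking the fundamental active amplitude
w = 0.0
weights = np.empty_like(t)
for k in range(len(t)):
    err = i_load[k] - w * unit[k]               # detection error
    w += 2.0 * mu * err * unit[k]               # adaptive weight update
    weights[k] = w

w_avg = weights[-200:].mean()                   # average over one fundamental period
```

The converged weight is the fundamental active amplitude; `w * unit` is then the reference source current, and the remainder is what the compensator must supply.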

  17. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
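The heat-conduction analogy above can be sketched by relaxing Laplace's equation on a Cartesian background grid while holding a few prescribed source nodes fixed. For brevity this sketch uses simple Jacobi iteration with wrap-around boundaries via `np.roll`, which is an assumption, not the paper's Poisson solver:

```python
import numpy as np

def spacing_field(n, sources, iters=5000):
    """Spread grid-spacing parameters over an n x n Cartesian background grid
    by Jacobi relaxation, holding the prescribed source nodes fixed
    (the steady heat-conduction analogy; periodic-wrap boundaries)."""
    s = np.full((n, n), float(np.mean(list(sources.values()))))
    fixed = np.zeros((n, n), dtype=bool)
    for (i, j), value in sources.items():
        s[i, j] = value
        fixed[i, j] = True
    for _ in range(iters):
        avg = 0.25 * (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                      + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        s = np.where(fixed, s, avg)             # keep source nodes pinned
    return s

# Fine spacing requested near one corner, coarse near the opposite corner
field = spacing_field(32, {(4, 4): 0.05, (27, 27): 1.0})
```

The resulting field varies smoothly between the sources, so spacing queried anywhere on the background grid blends the prescribed values, which is what gives the advancing front a smooth size distribution.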

  18. Privacy-Preserving Predictive Modeling: Harmonization of Contextual Embeddings From Different Sources.

    PubMed

    Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian

    2018-05-16

    Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong representative capability to describe medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces are different, and naive pooling renders combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing representation without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes to fuse different vector models into one common space by using a list of corresponding pairs as anchor points. We performed prediction analysis with harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured data or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves a more accurate prediction than local models and global models built from naive pooling. Such aggregation of local models using our unique harmonization can serve as the proxy for a global model, combining information from a wide range of institutions and information sources. 
It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care. ©Yingxiang Huang, Junghye Lee, Shuang Wang, Jimeng Sun, Hongfang Liu, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.05.2018.
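The Procrustes step at the heart of the harmonization can be sketched with the SVD-based orthogonal Procrustes solution. The embeddings below are synthetic (one "site" is a rotated, noisy copy of the other), standing in for Word2Vec vectors from two hospitals:

```python
import numpy as np

def procrustes_align(source, target):
    """Orthogonal Procrustes: rotation R minimizing ||source @ R - target||
    over anchor pairs; R then maps the whole source embedding space."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

# Toy local embeddings: site B is site A rotated plus noise, so naive pooling
# would mix incompatible spaces; shared anchor concepts recover the rotation.
rng = np.random.default_rng(1)
emb_a = rng.normal(size=(100, 8))                        # "hospital A" vectors
theta = np.deg2rad(30.0)
rot = np.eye(8)
rot[:2, :2] = [[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]]
emb_b = emb_a @ rot + 0.01 * rng.normal(size=(100, 8))   # "hospital B" vectors

anchors = slice(0, 20)                                   # corresponding pairs
R = procrustes_align(emb_b[anchors], emb_a[anchors])
aligned = emb_b @ R                                      # B mapped into A's space
```

Because `R` is orthogonal, distances and angles within site B's embedding are preserved; only the orientation of the space changes, which is what makes pooling the aligned vectors meaningful.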

  19. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

Rocks occur in many different types formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, photographs overlapping each other are the basic source of the point cloud data, which in turn is the main data source for a 3D model that offers analysts the possibility of automation. Due to the irregular and complex structures of rocks, representation of their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel matching algorithms that generate an adequate point cloud from the photographs. In this study, the possibility of using temperature distribution for the interpretation of the roughness of a rock surface, which is one of the parameters representing the surface, was investigated. For the study, a small rock of size 3 m x 1 m, located at the ITU Ayazaga Campus, was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation using temperature values of control points marked on the object, which were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and coordinated. The temperature values of the control points were measured using an infrared thermometer and used as the basic data source to create the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, a 3D object model was produced by means of terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in both applications are almost identical in position. The areas on the rock surface where roughness values are higher than the surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness on the surface increases, the temperature increases.

  20. Application of Gaussian Elimination to Determine Field Components within Unmeasured Regions in the UCN τ Trap

    NASA Astrophysics Data System (ADS)

    Felkins, Joseph; Holley, Adam

    2017-09-01

Determining the average lifetime of a neutron gives information about the fundamental parameters of interactions resulting from the charged weak current. It is also an input for calculations of the abundance of light elements in the early cosmos, which are also directly measured. Experimentalists have devised two major approaches to measuring the neutron lifetime: the beam experiment and the bottle experiment. For the bottle experiment, I have designed a computational algorithm based on a numerical technique that interpolates magnetic field values in between measured points. This algorithm produces interpolated fields that satisfy the Maxwell-Heaviside equations for use in a simulation that will investigate the rate of depolarization in magnetic traps used for bottle experiments, such as the UCN τ experiment at Los Alamos National Lab. I will present how UCN depolarization can cause a systematic error in experiments like UCN τ. I will then describe the technique that I use for the interpolation, and will discuss how the accuracy of the interpolation changes with the number of measured points and the volume of the interpolated region. Supported by NSF Grant 1553861.
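One generic way to build a Maxwell-consistent interpolant (this is a hedged sketch, not the speaker's algorithm) is to write B = -grad(Phi) with Phi a harmonic polynomial, so that div B = 0 and curl B = 0 hold exactly, and fit the coefficients to the measured field vectors by least squares:

```python
import numpy as np

def neg_grad_basis(p):
    """Negative gradients of harmonic polynomials up to degree 2:
    x, y, z, xy, xz, yz, x^2 - y^2, 2z^2 - x^2 - y^2 (all harmonic)."""
    x, y, z = p
    g = np.array([
        [1, 0, 0], [0, 1, 0], [0, 0, 1],          # grad of x, y, z
        [y, x, 0], [z, 0, x], [0, z, y],          # grad of xy, xz, yz
        [2*x, -2*y, 0], [-2*x, -2*y, 4*z],        # grad of the quadratics
    ], dtype=float)
    return -g                                     # shape (8, 3)

def fit_coeffs(points, B):
    """Least-squares fit of the potential coefficients to measured B vectors."""
    A = np.vstack([neg_grad_basis(p).T for p in points])   # (3N, 8)
    c, *_ = np.linalg.lstsq(A, B.ravel(), rcond=None)
    return c

def interp_B(c, p):
    return neg_grad_basis(p).T @ c

# Synthetic "measurements" from a field that is itself Maxwell-consistent
true_c = np.array([0.0, 0.0, 1.0, 0.2, 0.0, 0.0, 0.05, 0.1])
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(12, 3))
B_meas = np.array([interp_B(true_c, p) for p in pts])
c = fit_coeffs(pts, B_meas)
```

Any field produced by `interp_B` is divergence- and curl-free by construction, regardless of measurement noise; only the fit quality depends on the number and placement of measured points.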

  1. Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea

    NASA Astrophysics Data System (ADS)

    Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan

    2016-04-01

Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications, such as reliable wave hindcasting studies. One solution for generating wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data from 227 data points over the Aegean Sea between 1979 and 2010, with a spatial resolution of approximately 0.3 degrees, are taken from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database and analyzed. Additionally, hourly 10-m wind speed data from two in-situ measurement stations between June 2014 and June 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. In this study, nine statistical interpolation techniques are selected: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data are down-sampled to 6 hours (i.e., wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), the 6-hourly data are then temporally downscaled to hourly data (i.e., the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data are compared with the temporally downscaled data.
A penalty point system based on the coefficient of variation of the root mean square error, normalized mean absolute error, and prediction skill is used to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e., reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that, overall, piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline, where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 under the Turkish-Russian joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by the European Union FP7/2007-2013 program.
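The downsample-then-downscale evaluation loop described above can be sketched for two of the nine techniques (cubic spline and PCHIP), on a synthetic wind record rather than the CFSR data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Synthetic "hourly wind speed" record: a diurnal cycle plus a shorter
# oscillation (all values fabricated for illustration).
hours = np.arange(0, 235)
wind = 8.0 + 3.0 * np.sin(2*np.pi*hours/24.0) + 0.8 * np.sin(2*np.pi*hours/7.3)

coarse = hours[::6]                       # keep the 0th, 6th, 12th, 18th hours
spline = CubicSpline(coarse, wind[::6])(hours)        # downscale back to hourly
pchip = PchipInterpolator(coarse, wind[::6])(hours)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
err_spline, err_pchip = rmse(spline, wind), rmse(pchip, wind)
```

Comparing such error scores across techniques and locations is exactly the quantification of downscaling uncertainty the study performs, with the penalty point system aggregating several error metrics instead of a single RMSE.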

  2. Passive wide spectrum harmonic filter for adjustable speed drives in oil and gas industry

    NASA Astrophysics Data System (ADS)

    Al Jaafari, Khaled Ali

Non-linear loads such as variable speed drives constitute the bulk of the load in oil and gas industry power systems. They are widely used to drive induction and permanent magnet motors in variable speed applications, because variable speed drives provide high static and dynamic performance, high energy efficiency, high motion quality, and high starting torque. However, these non-linear loads are major sources of current and voltage harmonics and degrade the quality of the electric power system. In fact, it is the six-pulse and twelve-pulse diode and thyristor rectifiers, which supply DC voltage to the inverter of the variable speed drive, that pollute the AC power line with the dominant harmonics (5th, 7th, 11th). Typical problems that arise from these harmonics are harmonic resonances, harmonic losses, interference with electronic equipment, and line voltage distortion at the Point of Common Coupling (PCC). Thus, it is necessary to find efficient, reliable, and economical harmonic filters. Passive filters have a definite advantage over active filters in terms of component count, cost, and reliability. Reliability and maintenance are serious issues on drilling rigs, which are located offshore and onshore under extreme operating conditions. A conventional passive filter is tuned to eliminate a certain frequency, so the system must be equipped with more than one passive filter to eliminate all unwanted frequencies. An alternative solution is the wide spectrum harmonic passive filter. Wide spectrum harmonic filters are becoming increasingly popular in these applications and have been found to overcome some of the limitations of conventional tuned passive filters. Their most important feature is that only one capacitor is required to filter a wide range of harmonics. The wide spectrum filter is essentially a low-pass filter that passes the fundamental frequency while attenuating the harmonics.
It can also be considered as a single-stage passive filter plus input and output inductors. The proposed work gives a complete analysis of wide spectrum harmonic passive filters, a methodology for choosing their parameters according to the operational conditions, and the effect of load and source inductance on their characteristics. A comparison of the performance of the wide band passive filter with a tuned filter is also given. The analyses are supported by simulation results and were verified experimentally. The analysis given in this thesis will be useful for the selection of proper wide spectrum harmonic filters for harmonic mitigation applications in the oil and gas industry.
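The dominant orders quoted above (5th, 7th, 11th) follow from the standard result that an ideal p-pulse rectifier injects characteristic harmonics of order h = kp ± 1; a twelve-pulse bridge pushes the lowest pair up to the 11th and 13th. A one-function illustration:

```python
def characteristic_harmonics(pulses, kmax):
    """Characteristic AC-side harmonic orders of an ideal p-pulse
    rectifier: h = k*p - 1 and h = k*p + 1 for k = 1..kmax."""
    orders = []
    for k in range(1, kmax + 1):
        orders += [k * pulses - 1, k * pulses + 1]
    return orders

six_pulse = characteristic_harmonics(6, 2)     # 5, 7, 11, 13
twelve_pulse = characteristic_harmonics(12, 1) # 11, 13
```

This is why a single low-pass wide spectrum filter, with its corner placed above the fundamental, can attenuate the entire characteristic series at once, whereas tuned filters must target each order individually.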

  3. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
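The one-dimensional kernel of arc-length-based distribution (redistributing points along a sampled curve at prescribed arc-length fractions) can be sketched as follows; the quarter-circle test curve is an assumption for illustration:

```python
import numpy as np

def redistribute_by_arclength(curve, n):
    """Place n points along a sampled curve at equal arc-length intervals,
    by linear interpolation of the cumulative chord length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n)                 # target arc-length values
    return np.column_stack([np.interp(s_new, s, curve[:, k])
                            for k in range(curve.shape[1])])

# Quarter circle sampled non-uniformly; redistribute to uniform spacing
t = np.linspace(0, np.pi / 2, 200) ** 2 / (np.pi / 2)  # parameter clustered near 0
curve = np.column_stack([np.cos(t), np.sin(t)])
pts = redistribute_by_arclength(curve, 20)
gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
```

Replacing `np.linspace` with any monotone stretching function gives a non-uniform arc-length distribution, which is the edge distribution that transfinite interpolation then propagates into the surface and volume grids.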

  4. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, traditional point-based performance evaluation strategy for these methods remains stagnant, which could cause unreasonable mapping results. To address this challenge, this study employs ‘information entropy’, an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite the similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibits more detailed variations than those interpolated by the OK method (i.e. information entropy, 7.79 vs. 3.63). Results suggest that LUR modeling could better refine the spatial distribution scenario of PM2.5 concentrations compared to OK interpolation. The significance of this study primarily lies in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
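The area-based statistic can be illustrated with a Shannon entropy computed over fixed concentration bins. This is a generic sketch of the idea, not necessarily the paper's exact formulation, and the two synthetic maps below merely mimic a flat OK-like surface versus a more varied LUR-like surface:

```python
import numpy as np

def information_entropy(values, edges):
    """Shannon entropy (bits) of the distribution of mapped values over
    fixed bins; higher entropy indicates more detailed variation."""
    hist, _ = np.histogram(values, bins=edges)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
edges = np.linspace(0.0, 24.0, 33)                  # concentration bins [ug/m^3]
smooth = 12.0 + 0.1 * rng.normal(size=1000)         # near-constant (OK-like) map
detailed = 12.0 + 3.0 * rng.normal(size=1000)       # more varied (LUR-like) map
```

A smoother interpolated surface concentrates its values in few bins and scores low entropy, matching the paper's observation that OK output carries less spatial detail than the LUR surface.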

  5. Analytical and experimental study of high phase order induction motors

    NASA Technical Reports Server (NTRS)

    Klingshirn, Eugene A.

    1989-01-01

    Induction motors having more than three phases were investigated to determine their suitability for electric vehicle applications. The objective was to have a motor with a current rating lower than that of a three-phase motor. The name chosen for these is high phase order (HPO) motors. Motors having six phases and nine phases were given the most attention. It was found that HPO motors are quite suitable for electric vehicles, and for many other applications as well. They have characteristics which are as good as or better than three-phase motors for practically all applications where polyphase induction motors are appropriate. Some of the analysis methods are presented, and several of the equivalent circuits which facilitate the determination of harmonic currents and losses, or currents with unbalanced sources, are included. The sometimes large stator currents due to harmonics in the source voltages are pointed out. Filters which can limit these currents were developed. An analysis and description of these filters is included. Experimental results which confirm and illustrate much of the theory are also included. These include locked rotor test results and full-load performance with an open phase. Also shown are oscillograms which display the reduction in harmonic currents when a filter is used with the experimental motor supplied by a non-sinusoidal source.

  6. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards while site-specific estimates for small water sheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consist methods, and generally by their age. HMR-51 for generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC), is currently reviewing several flood hazard evaluation reports that rely on site specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment developed for the Electric Power Research Institute by North American Weather Consultants in 1993 in order to harmonize historic storms for which only 12 hour dew point data was available with more recent storms in a single database. 
The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site specific reviews demonstrated that both issues had potential for lowering the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.

  7. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
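The kernel-shifting machinery rests on the Fourier shift theorem: a shift in space is a phase ramp in the frequency domain. As a simplified illustration (the paper performs the integer part of the shift this way and handles the fractional remainder with a Lagrange filter), here is a pure-FFT shift of a 1-D band-limited signal by a fractional number of samples:

```python
import numpy as np

def fft_shift(signal, shift):
    """Shift a periodic sampled signal by a (possibly fractional) number of
    samples via a phase ramp in the frequency domain (Fourier shift theorem)."""
    n = len(signal)
    k = np.fft.fftfreq(n)              # normalized frequencies, cycles/sample
    phase = np.exp(-2j * np.pi * k * shift)
    return np.real(np.fft.ifft(np.fft.fft(signal) * phase))

# A band-limited test signal; shifting by 2.5 samples and back is lossless
m = np.arange(64)
x = np.sin(2*np.pi*3*m/64) + 0.5 * np.cos(2*np.pi*5*m/64)
round_trip = fft_shift(fft_shift(x, 2.5), -2.5)
```

For band-limited periodic data this shift is exact, which is why frequency-domain shifting preserves the dose kernel so well away from the seed.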

  8. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently, a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems described above arise from the fact that the covariance functions used in kriging have global support. 
Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems in the interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical, since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems, and they implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the tapered system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. 
This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
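    A small dense sketch of the tapered ordinary kriging solve, under assumptions the abstract does not specify: an exponential covariance model and a Wendland-type taper. SciPy does not ship SYMMLQ, so MINRES, a closely related Krylov method for symmetric indefinite systems, stands in for it.

```python
import numpy as np
from scipy.sparse.linalg import minres
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
pts = rng.random((40, 2))                   # scattered sample locations
query = np.array([[0.5, 0.5]])

def cov(h, range_par=0.3):
    return np.exp(-h / range_par)           # exponential covariance model

def taper(h, theta=0.4):
    r = np.clip(h / theta, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)       # Wendland taper, support theta

D = cdist(pts, pts)
C = cov(D) * taper(D)                       # tapered covariance (sparse-able)

# Ordinary kriging system: symmetric but indefinite because of the
# unbiasedness (Lagrange multiplier) row, hence a symmetric indefinite solver.
n = len(pts)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = C
A[:n, n] = A[n, :n] = 1.0
d_q = cdist(pts, query).ravel()
b = np.append(cov(d_q) * taper(d_q), 1.0)

w, info = minres(A, b)
print(info, w[:n].sum())    # info == 0 on convergence; weights sum to ~1
```

    The unbiasedness row forces the kriging weights to sum to one, which is a convenient correctness check on the iterative solution.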

  9. Mapping Atmospheric Moisture Climatologies across the Conterminous United States

    PubMed Central

    Daly, Christopher; Smith, Joseph I.; Olson, Keith V.

    2015-01-01

    Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
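    A sketch of the kind of derivation the grids enable, assuming the Magnus saturation vapor pressure formula (the dataset's own formulation may differ; variable names are illustrative):

```python
import math

def sat_vp(t_c):
    """Saturation vapor pressure (hPa) via the Magnus formula (an assumed
    formulation; PRISM's exact choice is not given in the abstract)."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def derived_moisture(tmax_c, tmin_c, tdew_c):
    e_a = sat_vp(tdew_c)              # actual vapor pressure from dew point
    e_s_max = sat_vp(tmax_c)          # saturation vp at daily maximum temp
    e_s_min = sat_vp(tmin_c)          # saturation vp at daily minimum temp
    return {
        "vapor_pressure_hPa": e_a,
        "vpd_max_hPa": max(e_s_max - e_a, 0.0),  # max vapor pressure deficit
        "vpd_min_hPa": max(e_s_min - e_a, 0.0),
        "rh_at_tmax_pct": 100.0 * e_a / e_s_max,
        "dewpoint_depression_C": tmin_c - tdew_c,
    }

m = derived_moisture(tmax_c=30.0, tmin_c=18.0, tdew_c=15.0)
print(round(m["rh_at_tmax_pct"], 1))  # relative humidity at daily maximum
```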

  10. Comparison of different interpolation operators including nonlinear subdivision schemes in the simulation of particle trajectories

    NASA Astrophysics Data System (ADS)

    Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques

    2013-03-01

In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, especially the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation, and theoretical results are provided. Then we show, regardless of the interpolation method, the need to use a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
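    The PPH idea can be sketched in a few lines: the classical linear four-point subdivision scheme combines neighbouring second differences with an arithmetic mean, while PPH substitutes a harmonic mean, which vanishes when the two curvatures disagree in sign. A 1D sketch on step data (not the paper's plasma test case):

```python
def harmonic_mean(a, b):
    # PPH harmonic mean: vanishes when curvatures disagree in sign,
    # which is what suppresses Gibbs oscillations near jumps.
    return 2 * a * b / (a + b) if a * b > 0 else 0.0

def subdivide(f, mean):
    """One midpoint-insertion step on interior intervals.
    mean(d_j, d_{j+1}) combines neighbouring second differences:
    the arithmetic mean recovers the classical linear 4-point scheme,
    the harmonic mean gives the PPH scheme."""
    d = [f[j - 1] - 2 * f[j] + f[j + 1] for j in range(1, len(f) - 1)]
    mids = []
    for j in range(1, len(f) - 2):
        mids.append((f[j] + f[j + 1]) / 2 - mean(d[j - 1], d[j]) / 8)
    return mids

step = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
linear = subdivide(step, lambda a, b: (a + b) / 2)
pph = subdivide(step, harmonic_mean)
print(min(linear), min(pph))   # the linear scheme undershoots; PPH does not
```

    On this step the linear scheme produces values outside [0, 1] next to the jump (the Gibbs overshoot), while the PPH midpoints stay within the data range.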

  11. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even when they match monotone or convex data. Most methods of investigating this problem use quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points with a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
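    The report does not name a specific derivative-selection rule, but the classic choice in this family is the Fritsch-Carlson scheme, available in SciPy as PCHIP. A sketch contrasting it with an unconstrained cubic spline on monotone data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a flat run: a classic stress test for interpolants.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0])

dense = np.linspace(0.0, 4.0, 401)
spline = CubicSpline(x, y)(dense)        # unconstrained cubic spline
pchip = PchipInterpolator(x, y)(dense)   # Fritsch-Carlson derivative choice

# The shape-preserving interpolant never leaves the data range and is
# non-decreasing; the unconstrained spline is free to overshoot.
print(pchip.min(), pchip.max(), bool(np.all(np.diff(pchip) >= -1e-12)))
```

    The monotonicity guarantee comes entirely from how the derivatives at the data points are chosen, which is exactly the design question the abstract raises.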

  12. Potentials Unbounded Below

    NASA Astrophysics Data System (ADS)

    Curtright, Thomas

    2011-04-01

    Continuous interpolates are described for classical dynamical systems defined by discrete time-steps. Functional conjugation methods play a central role in obtaining the interpolations. The interpolates correspond to particle motion in an underlying potential, V. Typically, V has no lower bound and can exhibit switchbacks wherein V changes form when turning points are encountered by the particle. The Beverton-Holt and Skellam models of population dynamics, and particular cases of the logistic map are used to illustrate these features.
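    The Beverton-Holt case illustrates the functional-conjugation idea compactly: the substitution y = 1/x conjugates the map to a linear recursion, whose continuous-time solution interpolates the discrete orbit. A minimal sketch of that textbook conjugation (Curtright's more general construction and the underlying potential V are not derived here):

```python
def beverton_holt(x, k):
    """One step of the Beverton-Holt map x -> k*x / (1 + x)."""
    return k * x / (1.0 + x)

def interpolate_bh(x0, k, t):
    """Continuous-time interpolate of the Beverton-Holt map obtained by
    functional conjugation: y = 1/x turns the map into the linear recursion
    y -> y/k + 1/k, whose continuous solution is elementary."""
    y0 = 1.0 / x0
    y_star = 1.0 / (k - 1.0)          # fixed point of the conjugated recursion
    y_t = y_star + (y0 - y_star) * k ** (-t)
    return 1.0 / y_t

# At integer times the interpolate reproduces the discrete orbit exactly,
# while non-integer t fills in the motion between time-steps.
k, x = 2.0, 0.5
for n in range(5):
    print(n, x, interpolate_bh(0.5, k, n))
    x = beverton_holt(x, k)
```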

  13. New Evidence That Nonlinear Source-Filter Coupling Affects Harmonic Intensity and fo Stability During Instances of Harmonics Crossing Formants.

    PubMed

    Maxfield, Lynn; Palaparthi, Anil; Titze, Ingo

    2017-03-01

    The traditional source-filter theory of voice production describes a linear relationship between the source (glottal flow pulse) and the filter (vocal tract). Such a linear relationship does not allow for nor explain how changes in the filter may impact the stability and regularity of the source. The objective of this experiment was to examine what effect unpredictable changes to vocal tract dimensions could have on fo stability and individual harmonic intensities in situations in which low frequency harmonics cross formants in a fundamental frequency glide. To determine these effects, eight human subjects (five male, three female) were recorded producing fo glides while their vocal tracts were artificially lengthened by a section of vinyl tubing inserted into the mouth. It was hypothesized that if the source and filter operated as a purely linear system, harmonic intensities would increase and decrease at nearly the same rates as they passed through a formant bandwidth, resulting in a relatively symmetric peak on an intensity-time contour. Additionally, fo stability should not be predictably perturbed by formant/harmonic crossings in a linear system. Acoustic analysis of these recordings, however, revealed that harmonic intensity peaks were asymmetric in 76% of cases, and that 85% of fo instabilities aligned with a crossing of one of the first four harmonics with the first three formants. These results provide further evidence that nonlinear dynamics in the source-filter relationship can impact fo stability as well as harmonic intensities as harmonics cross through formant bandwidths. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  14. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2014-12-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a single-beam sonar Hydrostar 4300, GPS devices Ashtech Promark 500 - base, and a Thales Z-Max - rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: to compare the efficiency of 16 different interpolation methods, to discover the most appropriate interpolators for the development of a raster model, to calculate the surface area and volume of Lake Vrana, and to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was ROF multi-quadratic, and the best geostatistical, ordinary cokriging. The mean quadratic error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in 2 phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  15. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

A computer-generated hologram (CGH) for reconstructing N x N independent resolution points would require a hologram made up of N x N sampling cells. For Fourier-transform CGHs with dependent sampling points, the memory required for computation can be reduced by using an interpolation technique for reconstructed image points. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK x NK sample points is synthesized from K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  16. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational cost, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
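    For orientation, a sketch of the plain (non-conservative) bilinear interpolation of a staggered x-velocity component to a marker position, i.e. the baseline that the conservative correction schemes improve on; the correction of Jenny et al. is not reproduced here, and the grid layout and test field are invented for illustration:

```python
import numpy as np

# Staggered grid: vx lives on vertical cell faces, at (i*dx, (j+0.5)*dy).
dx = dy = 0.1
nx, ny = 11, 10
xf = np.arange(nx) * dx                  # face x-coordinates
yc = (np.arange(ny) + 0.5) * dy          # cell-centre y-coordinates
vx_exact = lambda px, py: 1.0 + 2.0 * px - 3.0 * py   # a linear test field
VX = vx_exact(xf[:, None], yc[None, :])

def interp_vx(px, py):
    """Bilinear interpolation of the staggered vx component to a marker."""
    i = min(int(px / dx), nx - 2)        # index of face to the "left"
    j = min(max(int(py / dy - 0.5), 0), ny - 2)  # index of centre "below"
    tx = px / dx - i                     # fractional offsets in the cell
    ty = py / dy - 0.5 - j
    return ((1 - tx) * (1 - ty) * VX[i, j] + tx * (1 - ty) * VX[i + 1, j]
            + (1 - tx) * ty * VX[i, j + 1] + tx * ty * VX[i + 1, j + 1])

# Bilinear interpolation reproduces linear velocity fields exactly; the
# clustering problem described above only appears for stronger gradients.
print(abs(interp_vx(0.37, 0.52) - vx_exact(0.37, 0.52)))
```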

  17. LIP: The Livermore Interpolation Package, Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-07-06

This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) 
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  18. LIP: The Livermore Interpolation Package, Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsch, F N

    2011-01-04

This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) 
It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.

  19. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  20. Activity measurements of 55Fe by two different methods

    NASA Astrophysics Data System (ADS)

    da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise

    2018-03-01

A calibrated germanium detector and the CIEMAT/NIST liquid scintillation method were used to standardize a 55Fe solution from a BIPM key comparison. Commercial cocktails were used for source preparation in the CIEMAT/NIST activity measurements, which were performed in a liquid scintillation counter. For the germanium counting method, standard point sources were prepared to obtain the atomic-number-versus-efficiency curve of the detector, from which the efficiency for the 5.9 keV KX-ray of 55Fe was obtained by interpolation. The activity concentrations obtained were 508.17 ± 3.56 and 509.95 ± 16.20 kBq/g for the CIEMAT/NIST and germanium methods, respectively.

  1. Area under precision-recall curves for weighted and unweighted data.

    PubMed

    Keilwagen, Jens; Grosse, Ivo; Grau, Jan

    2014-01-01

    Precision-recall curves are highly informative about the performance of binary classifiers, and the area under these curves is a popular scalar performance measure for comparing different classifiers. However, for many applications class labels are not provided with absolute certainty, but with some degree of confidence, often reflected by weights or soft labels assigned to data points. Computing the area under the precision-recall curve requires interpolating between adjacent supporting points, but previous interpolation schemes are not directly applicable to weighted data. Hence, even in cases where weights were available, they had to be neglected for assessing classifiers using precision-recall curves. Here, we propose an interpolation for precision-recall curves that can also be used for weighted data, and we derive conditions for classification scores yielding the maximum and minimum area under the precision-recall curve. We investigate accordances and differences of the proposed interpolation and previous ones, and we demonstrate that taking into account existing weights of test data is important for the comparison of classifiers.
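    For unweighted integer counts, the standard (Davis-Goadrich style) interpolation works in (TP, FP) space rather than in precision directly; the paper's weighted generalization is not reproduced in this sketch:

```python
def interp_precision(tp_a, fp_a, tp_b, fp_b, frac):
    """Precision at an intermediate point between two PR supporting points,
    interpolating linearly in the (TP, FP) counts rather than linearly in
    precision itself (a sketch of the unweighted Davis-Goadrich scheme)."""
    tp = tp_a + frac * (tp_b - tp_a)
    fp = fp_a + frac * (fp_b - fp_a)
    return tp / (tp + fp)

# Halfway between (TP=10, FP=10), precision 0.5, and (TP=20, FP=40),
# precision 1/3:
correct = interp_precision(10, 10, 20, 40, 0.5)
naive = 0.5 * (10 / 20 + 20 / 60)   # linear interpolation of precision
print(correct, naive)               # count-based interpolation is lower
```

    Interpolating precision linearly (the naive approach) systematically overstates the area under the curve, which is why the choice of interpolation scheme matters for the AUC-PR comparisons discussed above.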

  2. Area under Precision-Recall Curves for Weighted and Unweighted Data

    PubMed Central

    Grosse, Ivo

    2014-01-01

    Precision-recall curves are highly informative about the performance of binary classifiers, and the area under these curves is a popular scalar performance measure for comparing different classifiers. However, for many applications class labels are not provided with absolute certainty, but with some degree of confidence, often reflected by weights or soft labels assigned to data points. Computing the area under the precision-recall curve requires interpolating between adjacent supporting points, but previous interpolation schemes are not directly applicable to weighted data. Hence, even in cases where weights were available, they had to be neglected for assessing classifiers using precision-recall curves. Here, we propose an interpolation for precision-recall curves that can also be used for weighted data, and we derive conditions for classification scores yielding the maximum and minimum area under the precision-recall curve. We investigate accordances and differences of the proposed interpolation and previous ones, and we demonstrate that taking into account existing weights of test data is important for the comparison of classifiers. PMID:24651729

  3. Optimization and comparison of three spatial interpolation methods for electromagnetic levels in the AM band within an urban area.

    PubMed

    Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio

    2018-04-01

A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at the control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control point field measurements were for the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
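    Parameter optimization of this kind can be sketched for IDW: choose the power exponent by leave-one-out cross-validation over the measurement points. The synthetic field and candidate powers below are illustrative, not the study's:

```python
import numpy as np

def idw(xy, z, q, power):
    """Inverse distance weighting prediction at query points q."""
    d = np.linalg.norm(xy[None, :, :] - q[:, None, :], axis=2)
    exact = d < 1e-12                       # queries sitting on data points
    w = 1.0 / np.maximum(d, 1e-12) ** power
    pred = (w * z).sum(axis=1) / w.sum(axis=1)
    hit = exact.any(axis=1)                 # return measured values exactly
    pred[hit] = z[exact.argmax(axis=1)[hit]]
    return pred

def best_power(xy, z, candidates=(1.0, 2.0, 3.0, 4.0)):
    """Leave-one-out cross-validation over the IDW power parameter."""
    errs = []
    for p in candidates:
        e = 0.0
        for i in range(len(z)):
            keep = np.arange(len(z)) != i
            e += (idw(xy[keep], z[keep], xy[i:i + 1], p)[0] - z[i]) ** 2
        errs.append(e)
    return candidates[int(np.argmin(errs))]

rng = np.random.default_rng(1)
xy = rng.random((30, 2))
z = np.sin(3 * xy[:, 0]) + xy[:, 1] ** 2    # a smooth synthetic "field"
print(best_power(xy, z))                    # cross-validated power choice
```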

  4. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l(2)-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
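    The core estimator can be sketched as a weighted ridge regression solved in closed form. This simplified version uses Gaussian weights for the MLS term and omits the paper's manifold-regularization constraint; the window size and penalty are invented for illustration:

```python
import numpy as np

def rllr_estimate(coords, values, query, lam=1e-2, h=1.5):
    """Estimate a value at `query` by local linear regression with moving
    least squares (Gaussian) weights and an l2 (ridge) penalty:
        beta = (X^T W X + lam*I)^(-1) X^T W y
    A simplified sketch; the manifold-regularization term of the paper
    is omitted."""
    X = np.column_stack([np.ones(len(coords)), coords - query])  # local basis
    w = np.exp(-np.sum((coords - query) ** 2, axis=1) / (2 * h ** 2))
    W = np.diag(w)
    A = X.T @ W @ X + lam * np.eye(X.shape[1])
    beta = np.linalg.solve(A, X.T @ W @ values)
    return beta[0]            # intercept = estimated value at the query

# On noiseless linear data the estimator is nearly exact (up to ridge bias).
rng = np.random.default_rng(0)
pts = rng.random((50, 2)) * 4
vals = 2.0 + 0.5 * pts[:, 0] - 0.25 * pts[:, 1]
print(rllr_estimate(pts, vals, np.array([2.0, 2.0])))   # true value is 2.5
```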

  5. Nonlinear Effects in Three-minute Oscillations of the Solar Chromosphere. I. An Analytical Nonlinear Solution and Detection of the Second Harmonic

    NASA Astrophysics Data System (ADS)

    Chae, Jongchul; Litvinenko, Yuri E.

    2017-08-01

    The vertical propagation of nonlinear acoustic waves in an isothermal atmosphere is considered. A new analytical solution that describes a finite-amplitude wave of an arbitrary wavelength is obtained. Although the short- and long-wavelength limits were previously considered separately, the new solution describes both limiting cases within a common framework and provides a straightforward way of interpolating between the two limits. Physical features of the nonlinear waves in the chromosphere are described, including the dispersive nature of low-frequency waves, the steepening of the wave profile, and the influence of the gravitational field on wavefront breaking and shock formation. The analytical results suggest that observations of three-minute oscillations in the solar chromosphere may reveal the basic nonlinear effect of oscillations with combination frequencies, superposed on the normal oscillations of the system. Explicit expressions for a second-harmonic signal and the ratio of its amplitude to the fundamental harmonic amplitude are derived. Observational evidence of the second harmonic, obtained with the Fast Imaging Solar Spectrograph, installed at the 1.6 m New Solar Telescope of the Big Bear Observatory, is presented. The presented data are based on the time variations of velocity determined from the Na I D2 and Hα lines.

  6. A simple-harmonic model for depicting the annual cycle of seasonal temperatures of streams

    USGS Publications Warehouse

    Steele, Timothy Doak

    1978-01-01

    Due to economic or operational constraints, stream-temperature records cannot always be collected at all sites where information is desired or at frequencies dictated by continuous or near-continuous surveillance requirements. For streams where only periodic measurements are made during the year, and that are not appreciably affected by regulation or by thermal loading, a simple harmonic function may adequately depict the annual seasonal cycle of stream temperature at any given site. Resultant harmonic coefficients obtained from available stream-temperature records may be used in the following ways: (1) to interpolate between discrete measurements by solving the harmonic function at specified times, thereby filling in estimates of stream-temperature values; (2) to characterize areal or regional patterns of natural stream-temperature conditions; and (3) to detect and assess any significant change at a site brought about by streamflow regulation or basin development. Moreover, less-than-daily sampling frequencies at a given site may give estimates of annual variation of stream temperatures that are statistically comparable to estimates obtained from a daily or continuous sampling scheme. The latter procedure may result in potential savings of resources in network operations, with negligible loss of information on annual stream-temperature variations. (Woodard-USGS)
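The simple-harmonic model described above amounts to fitting a mean plus one annual sine/cosine pair by least squares, then evaluating the fitted function to interpolate between spot measurements. A minimal sketch with hypothetical monthly measurements (the synthetic data, not any USGS record, define the "truth" here):

```python
import numpy as np

# Hypothetical monthly spot measurements: day of year, water temperature (deg C)
days = np.array([15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])
temps = 12.0 + 8.0 * np.sin(2 * np.pi * (days - 105) / 365.25)  # synthetic "record"

# Fit T(t) = a0 + a1*sin(wt) + a2*cos(wt), the simple-harmonic annual cycle
w = 2 * np.pi / 365.25
G = np.column_stack([np.ones(len(days)), np.sin(w * days), np.cos(w * days)])
a0, a1, a2 = np.linalg.lstsq(G, temps, rcond=None)[0]

amplitude = np.hypot(a1, a2)                  # harmonic amplitude of the cycle
def t_interp(day):                            # interpolate between measurements
    return a0 + a1 * np.sin(w * day) + a2 * np.cos(w * day)

print(round(a0, 2), round(amplitude, 2))      # recovers mean 12.0, amplitude 8.0
```

The harmonic mean and amplitude (and a phase, `arctan2(a2, a1)`) are exactly the coefficients that can then be regionalized or compared between sites.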

  7. Analytic Reflected Lightcurves for Exoplanets

    NASA Astrophysics Data System (ADS)

    Haggard, Hal M.; Cowan, Nicolas B.

    2018-04-01

    The disk-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motion coupled with an inhomogeneous albedo map. We have previously derived analytic reflected lightcurves for spherical harmonic albedo maps in the special case of a synchronously-rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard 2013). In this letter, we present analytic reflected lightcurves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps) and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express a harmonic lightcurve for an arbitrary viewing geometry as a non-linear combination of harmonic lightcurves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected lightcurves, as well as fast calculation of lightcurves for mapping exoplanets based on time-resolved photometry. To these ends we make available Exoplanet Analytic Reflected Lightcurves (EARL), a simple open-source code that allows rapid computation of reflected lightcurves.

  8. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management.

    PubMed

    Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex

    2014-08-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually include many zero pollution-concentration values from the clean parts of the aquifer, but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of the mapping in a way suited to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli shoreline. The implications for aquifer management are discussed.
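The local IDW scheme with inclusion zones can be sketched as follows. This is a minimal circular-zone version with synthetic values; the elliptical variant and the cross-testing machinery described above are omitted.

```python
import numpy as np

def idw_local(xy_obs, z_obs, xy_grid, radius=1.0, power=2.0):
    """Inverse distance weighting with a circular inclusion zone.

    Grid points with no observation inside `radius` are left unassigned
    (NaN), trading coverage for accuracy as in the described methodology.
    """
    out = np.full(len(xy_grid), np.nan)
    for i, p in enumerate(xy_grid):
        d = np.linalg.norm(xy_obs - p, axis=1)
        inside = d < radius
        if not inside.any():
            continue                         # outside every inclusion zone
        if d[inside].min() < 1e-12:          # grid point sits on an observation
            out[i] = z_obs[inside][np.argmin(d[inside])]
        else:
            w = 1.0 / d[inside] ** power
            out[i] = np.sum(w * z_obs[inside]) / np.sum(w)
    return out

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
conc = np.array([0.0, 100.0, 0.0])           # two clean wells, one polluted
grid = np.array([[0.5, 0.0], [5.0, 5.0]])
print(idw_local(obs, conc, grid, radius=1.0))  # first point 50.0, second unassigned (nan)
```

Enlarging `radius` raises coverage (fewer NaNs) at the cost of smearing distant observations into each estimate, which is exactly the trade-off the abstract describes.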

  9. Harmonic elastic inclusions in the presence of point moment

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Schiavone, Peter

    2017-12-01

    We employ conformal mapping techniques to design harmonic elastic inclusions when the surrounding matrix is simultaneously subjected to remote uniform stresses and a point moment located at an arbitrary position in the matrix. Our analysis indicates that the uniform and hydrostatic stress field inside the inclusion as well as the constant hoop stress along the entire inclusion-matrix interface (on the matrix side) are independent of the action of the point moment. In contrast, the non-elliptical shape of the harmonic inclusion depends on both the remote uniform stresses and the point moment.

  10. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments and methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  11. A practical implementation of wave front construction for 3-D isotropic media

    NASA Astrophysics Data System (ADS)

    Chambers, K.; Kendall, J.-M.

    2008-06-01

    Wave front construction (WFC) methods are a useful tool for tracking wave fronts and are a natural extension of standard ray shooting methods. Here we describe and implement a simple WFC method that is used to interpolate wavefield properties throughout a 3-D heterogeneous medium. Our approach differs from previous 3-D WFC procedures primarily in its use of a ray interpolation scheme based on approximating the wave front as a `locally spherical' surface, and of a `first arrival mode', which reduces computation times when only first arrivals are required. Both of these features have previously been included in 2-D WFC algorithms; however, until now they have not been extended to 3-D systems. The wave front interpolation scheme allows rays to be traced from a nearly arbitrary distribution of take-off angles, and the calculation of derivatives with respect to take-off angles is not required for wave front interpolation. However, in regions of steep velocity gradient, the locally spherical approximation is not valid, and it is necessary to backpropagate rays to a sufficiently homogeneous region before interpolation of the new ray. Our WFC technique is illustrated using a realistic velocity model, based on a North Sea oil reservoir. We examine wavefield quantities such as traveltimes, ray angles, source take-off angles and geometrical spreading factors, all of which are interpolated on to a regular grid. We compare geometrical spreading factors calculated using two methods: using the ray Jacobian, and by taking the ratio of a triangular area of wave front to the corresponding solid angle at the source. The results show that care must be taken when using ray Jacobians to calculate geometrical spreading factors, as the poles of the source coordinate system produce unreliable values, which can be spread over a large area, as only a few initial rays are traced in WFC.
We also show that the use of the first arrival mode can reduce computation time by ~65 per cent, with the accuracy of the interpolated traveltimes, ray angles and source take-off angles largely unchanged. However, the first arrival mode does lead to inaccuracies in interpolated angles near caustic surfaces, as well as small variations in geometrical spreading factors for ray tubes that have passed through caustic surfaces.
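The second way of computing geometrical spreading mentioned above, the ratio of a triangular patch of wave front to the solid angle it subtends at the source, can be sketched as follows. The solid-angle expression is the standard Van Oosterom-Strackee formula, and the check assumes a homogeneous medium, where spreading grows as r**2; none of the numbers come from the paper's reservoir model.

```python
import numpy as np

def solid_angle(u1, u2, u3):
    """Solid angle of the spherical triangle spanned by three unit take-off
    vectors (Van Oosterom & Strackee formula)."""
    num = np.dot(u1, np.cross(u2, u3))
    den = 1.0 + np.dot(u1, u2) + np.dot(u2, u3) + np.dot(u3, u1)
    return abs(2.0 * np.arctan2(num, den))

def spreading_factor(p1, p2, p3, u1, u2, u3):
    """Geometrical spreading as the ratio of a triangular patch of wave
    front (vertices p1..p3) to the solid angle it subtends at the source."""
    area = 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
    return area / solid_angle(u1, u2, u3)

# Homogeneous-medium check: after travel distance r, spreading grows as r**2
r = 1000.0
u = [np.array(v) / np.linalg.norm(v)
     for v in ([0.01, 0.0, 1.0], [0.0, 0.01, 1.0], [-0.01, 0.0, 1.0])]
p = [r * ui for ui in u]
print(spreading_factor(*p, *u) / r**2)   # ~1 for a narrow ray tube
```

Because the ratio uses a whole ray-tube triangle rather than per-ray Jacobians, it avoids the coordinate-pole problem discussed above.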

  12. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and their calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight is presented here into the propagation characteristics of the uncertainty that develops when calibration equations are determined using the Lagrange interpolation or least-squares fitting method, with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions for the propagated uncertainties of both fitting methods are derived in terms of the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be kept well below the uncertainty of the calibration points.
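As a numerical illustration of propagating a calibration uncertainty through a fitted NTC equation, the sketch below least-squares fits the common Steinhart-Hart form (one of the calibration equations in general use, not necessarily the authors' exact set) to synthetic calibration points and propagates a resistance uncertainty into temperature:

```python
import numpy as np

# Hypothetical calibration points: temperature (K) and NTC resistance (ohm),
# generated from a beta-model thermistor (10 kohm at 25 C, beta = 3950 K)
T_cal = np.array([273.15, 283.15, 293.15, 303.15, 313.15, 323.15])
R_cal = 1e4 * np.exp(3950.0 * (1.0 / T_cal - 1.0 / 298.15))

# Least-squares fit of the Steinhart-Hart form 1/T = a + b*lnR + c*(lnR)^3
L = np.log(R_cal)
G = np.column_stack([np.ones_like(L), L, L**3])
coef, *_ = np.linalg.lstsq(G, 1.0 / T_cal, rcond=None)

def T_fit(R):
    l = np.log(R)
    return 1.0 / (coef[0] + coef[1] * l + coef[2] * l**3)

def u_T(R, u_R):
    """First-order propagation of a resistance uncertainty u_R into T,
    using a numerical central difference for dT/dR."""
    dTdR = (T_fit(R * 1.001) - T_fit(R * 0.999)) / (0.002 * R)
    return abs(dTdR) * u_R

print(round(T_fit(1.0e4), 2), round(u_T(1.0e4, 10.0), 3))
```

Evaluating `u_T` on a grid of resistances inside and outside [273.15, 323.15] K reproduces the qualitative behaviour described above: flat within the calibration range, rising rapidly beyond it.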

  13. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    NASA Astrophysics Data System (ADS)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
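The peak-refinement step described above, interpolating a second-order polynomial through a trio of samples when the middle value exceeds its neighbours, can be written out directly. This is a generic implementation of that interpolation step, with `dx` the grid spacing; the paper's additional condition for extra maxima is not included.

```python
def parabolic_peak(g_left, g_mid, g_right, dx=1.0):
    """Location and magnitude of the maximum of the parabola through three
    equally spaced horizontal-gradient samples (the Blakely & Simpson, 1986,
    step), valid when the middle value exceeds both neighbours."""
    denom = g_left - 2.0 * g_mid + g_right
    x_max = 0.5 * dx * (g_left - g_right) / denom   # offset from the middle sample
    a = 0.5 * denom / dx**2                         # parabola curvature
    b = 0.5 * (g_right - g_left) / dx               # parabola slope at the middle
    g_max = g_mid + b * x_max + a * x_max**2
    return x_max, g_max

# samples of g(x) = 5 - (x - 0.3)**2 at x = -1, 0, 1 relative to the centre
x0, gmax = parabolic_peak(5 - (-1 - 0.3)**2, 5 - 0.3**2, 5 - (1 - 0.3)**2)
print(x0, gmax)   # recovers the peak at 0.3 with magnitude 5.0 (exact for a quadratic)
```

Running this in both grid directions and keeping the locations where the middle-point condition holds reproduces the boundary-point picking of the original algorithm.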

  14. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. First, it searches for the maximum power point with the P&O algorithm and quadratic interpolation; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in the MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that obtained using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the method copes with the voltage fluctuation of the AETEG better than the P&O algorithm alone, and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
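The two search stages can be sketched as follows; the toy power curve, step size and function names are illustrative, not taken from the paper or its Simulink model:

```python
def p_and_o_step(v, p, v_prev, p_prev, dv=0.1):
    """One perturb-and-observe step: move the operating voltage in the
    direction that increased power on the previous perturbation."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + dv          # keep climbing
    return v - dv              # reverse direction

def quad_refine(v1, p1, v2, p2, v3, p3):
    """Quadratic-interpolation refinement of the maximum-power voltage
    from three bracketing (voltage, power) samples."""
    num = (v2**2 - v3**2) * p1 + (v3**2 - v1**2) * p2 + (v1**2 - v2**2) * p3
    den = (v2 - v3) * p1 + (v3 - v1) * p2 + (v1 - v2) * p3
    return 0.5 * num / den

# toy AETEG output curve p(v) with its maximum at v = 6.0 V
p = lambda v: -(v - 6.0)**2 + 36.0
v_mpp = quad_refine(4.0, p(4.0), 6.5, p(6.5), 8.0, p(8.0))
print(v_mpp)   # 6.0 (exact, since the toy power curve is itself parabolic)
```

In the hybrid scheme, `v_mpp` found this way would then be held by the constant-voltage-tracking loop until conditions change enough to restart the search.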

  15. Comparison of fine structures of electron cyclotron harmonic emissions in aurora

    NASA Astrophysics Data System (ADS)

    LaBelle, J.; Dundek, M.

    2015-10-01

    Recent discoveries of higher harmonic cyclotron emissions in aurora occurring under daylight conditions motivated the modification of radio receivers at South Pole Station, Antarctica, to measure fine structure of such emissions during two consecutive austral summers, 2013-2014 and 2014-2015. The experiment recorded 347 emission events over 376 days of observation. The seasonal distribution of these events reveals that successively higher harmonics require higher solar zenith angles for occurrence, as expected if they are generated at the matching condition fuh = Nfce, which for higher N requires higher electron densities which are associated with higher solar zenith angles. This result implies that generation of higher harmonics from lower harmonics via wave-wave processes explains only a minority of events. Detailed examination of 21 cases in which two harmonics occur simultaneously shows that in almost all events the higher harmonic comes from higher altitudes, and only for a small fraction of events is it plausible that the frequencies of the fine structures of the emissions are correlated and in exact integer ratio. This observation puts an upper bound of 15-20% on the fraction of emissions which can be explained by wave-wave interactions involving Z mode waves at fce and, combined with consideration of source altitudes, puts an upper bound of 75% on the fraction explained by coalescence of Z mode waves at 2fce. Taken together, these results suggest that the dominant mechanism for the higher harmonics is independent generation at the matching points fuh = Nfce and that the wave-wave interaction mechanisms explain a relatively small fraction of events.

  16. Interpreting the Geochemistry of the Northern Peninsular Ranges Batholith Using Principal Component Analysis and Spatial Interpolation

    NASA Astrophysics Data System (ADS)

    Pompe, L.; Clausen, B. L.; Morton, D. M.

    2014-12-01

    The Cretaceous northern Peninsular Ranges batholith (PRB) exemplifies emplacement in a combined oceanic arc/continental margin arc setting. Two approaches that can aid in understanding its statistical and spatial geochemical variation are principal component analysis (PCA) and GIS interpolation mapping. The data analysis primarily used 287 samples from the large granitoid geochemical data set systematically collected by Baird and Welday. Of these, 80 points fell in the western Santa Ana block, 108 in the transitional Perris block, and 99 in the eastern San Jacinto block. In the statistical analysis, multivariate outliers were identified using the Mahalanobis distance and excluded. A centered log-ratio transformation was used to facilitate working with geochemical concentration values that range over many orders of magnitude. The data were then analyzed using PCA with IBM SPSS 21, reducing 40 geochemical variables to 4 components approximately related to the compatible, HFS, HRE, and LIL elements. The 4 components were interpreted as follows: (1) compatible [and negatively correlated incompatible] elements indicate extent of differentiation as typified by SiO2, (2) HFS elements indicate crustal contamination as typified by Sri and Nb/Yb ratios, (3) HRE elements indicate source depth as typified by Sr/Y and Gd/Yb ratios, and (4) LIL elements indicate alkalinity as typified by the K2O/SiO2 ratio. Spatial interpolation maps of the 4 components were created with Esri ArcGIS for Desktop 10.2 by interpolating between the sample points using kriging and inverse distance weighting. Across-arc trends on the interpolation maps indicate a general increase from west to east for each of the 4 components, but with local exceptions as follows. The 15 km offset on the San Jacinto Fault may be affecting the contours. South of San Jacinto is a west-east band of low Nb/Yb, Gd/Yb, and Sr/Y ratios.
The highest Sr/Y ratios in the north central area that decrease further east may be due to the far eastern granitoids being transported above a shear zone. Along the western edge of the PRB, high SiO2 and K2O/SiO2 are interpreted to result from sampling shallow levels in the batholith (2-3 kb), as compared to deeper levels in the central (5-6 kb) and eastern (4.5 kb) areas.
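The preprocessing and decomposition steps above, a centered log-ratio transform followed by PCA, can be sketched with synthetic data. The study used IBM SPSS 21; the version below is a generic SVD-based equivalent, and the random "concentrations" are purely illustrative.

```python
import numpy as np

def clr(X):
    """Centered log-ratio transform for compositional (concentration) data:
    log concentrations minus each sample's mean log concentration."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def pca(X, n_components=2):
    """PCA via SVD of the column-centered data matrix; returns the sample
    scores and the component loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return scores, Vt[:n_components]

rng = np.random.default_rng(0)
conc = np.exp(rng.normal(size=(50, 5)))      # toy "element concentrations", spanning decades
scores, loadings = pca(clr(conc), n_components=2)
print(scores.shape, loadings.shape)          # (50, 2) (2, 5)
```

The scores (one row per sample) are what would then be interpolated spatially by kriging or IDW; the loadings show which elements dominate each component.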

  17. Parametric phase conjugation for the second harmonic of a nonlinear ultrasonic beam

    NASA Astrophysics Data System (ADS)

    Brysev, A. P.; Bunkin, F. V.; Hamilton, M. F.; Klopotov, R. V.; Krutyanskii, L. M.; Yan, K.

    2003-01-01

    The effect of phase conjugation for the second harmonic of a focused ultrasonic beam was investigated experimentally and by numerical simulation. An ultrasonic pulse with the carrier frequency f=3 MHz was emitted into water and focused at a point between the source and the phase conjugating system. The phase conjugation for the second harmonic of the incident wave (2 f=6 MHz) was performed in a magnetostrictive ceramic as a result of the parametric interaction of the incident wave with the pumping magnetic field (the pumping frequency was f p=4 f=12 MHz). The axial and focal distributions of sound pressure in the incident and conjugated beams were measured using a broadband PVDF membrane hydrophone. The corresponding calculations were performed by solving numerically the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation allowing for the nonlinearity, diffraction, and thermoviscous absorption. The results of measurements agreed well with the calculations and showed that the field of a conjugate wave adequately reproduces the field of the second harmonic of the incident wave. A certain advantage of focusing with the phase conjugation for the second harmonic was demonstrated in comparison with the operation at the doubled frequency of the incident wave. The results of this study can serve as a basis for the utilization of the phase conjugation of harmonics in ultrasonic tomography and nondestructive testing.

  18. Atomic-scale origin of dynamic viscoelastic response and creep in disordered solids

    NASA Astrophysics Data System (ADS)

    Milkus, Rico; Zaccone, Alessio

    2017-02-01

    Viscoelasticity has been described since the time of Maxwell as an interpolation of purely viscous and purely elastic response, but its microscopic atomic-level mechanism in solids has remained elusive. We studied three model disordered solids: a random lattice, the bond-depleted fcc lattice, and the fcc lattice with vacancies. Within the harmonic approximation for central-force lattices, we applied sum rules for viscoelastic response derived on the basis of nonaffine atomic motions. The latter motions are a direct result of local structural disorder, and in particular, of the lack of inversion symmetry in disordered lattices. By defining a suitable quantitative and general atomic-level measure of nonaffinity and inversion symmetry, we show that the viscoelastic responses of all three systems collapse onto a master curve upon normalizing by the overall strength of inversion-symmetry breaking in each system. Close to the isostatic point for central-force lattices, power-law creep G(t) ~ t^{-1/2} emerges as a consequence of the interplay between soft vibrational modes and nonaffine dynamics, and various analytical scalings, supported by numerical calculations, are predicted by the theory.

  19. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method.

    PubMed

    Sun, Weifang; Yao, Bin; He, Yuchao; Chen, Binqiang; Zeng, Nianyin; He, Wangpeng

    2017-08-09

    Power generation using waste gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth in bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information from neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of some interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method has suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at an early stage.
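A generic spectral-correction step of the kind discussed, estimating a tone's frequency beyond FFT-bin resolution by interpolating neighbouring bins, can be sketched as follows. This is the standard parabolic interpolation of log-magnitude bins under a Hann window, not the paper's active frequency shift method.

```python
import numpy as np

def interp_tone(sig, fs):
    """Estimate the frequency of the dominant harmonic tone by parabolic
    interpolation of the log-magnitude FFT bins around the spectral peak."""
    X = np.fft.rfft(sig * np.hanning(len(sig)))
    mag = np.abs(X)
    k = int(np.argmax(mag[1:-1])) + 1              # coarse peak bin
    a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)        # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / len(sig)

fs, n = 1000.0, 4096
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 123.4 * t)                # tone deliberately off the FFT grid
print(round(interp_tone(sig, fs), 2))              # close to 123.4 Hz, well below the 0.24 Hz bin width
```

Amplitude and phase can be corrected from the same interpolated peak location, which is what makes such methods usable for tracking harmonic tones of rotating machinery.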

  20. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    PubMed

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately, according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on both simulated and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  1. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for the implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
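The idea that the kernel (membership function) choice determines the interpolator can be sketched in one dimension with triangular (tent) membership functions, which reduce the normalized weighted sum to ordinary linear interpolation. Uniform knots are assumed, and this is only an illustration, not the paper's full Takagi-Sugeno machinery.

```python
import numpy as np

def tent_weights(x, knots):
    """Triangular membership functions on a uniform 1-D knot grid; with
    these kernels the normalized weighted sum is linear interpolation."""
    w = np.clip(1.0 - np.abs(x - knots) / np.diff(knots).mean(), 0.0, 1.0)
    return w / w.sum()

knots = np.array([0.0, 1.0, 2.0, 3.0])
values = np.array([0.0, 10.0, 0.0, 5.0])      # outputs at the knots
print(tent_weights(1.25, knots) @ values)     # 7.5, i.e. linear between 10 and 0
```

Swapping the tent kernel for a cubic or Lanczos kernel changes the interpolation character without changing the surrounding machinery, which is the flexibility the abstract describes.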

  2. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal and empirical Bayesian kriging, and co-kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of the uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-kriging incorporating secondary data (e.g. topography, river levels).

  3. Applicability of Various Interpolation Approaches for High Resolution Spatial Mapping of Climate Data in Korea

    NASA Astrophysics Data System (ADS)

    Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S.

    2018-04-01

    The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing the results with the forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from the KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Among the observation data, 80 percent of the total points (478) and the remaining 120 points were used for the interpolations and for validation, respectively. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.

  4. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
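An example-based interpolator with the properties listed above, exact at every example and smooth in between, can be sketched with Gaussian radial basis functions. The Gaussian kernel is a common choice for such schemes; the paper's specific EBI kernel may differ, and the 1-D scalar examples stand in for the viewpoint-to-image function.

```python
import numpy as np

def ebi_fit(x_ex, f_ex, sigma=1.0):
    """Fit an RBF interpolant through the example pairs: one Gaussian
    kernel per example, weights solved so every example is matched."""
    K = np.exp(-(x_ex[:, None] - x_ex[None, :])**2 / (2 * sigma**2))
    return np.linalg.solve(K, f_ex)

def ebi_eval(x, x_ex, w, sigma=1.0):
    """Evaluate the interpolant at new inputs x."""
    K = np.exp(-(x[:, None] - x_ex[None, :])**2 / (2 * sigma**2))
    return K @ w

x_ex = np.array([0.0, 1.0, 2.0, 3.0])      # example inputs ("viewpoints")
f_ex = np.array([0.0, 1.0, 0.0, -1.0])     # example outputs
w = ebi_fit(x_ex, f_ex)
# every given example is reproduced exactly; in between, the fit is smooth
print(np.allclose(ebi_eval(x_ex, x_ex, w), f_ex))   # True
```

The kernel-matrix solve is what enforces exact reproduction of the examples; smoothness between them comes from the kernel shape.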

  5. Extraction and Use of Noise Models from Production-Mode Transient Electromagnetic Data

    NASA Astrophysics Data System (ADS)

    Rasmussen, S.; Nyboe, N. S.; Larsen, J. J.

    2016-12-01

    In the interpretation of data acquired using the Transient Electromagnetic Method (TEM), noise in the measurements from external sources, such as the power grid, spherics and radio transmitters, and from internal sources in the TEM system itself is unavoidable. This noise lowers the data quality, and it is therefore desirable to know the noise conditions. Typically, the noise spectrum is measured one or more times during a survey with the transmitter turned off, i.e. with no TEM signal present. In production-mode, when pulses of alternating sign are continually transmitted, the TEM signal contributes powerful, narrow spikes to the spectrum at the odd harmonics of the waveform repetition rate. In between these TEM spikes, the noise spectrum is preserved. Using a simple interpolation method and an appropriate spectral estimation method, we show how to recover an estimate of the clean noise spectrum from short intervals of production-mode data. The resulting estimate can be used for tailoring the data acquisition strategy in the field to the prevailing conditions, specifically the gating scheme, stacking scheme and repetition rate, such that less noise enters the measurements. Another application is in the interpretation phase, where the noise level for each gate can be computed and used as input to the inversion code.
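    A minimal sketch of the spike-removal idea, on a synthetic spectrum and with plain linear interpolation standing in for the paper's spectral estimation method (the repetition rate, bin widths and spike model here are illustrative assumptions):

```python
import numpy as np

fs = 4000.0                        # sampling rate (Hz), assumed for the sketch
f = np.fft.rfftfreq(4096, 1.0 / fs)
rep = 25.0                         # waveform repetition rate (Hz), assumed
noise_psd = 1.0 / (1.0 + f)        # smooth synthetic noise background
psd = noise_psd.copy()

# TEM signal: narrow spikes at the odd harmonics of the repetition rate
odd_harmonics = np.arange(rep, fs / 2, 2.0 * rep)
spike_bins = np.searchsorted(f, odd_harmonics)
psd[spike_bins] += 100.0

# mask a few bins around each spike and interpolate the noise across the gaps
mask = np.zeros(psd.size, dtype=bool)
for b in spike_bins:
    mask[max(b - 2, 0):b + 3] = True
clean = psd.copy()
clean[mask] = np.interp(f[mask], f[~mask], psd[~mask])
```

    Because the noise background varies slowly between the spikes, interpolating across the masked bins recovers it almost exactly in this synthetic example.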

  6. Finite-fault inversion of the Mw 5.9 2012 Emilia-Romagna earthquake (Northern Italy) using aftershocks as near-field Green's function approximations

    NASA Astrophysics Data System (ADS)

    Causse, Mathieu; Cultrera, Giovanna; Herrero, André; Courboulex, Françoise; Schiappapietra, Erika; Moreau, Ludovic

    2017-04-01

    On May 29, 2012, a Mw 5.9 earthquake occurred in the Emilia-Romagna region (Po Plain) on a thrust fault system. This shock, as well as hundreds of aftershocks, was recorded by 10 strong motion stations located less than 10 km away from the rupture plane, with 4 stations located within the surface rupture projection. The Po Plain is a very large EW-trending syntectonic alluvial basin, delimited by the Alps and Apennines chains to the north and south. The Plio-Quaternary sedimentary sequence filling the Po Plain is characterized by an uneven thickness, ranging from several thousands of meters to a few tens of meters. This particular context results in basin resonance below 1 Hz and strong surface waves, which makes it particularly difficult to model wave propagation and hence to obtain robust images of the rupture propagation. This study proposes to take advantage of the large set of recorded aftershocks, considered as point sources, to model wave propagation. Due to the heterogeneous distribution of the aftershocks on the fault plane, an interpolation technique is proposed to compute an approximation of the Green's function between each fault point and each strong motion station in the frequency range [0.2-1 Hz]. We then use a Bayesian inversion technique (Markov chain Monte Carlo algorithm) to obtain images of the rupture propagation from the strong motion data. We propose to retrieve the slip distribution by inverting the final slip value at some control points, which are allowed to move on the fault plane, and by interpolating the slip value between these points. We show that the use of 5 control points to describe the slip, coupled with the hypothesis of spatially constant rupture velocity and rise-time (i.e., 18 free source parameters), results in a good level of fit with the data. This indicates that despite their complexity, the strong motion data can be properly modeled up to 1 Hz using a relatively simple rupture.
The inversion results also reveal that the rupture propagated slowly, at a speed of about 45% of the shear wave velocity.

  7. Kinetic Simulations of Type II Radio Burst Emission Processes

    NASA Astrophysics Data System (ADS)

    Ganse, U.; Spanier, F. A.; Vainio, R. O.

    2011-12-01

    The fundamental emission process of Type II radio bursts has been under discussion for many decades. While analytic deliberations point to three-wave interaction as the source of fundamental and harmonic radio emissions, sparse in-situ observational data and the high computational demands of kinetic simulations have not allowed a definite conclusion to be reached. A popular model puts the radio emission into the foreshock region of a coronal mass ejection's shock front, where shock drift acceleration can create electron beam populations in the otherwise quiescent foreshock plasma. Beam-driven instabilities are then assumed to create waves, forming the starting point of three-wave interaction processes. Using our kinetic particle-in-cell code, we have studied a number of emission scenarios based on electron beam populations in a CME foreshock, with a focus on wave-interaction microphysics on kinetic scales. The self-consistent, fully kinetic simulations with the physical mass ratio show fundamental and harmonic emission of transverse electromagnetic waves and allow for detailed statistical analysis of all contributing wave modes and their couplings.

  8. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points, plus the error arising from the interpolation process. The SDE must be calculated beforehand from a suitable number of check points located in open terrain, and it is assumed that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located in the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in very good agreement between predicted and observed data (R2 = 0.9856; p < 0.001).
In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
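    A brief sketch of two ingredients named above, on assumed synthetic inputs: IDW with the local support of the five closest neighbours, and the propagation of an (assumed independent and identical) per-point error sigma through the normalized interpolation weights:

```python
import numpy as np

def idw_knn(xy_obs, z_obs, xy_query, k=5, power=2.0, eps=1e-12):
    """IDW prediction using only the k closest observations per query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]                 # k nearest per query
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps) ** power
    w /= w.sum(axis=1, keepdims=True)                  # normalized weights
    return (w * z_obs[idx]).sum(axis=1), w

rng = np.random.default_rng(1)
pts = rng.uniform(0, 50, size=(200, 2))       # synthetic LiDAR ground points (m)
elev = 100.0 + 0.10 * pts[:, 0] + 0.05 * pts[:, 1]
query = np.array([[25.0, 25.0]])

z_hat, w = idw_knn(pts, elev, query)
sigma = 0.10                                  # assumed per-point vertical SDE (m)
sde = sigma * np.sqrt((w ** 2).sum(axis=1))   # error propagated to the query point
```

    Because the weights are normalized, the propagated error lies between sigma/sqrt(5) (five equidistant neighbours) and sigma (one dominant neighbour).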

  9. Oversampling of digitized images. [effects on interpolation in signal processing

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
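    The recommended route, interpolating via the Sampling Theorem rather than by oversampling, can be illustrated with Whittaker-Shannon sinc interpolation on a synthetic band-limited signal (a sketch, not the paper's own procedure):

```python
import numpy as np

def sinc_interp(samples, t_sample, t_query):
    """Whittaker-Shannon interpolation of uniformly sampled, band-limited data."""
    T = t_sample[1] - t_sample[0]
    return np.array([np.sum(samples * np.sinc((tq - t_sample) / T))
                     for tq in t_query])

T = 0.1                                  # sampling interval (s), i.e. a 10 Hz rate
t = np.arange(0.0, 10.0, T)
x = np.sin(2 * np.pi * 0.5 * t)          # 0.5 Hz tone, well below Nyquist
t_fine = np.arange(2.0, 8.0, 0.025)      # interior points, away from edge effects
x_fine = sinc_interp(x, t, t_fine)
```

    The reconstruction is exact at the sample instants and accurate in between; errors appear only near the ends of the finite record, where the infinite sinc expansion is truncated.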

  10. Edge directed image interpolation with Bamberger pyramids

    NASA Astrophysics Data System (ADS)

    Rosiles, Jose Gerardo

    2005-08-01

    Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis. They provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes like bilinear and bicubic interpolation from both the visual and numerical points of view.

  11. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    NASA Astrophysics Data System (ADS)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a HydroStar 4300 single-beam sonar and GPS devices, namely an Ashtech ProMark 500 base and a Thales Z-Max® rover. A total of 12,851 points were gathered. In order to obtain the continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in areas that were not directly measured, using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana; and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was the multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error of both methods was less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.
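    As an illustration of the best-performing deterministic method, a minimal multiquadric RBF interpolator on synthetic soundings (the shape parameter c and the depth model below are assumptions for the sketch, not survey values):

```python
import numpy as np

def multiquadric_interp(xy_obs, z_obs, xy_query, c=1.0):
    """Multiquadric RBF: s(x) = sum_j w_j * sqrt(||x - x_j||^2 + c^2)."""
    def phi(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
        return np.sqrt(d2 + c * c)
    w = np.linalg.solve(phi(xy_obs, xy_obs), z_obs)    # collocation weights
    return phi(xy_query, xy_obs) @ w

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1000, size=(40, 2))                  # synthetic soundings (m)
depth = 2.0 + 0.003 * xy[:, 0] + np.sin(xy[:, 1] / 200.0)

grid = np.column_stack([np.linspace(0, 1000, 11), np.full(11, 500.0)])
z_grid = multiquadric_interp(xy, depth, grid)            # depths along a transect
```

    Because the weights solve the collocation system exactly, the surface reproduces every sounding; in practice the shape parameter (and any added trend terms) must be tuned to the data.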

  12. Rtop - an R package for interpolation of data with a variable spatial support - examples from river networks

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto

    2013-04-01

    Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy-to-apply, open-source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we here present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for the analysis of spatial objects (Bivand et al. 2008), and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been for interpolation along river networks. We present some examples showing how the package can easily be used for such interpolation. The package will soon be uploaded to CRAN, but is in the meantime also available from R-Forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied spatial data analysis with R. Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40 (1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. 
Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  13. Three-dimensional data interpolation for environmental purpose: lead in contaminated soils in southern Brazil.

    PubMed

    Piedade, Tales Campos; Melo, Vander Freitas; Souza, Luiz Cláudio Paula; Dieckow, Jeferson

    2014-09-01

    Monitoring of the heavy metal contamination plume in soils can be helpful in establishing strategies to minimize its hazardous impacts on the environment. The objective of this study was to apply a new visualization approach, based on three-dimensional (3D) images, to pseudo-total (extracted with concentrated acids) and exchangeable (extracted with 0.5 mol L(-1) Ca(NO3)2) lead (Pb) concentrations in soils of a mining and metallurgy area, in order to determine the spatial distribution of this pollutant and to estimate the most contaminated soil volumes. Three-dimensional images were obtained after interpolation of Pb concentrations of 171 soil samples (57 points × 3 depths) with regularized spline with tension in its 3D version. The 3D visualization showed great potential for use in environmental studies; it made it possible to determine the spatial 3D distribution of the Pb contamination plume in the area and to establish relationships with soil characteristics, landscape, and pollution sources. The most contaminated soil volumes (10,001 to 52,000 mg Pb kg(-1)) occurred near the metallurgy factory. The main contamination sources were attributed to atmospheric emissions of particulate Pb through chimneys. The large soil volume estimated to require removal to industrial landfills or co-processing evidenced the difficulties related to this practice as a remediation strategy.

  14. Software for C1 interpolation

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1977-01-01

    The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
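    For orientation only: the simplest interpolant on a triangular cell is linear in barycentric coordinates, which gives C^0 continuity (the algorithm above additionally uses the estimated partial derivatives to reach C^1). A sketch with an illustrative triangle and vertex values:

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates of point p inside triangle tri (3x2 vertex array)."""
    a, b, c = tri
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l2, l3 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - l2 - l3, l2, l3])

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 3.0, 5.0])              # surface values at the three vertices
w = barycentric_weights([0.25, 0.25], tri)
value = float(w @ z)                        # linear interpolant at the query point
```

    The weights sum to one and reproduce any linear surface exactly; a C^1 scheme blends such cells so that gradients also match across shared edges.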

  15. Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes

    NASA Astrophysics Data System (ADS)

    Pasquetti, Richard; Rapetti, Francesca

    2017-10-01

    In a recent JCP paper [9], a higher-order triangular spectral element method (TSEM) is proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e. for the QSEM, the set of points is simply obtained by tensorial product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, finding such an appropriate set of points is however not trivial. Thus, the work of [9] follows earlier works that started in the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see e.g. [12], which makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of any nodal hp-finite element method, one may expect that the conclusions of this Note will remain relevant if using other sets of carefully defined interpolation points.

  16. The development of magnetic field measurement system for drift-tube linac quadrupole

    NASA Astrophysics Data System (ADS)

    Zhou, Jianxin; Kang, Wen; Yin, Baogui; Peng, Quanling; Li, Li; Liu, Huachang; Gong, Keyun; Li, Bo; Chen, Qiang; Li, Shuai; Liu, Yiqin

    2015-06-01

    In the China Spallation Neutron Source (CSNS) linac, a conventional 324 MHz drift-tube linac (DTL) accelerating an H- ion beam from 3 MeV to 80 MeV has been designed and manufactured. Electromagnetic quadrupoles (EMQs) are widely used in a DTL accelerator. The main challenge of the DTL quadrupole (DTLQ) structure is to house a strong-gradient EMQ in the much-reduced space of the drift tube (DT). To verify the DTLQ design specifications and fabrication quality, a precision harmonic coil measurement system has been developed, based on a high-precision movement platform, a harmonic coil with a ceramic frame, and a special method for making the harmonic coil and the quadrupoles coaxial. After more than one year of continuous running, the magnetic field measurement system still performs accurately and stably. The field measurement of more than one hundred DTLQs has been completed. The components and function of the measurement system, the key points of the technology, and the repeatability of the measurement results are described in this paper.

  17. A modified dual-level algorithm for large-scale three-dimensional Laplace and Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Li, Junpu; Chen, Wen; Fu, Zhuojia

    2018-01-01

    A modified dual-level algorithm is proposed in this article. With the help of the dual-level structure, the fully populated interpolation matrix on the fine level is transformed into a locally supported sparse matrix, which alleviates the severe ill-conditioning and excessive storage requirements of the fully populated interpolation matrix. The kernel-independent fast multipole method is adopted to expedite the solution of the linear equations on the coarse level. Numerical experiments with up to 2 million fine-level nodes have been achieved successfully. It is noted that the proposed algorithm merely needs to place 2-3 coarse-level nodes per wavelength in each direction to obtain a reasonable solution, which is close to the minimum requirement allowed by Shannon's sampling theorem. In a real human head model example, it is observed that the proposed algorithm can simulate the computationally very challenging exterior high-frequency harmonic acoustic wave propagation up to 20,000 Hz.

  18. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and to annual average concentrations in 2001. The patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima, reflecting the combined effects of chemical processes, topography and meteorology.
In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced but these uncertainties vary widely from area to area. In view of the uncertainties with classical techniques research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.

  19. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.

  20. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data.
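    Of the five techniques, the Gaussian fit is the one tailored to the severely undersampled first dimension; a minimal sketch (a three-point log-parabola fit through the peak maximum, on a synthetic noise-free peak, with illustrative names) shows how it recovers a sub-sample peak position:

```python
import numpy as np

def gaussian_peak_position(t, y):
    """Estimate a peak centre from three samples by fitting a Gaussian
    (a parabola in log space) through the maximum and its two neighbours."""
    i = int(np.argmax(y))
    ym, y0, yp = np.log(y[i - 1]), np.log(y[i]), np.log(y[i + 1])
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)   # sub-sample offset
    return t[i] + delta * (t[1] - t[0])

# undersampled first-dimension chromatogram: true Gaussian centred at 5.3 min
t = np.arange(0.0, 10.0, 1.0)
y = np.exp(-0.5 * ((t - 5.3) / 1.2) ** 2)
est = gaussian_peak_position(t, y)
```

    For a noise-free Gaussian the three-point fit is exact, since the log intensity is quadratic in time; with noise it degrades gracefully, which is consistent with its strong performance on the simulated data.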

  1. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
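    The core computation, a GP posterior whose variance grows with distance from the base grid, can be sketched in 1-D with a squared-exponential kernel (the kernel choice and hyperparameters here are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def gp_interp(x_obs, y_obs, x_query, ell=1.0, sigma_n=1e-3):
    """Posterior mean and variance of a GP with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_obs, x_obs) + sigma_n ** 2 * np.eye(len(x_obs))
    Ks = k(x_query, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

x_grid = np.arange(0.0, 6.0)          # base image grid (1-D for illustration)
intens = np.sin(x_grid)               # intensities observed on the grid
xq = np.array([2.0, 2.5])             # on-grid vs off-grid resampling points
mean, var = gp_interp(x_grid, intens, xq)
```

    The posterior variance is near zero on the grid and largest midway between grid points, which is exactly the position-dependent interpolation uncertainty that the proposed similarity measure marginalizes over.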

  2. Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing

    NASA Technical Reports Server (NTRS)

    Schowengerdt, R.; Gray, S.; Park, S. K.

    1984-01-01

    A mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
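    The parametric cubic interpolator in question is, in one dimension, the standard cubic convolution kernel with free parameter alpha; the bicubic case applies it separably in x and y. A sketch:

```python
import numpy as np

def cubic_kernel(s, alpha=-0.5):
    """Parametric cubic convolution kernel; alpha = -0.5 is third-order accurate."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s <= 1.0
    far = (s > 1.0) & (s < 2.0)
    out[near] = (alpha + 2) * s[near] ** 3 - (alpha + 3) * s[near] ** 2 + 1.0
    out[far] = alpha * (s[far] ** 3 - 5 * s[far] ** 2 + 8 * s[far] - 4)
    return out

def interp1d_cubic(samples, x, alpha=-0.5):
    """Resample a 1-D signal at fractional position x using the four nearest
    samples; applied separably in two dimensions this gives the parametric
    bicubic interpolator."""
    i = int(np.floor(x))
    nodes = np.arange(i - 1, i + 3)
    idx = np.clip(nodes, 0, len(samples) - 1)   # clamp at the signal edges
    return float(np.dot(samples[idx], cubic_kernel(x - nodes, alpha)))
```

    With alpha = -0.5 the scheme reproduces quadratic signals exactly (third-order accuracy), which underlies the radiometric advantage over the conventional alpha = -1 noted above; any alpha reproduces constants and linear ramps.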

  3. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.

  4. Frictional-faulting model for harmonic tremor before Redoubt Volcano eruptions

    NASA Astrophysics Data System (ADS)

    Dmitrieva, Ksenia; Hotovec-Ellis, Alicia J.; Prejean, Stephanie; Dunham, Eric M.

    2013-08-01

    Seismic unrest, indicative of subsurface magma transport and pressure changes within fluid-filled cracks and conduits, often precedes volcanic eruptions. An intriguing form of volcano seismicity is harmonic tremor, that is, sustained vibrations in the range of 0.5-5 Hz. Many source processes can generate harmonic tremor. Harmonic tremor in the 2009 eruption of Redoubt Volcano, Alaska, has been linked to repeating earthquakes of magnitudes around 0.5-1.5 that occur a few kilometres beneath the vent. Before many explosions in that eruption, these small earthquakes occurred in such rapid succession (up to 30 events per second) that distinct seismic wave arrivals blurred into continuous, high-frequency tremor. Tremor abruptly ceased about 30 s before the explosions. Here we introduce a frictional-faulting model to evaluate the credibility and implications of this tremor mechanism. We find that the fault stressing rates rise to values ten orders of magnitude higher than in typical tectonic settings. At that point, inertial effects stabilize fault sliding and the earthquakes cease. Our model of the Redoubt Volcano observations implies that the onset of volcanic explosions is preceded by active deformation and extreme stressing within a localized region of the volcano conduit, at a depth of several kilometres.

  5. Analytic reflected light curves for exoplanets

    NASA Astrophysics Data System (ADS)

    Haggard, Hal M.; Cowan, Nicolas B.

    2018-07-01

    The disc-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motions coupled with an inhomogeneous albedo map. We have previously derived analytic reflected light curves for spherical harmonic albedo maps in the special case of a synchronously rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard). In this paper, we present analytic reflected light curves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps) and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express a harmonic light curve for an arbitrary viewing geometry as a non-linear combination of harmonic light curves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected light curves, as well as fast calculation of light curves for mapping exoplanets based on time-resolved photometry. To these ends, we make available Exoplanet Analytic Reflected Lightcurves, a simple open-source code that allows rapid computation of reflected light curves.

  6. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, which are operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.
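    The finite-source summation step can be sketched as delay-and-sum: one Green's function per subfault, shifted by the rupture propagation time and stacked. This toy version (not the authors' code) uses an identical Gaussian pulse for every subfault and invented distances and rupture velocity.

```python
import numpy as np

# Illustrative delay-and-sum finite-source synthesis: each subfault
# contributes a copy of its Green's function delayed by d / vr, where d
# is the subfault's distance from the hypocentre and vr the rupture
# velocity. All numbers here are toy assumptions.
def finite_source_sum(green, subfault_dists, vr, dt):
    n = len(green)
    out = np.zeros(n)
    for d in subfault_dists:
        shift = int(round(d / vr / dt))   # rupture delay in samples
        if shift < n:
            out[shift:] += green[:n - shift]
    return out

dt = 0.1
t = np.arange(0, 20, dt)
green = np.exp(-(t - 2.0) ** 2)           # toy Green's function pulse
dists = np.array([0.0, 3.0, 6.0, 9.0])    # subfaults along strike (km)
trace = finite_source_sum(green, dists, vr=3.0, dt=dt)
```

    Varying `vr` spreads or compresses the subfault arrivals, which is how different rupture velocities change the simulated long-period ground motion.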

  7. A discrete spherical harmonics method for radiative transfer analysis in inhomogeneous polarized planar atmosphere

    NASA Astrophysics Data System (ADS)

    Tapimo, Romuald; Tagne Kamdem, Hervé Thierry; Yemele, David

    2018-03-01

    A discrete spherical harmonics method is developed for the radiative transfer problem in an inhomogeneous polarized planar atmosphere illuminated at the top by collimated sunlight while the bottom reflects the radiation. The method expands both the Stokes vector and the phase matrix in a finite series of generalized spherical functions, and the resulting vector radiative transfer equation is expressed in a set of polar directions. Hence, the polarized characteristics of the radiance within the atmosphere at any polar direction and azimuthal angle can be determined without linearization and/or interpolation. The spatial dependence of the problem is solved using the spectral Chebyshev method. The emergent and transmitted radiative intensity and the degree of polarization are predicted for both Rayleigh and Mie scattering. The discrete spherical harmonics method predictions for an optically thin atmosphere using 36 streams are found to be in good agreement with benchmark literature results. The maximum deviation between the proposed method and literature results for polar directions |μ| ≥ 0.1 is less than 0.5% and 0.9% for Rayleigh and Mie scattering, respectively. These deviations for directions close to zero are about 3% and 10% for Rayleigh and Mie scattering, respectively.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chae, Jongchul; Litvinenko, Yuri E.

    The vertical propagation of nonlinear acoustic waves in an isothermal atmosphere is considered. A new analytical solution that describes a finite-amplitude wave of an arbitrary wavelength is obtained. Although the short- and long-wavelength limits were previously considered separately, the new solution describes both limiting cases within a common framework and provides a straightforward way of interpolating between the two limits. Physical features of the nonlinear waves in the chromosphere are described, including the dispersive nature of low-frequency waves, the steepening of the wave profile, and the influence of the gravitational field on wavefront breaking and shock formation. The analytical results suggest that observations of three-minute oscillations in the solar chromosphere may reveal the basic nonlinear effect of oscillations with combination frequencies, superposed on the normal oscillations of the system. Explicit expressions for a second-harmonic signal and the ratio of its amplitude to the fundamental harmonic amplitude are derived. Observational evidence of the second harmonic, obtained with the Fast Imaging Solar Spectrograph, installed at the 1.6 m New Solar Telescope of the Big Bear Observatory, is presented. The presented data are based on the time variations of velocity determined from the Na I D2 and Hα lines.
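    The observable quantity here, the ratio of the second-harmonic amplitude to the fundamental, can be measured from a velocity time series with a plain FFT. The signal below is synthetic (a 180 s oscillation with an invented 20% second harmonic), purely to illustrate the measurement, not the paper's analytic expressions.

```python
import numpy as np

# Measure the second-harmonic-to-fundamental amplitude ratio of a
# "three-minute" oscillation. Amplitudes (1.0 and 0.2) are invented.
t = np.arange(0.0, 18000.0, 1.0)     # 100 cycles of a 180 s oscillation
f0 = 1.0 / 180.0
v = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(v)) / (len(t) / 2)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), d=1.0)
a1 = spec[np.argmin(np.abs(freqs - f0))]       # fundamental amplitude
a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]   # second-harmonic amplitude
ratio = a2 / a1
```

    With the record length an integer number of periods, both tones fall on exact FFT bins and the recovered ratio matches the input 0.2.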

  9. Bright high-repetition-rate source of narrowband extreme-ultraviolet harmonics beyond 22 eV

    PubMed Central

    Wang, He; Xu, Yiming; Ulonska, Stefan; Robinson, Joseph S.; Ranitovic, Predrag; Kaindl, Robert A.

    2015-01-01

    Novel table-top sources of extreme-ultraviolet light based on high-harmonic generation yield unique insight into the fundamental properties of molecules, nanomaterials or correlated solids, and enable advanced applications in imaging or metrology. Extending high-harmonic generation to high repetition rates portends great experimental benefits, yet efficient extreme-ultraviolet conversion of correspondingly weak driving pulses is challenging. Here, we demonstrate a highly efficient source of femtosecond extreme-ultraviolet pulses at 50-kHz repetition rate, utilizing the ultraviolet second harmonic focused tightly into Kr gas. In this cascaded scheme, a photon flux beyond ≈3 × 10^13 s^-1 is generated at 22.3 eV, with 5 × 10^-5 conversion efficiency that surpasses similar harmonics directly driven by the fundamental by two orders of magnitude. The enhancement arises from both wavelength scaling of the atomic dipole and improved spatio-temporal phase matching, confirmed by simulations. Spectral isolation of a single 72-meV-wide harmonic renders this bright, 50-kHz extreme-ultraviolet source a powerful tool for ultrafast photoemission, nanoscale imaging and other applications. PMID:26067922

  10. Meshless Method with Operator Splitting Technique for Transient Nonlinear Bioheat Transfer in Two-Dimensional Skin Tissues

    PubMed Central

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-01

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison with other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue.
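    The RBF-interpolation building block can be sketched in a few lines: fit weights so a radial kernel centred at each collocation point reproduces a scattered field exactly. The multiquadric kernel, shape parameter, and toy source term below are assumptions for illustration; the paper's kernel and collocation layout may differ.

```python
import numpy as np

# Minimal RBF interpolation sketch with a multiquadric basis
# phi(r) = sqrt(r^2 + c^2). Kernel choice and c are illustrative.
def rbf_fit(centers, values, c=0.5):
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    A = np.sqrt(r ** 2 + c ** 2)          # interpolation matrix
    return np.linalg.solve(A, values)

def rbf_eval(x, centers, coeffs, c=0.5):
    r = np.linalg.norm(x[None, :] - centers, axis=1)
    return np.sqrt(r ** 2 + c ** 2) @ coeffs

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(40, 2))     # scattered points in the tissue
f = np.sin(np.pi * pts[:, 0]) * pts[:, 1] # toy nonlinear source term
w = rbf_fit(pts, f)                       # weights reproduce f at pts
```

    In the paper's scheme this interpolant supplies the particular solution of the Helmholtz-type equation at each time step; the homogeneous part then comes from the MFS.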

  12. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
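    Scheme (1), the inverse distance weighted average, is simple enough to sketch directly; gauge coordinates and daily precipitation values below are invented toy data, not the paper's basins.

```python
import numpy as np

# Inverse distance weighted (IDW) interpolation of daily precipitation
# from gauges to an arbitrary target point. Power 2 is the common default.
def idw(x0, xy, z, power=2.0):
    d = np.linalg.norm(xy - x0, axis=1)
    if np.any(d == 0):
        return z[np.argmin(d)]            # exact at a gauge location
    w = 1.0 / d ** power
    return np.sum(w * z) / np.sum(w)

gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
precip = np.array([2.0, 4.0, 6.0, 8.0])   # mm/day at each gauge (toy)
est = idw(np.array([0.5, 0.5]), gauges, precip)
```

    The paper's criticism applies here: IDW smooths toward the local mean (the equidistant centre point gets exactly the gauge average), which understates the spatial variability of daily precipitation; hence the proposed two-step occurrence/amount scheme.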

  13. Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in an anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.

  14. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method

    PubMed Central

    Sun, Weifang; Yao, Bin; He, Yuchao; Zeng, Nianyin; He, Wangpeng

    2017-01-01

    Power generation using waste-gas is an effective and green way to reduce the emission of the harmful blast furnace gas (BFG) in the pig-iron producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information using neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of some interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method has a smaller numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative to identify blade cracks at early stages. PMID:28792453
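    The general idea of spectral correction via interpolated FFT bins can be illustrated with the standard Hann-window parabolic interpolation (a baseline of the kind the paper compares against, not the authors' active frequency shift method). The tone frequency is deliberately placed between bins.

```python
import numpy as np

# Estimate an off-bin tone frequency by parabolic interpolation of the
# log-magnitude spectrum around the peak bin (Hann window). The signal
# parameters are invented for illustration.
fs, n = 1000.0, 1000
t = np.arange(n) / fs
f_true = 52.25                       # deliberately off-bin (bins are 1 Hz)
x = np.hanning(n) * np.sin(2 * np.pi * f_true * t)

mag = np.abs(np.fft.rfft(x))
k = int(np.argmax(mag))              # nearest integer bin
a, b, c = np.log(mag[k - 1:k + 2])   # log magnitudes around the peak
delta = 0.5 * (a - c) / (a - 2 * b + c)   # fractional-bin correction
f_est = (k + delta) * fs / n
```

    The raw peak bin is wrong by up to half the 1 Hz bin width; the interpolated estimate recovers the frequency to within a few hundredths of a bin, which is the precision gain that makes harmonic tracking of blade cracks feasible.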

  15. Circular current loops, magnetic dipoles and spherical harmonic analysis.

    USGS Publications Warehouse

    Alldredge, L.R.

    1980-01-01

    Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author

  16. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and the use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
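    The core of the two boundary technique can be sketched as Hermite cubic blending in the transverse coordinate t between the bottom and top curves. The boundary curves and the (vertical) take-off derivative vectors below are invented; side boundaries and control functions are omitted.

```python
import numpy as np

# Hermite cubic interpolation between two fixed boundaries: grid point
# r(s, t) blends bottom(s) and top(s) with derivative vectors d_bot,
# d_top controlling grid-line angles at the two boundaries.
def two_boundary_grid(bottom, top, nt, d_bot, d_top):
    t = np.linspace(0.0, 1.0, nt)[:, None, None]
    h00 = 2 * t**3 - 3 * t**2 + 1          # Hermite blending functions
    h01 = -2 * t**3 + 3 * t**2
    h10 = t**3 - 2 * t**2 + t
    h11 = t**3 - t**2
    return h00 * bottom + h01 * top + h10 * d_bot + h11 * d_top

s = np.linspace(0.0, 1.0, 21)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)    # wavy bottom
top = np.stack([s, np.ones_like(s)], axis=1)               # flat top
d = np.stack([np.zeros_like(s), np.ones_like(s)], axis=1)  # vertical take-off
grid = two_boundary_grid(bottom, top, 11, d, d)            # (nt, ns, 2)
```

    At t = 0 the blend reproduces the bottom boundary exactly and at t = 1 the top, so the grid conforms to both fixed boundaries by construction.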

  17. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are adapted to SHB successfully, and are capable of giving rise to highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of the source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or incoherent sources. (2) The availability of RL for coherent sources is highest, then DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one that is referred to as focus distance, the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost not affected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
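    The convolutional model "beamforming map = true source distribution * PSF" is what makes these deconvolution methods applicable. A minimal 1-D Richardson-Lucy sketch (the array geometry and the spherical-harmonics PSF are not modelled; a Gaussian stands in for the PSF):

```python
import numpy as np

# Richardson-Lucy deconvolution of a blurred source-strength map.
# b: blurred map, psf: known point spread function (normalized).
def richardson_lucy(b, psf, iters=200):
    x = np.full_like(b, b.mean())            # flat nonnegative start
    psf_m = psf[::-1]                        # mirrored PSF
    for _ in range(iters):
        conv = np.convolve(x, psf, mode="same")
        ratio = b / np.maximum(conv, 1e-12)  # data / model
        x *= np.convolve(ratio, psf_m, mode="same")
    return x

psf = np.exp(-0.5 * (np.arange(-10, 11) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5              # two incoherent point sources
blurred = np.convolve(truth, psf, mode="same")
sharp = richardson_lucy(blurred, psf)
```

    The multiplicative update keeps the map nonnegative and approximately conserves total source strength, which is why RL maps stay physically interpretable.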

  18. AGN Accretion Physics in the Time Domain: Survey Cadences, Stochastic Analysis, and Physical Interpretations

    NASA Astrophysics Data System (ADS)

    Moreno, Jackeline; Vogeley, Michael S.; Richards, Gordon; O'Brien, John T.; Kasliwal, Vishal

    2018-01-01

    We present rigorous testing of survey cadences (K2, SDSS, CRTS, & Pan-STARRS) for quasar variability science using a magnetohydrodynamics synthetic lightcurve and the canonical lightcurve from Kepler, Zw 229.15. We explain where the state of the art is with regard to physical interpretations of stochastic models (CARMA) applied to AGN variability. Quasar variability offers a time domain approach of probing accretion physics at the SMBH scale. Evidence shows that the strongest amplitude changes in the brightness of AGN occur on long timescales ranging from months to hundreds of days. These global behaviors can be constrained by survey data despite low sampling resolution. CARMA processes provide a flexible family of models used to interpolate between data points, predict future observations and describe behaviors in a lightcurve. This is accomplished by decomposing a signal into rise and decay timescales, frequencies for cyclic behavior and shock amplitudes. Characteristic timescales may point to length-scales over which a physical process operates, such as turbulent eddies, warping or hotspots due to local thermal instabilities. We present the distribution of SDSS Stripe 82 quasars in CARMA parameter space that pass our cadence tests and also explain how the Damped Harmonic Oscillator model, CARMA(2,1), reduces to the Damped Random Walk, CARMA(1,0), given the data in a specific region of the parameter space.
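    A damped random walk, CARMA(1,0), is easy to simulate with the exact AR(1) update on a regular grid; the timescale and amplitude values below are invented, not fitted to any quasar.

```python
import numpy as np

# Simulate a damped random walk (Ornstein-Uhlenbeck / CARMA(1,0))
# lightcurve with decay timescale tau and diffusion amplitude sigma.
def damped_random_walk(n, dt, tau, sigma, rng):
    x = np.empty(n)
    x[0] = 0.0
    a = np.exp(-dt / tau)                           # decay over one step
    s = sigma * np.sqrt(0.5 * tau * (1 - a ** 2))   # exact innovation scale
    for i in range(1, n):
        x[i] = a * x[i - 1] + s * rng.standard_normal()
    return x

rng = np.random.default_rng(42)
lc = damped_random_walk(n=5000, dt=1.0, tau=100.0, sigma=0.2, rng=rng)
```

    The stationary standard deviation is sigma * sqrt(tau / 2) (about 1.41 here); fitting tau from such a series is exactly the kind of characteristic-timescale inference the abstract describes, and adding a second AR root turns this into the damped harmonic oscillator, CARMA(2,1).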

  19. The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators

    NASA Astrophysics Data System (ADS)

    Ahmedov, Anvarjon

    2018-03-01

    In the present research we investigate problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the classes of Liouville. The sufficient conditions for the almost everywhere convergence problems, which are among the most difficult problems in harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimations for the maximal operator of the spectral decompositions. Obtaining such estimations involves very complicated calculations which depend on the functional structure of the classes of functions. The main idea in proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is the estimation of the maximal operator of the partial sums in the boundary classes and the application of the interpolation theorem for the family of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost everywhere convergence of the multiple Fourier series by elliptic summation methods is established. Considering multiple Fourier series as eigenfunction expansions of differential operators helps to translate the functional properties (for example, smoothness) of the Liouville classes into the Fourier coefficients of the functions which are expanded. The sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of the smoothness and dimensions. Such results are highly effective in solving the boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates the wide use of methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of a square-integrable function by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated. The latter is applied to obtain an estimation for the maximal operator in the classes of Liouville.
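    The elliptic partial sum itself is concrete: keep only the 2-D Fourier modes (m, n) with m²/a² + n²/b² ≤ λ. A toy numerical sketch (the test function and ellipse axes are invented, and this illustrates only the summation set, not the convergence theory):

```python
import numpy as np

# 2-D Fourier partial sum over an elliptic level set:
# keep modes with (m/a)^2 + (l/b)^2 <= lam.
def elliptic_partial_sum(f_grid, lam, a=1.0, b=2.0):
    n = f_grid.shape[0]
    c = np.fft.fft2(f_grid) / n**2              # Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
    m, l = np.meshgrid(k, k, indexing="ij")
    mask = (m / a) ** 2 + (l / b) ** 2 <= lam   # elliptic level set
    return np.real(np.fft.ifft2(np.where(mask, c, 0) * n**2))

n = 64
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(np.cos(X) + np.cos(Y))               # smooth periodic function
err = [np.abs(elliptic_partial_sum(f, lam) - f).max()
       for lam in (4, 16, 64)]                  # raise the elliptic level
```

    For a smooth function the sup-norm error decays rapidly as the elliptic level λ grows, consistent with smoothness conditions translating into coefficient decay.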

  20. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown inputs or when the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. A comparison is reported between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes).
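    A conceptual sketch of the damage feature: predict each node of a mode shape from its neighbours and flag where the prediction error jumps, i.e. where smoothness is locally reduced. The beam mode, the kink position, and its exaggerated amplitude are invented; the actual IM uses spline shape functions rather than the midpoint prediction used here.

```python
import numpy as np

# Interpolation-error damage feature on a 1-D mode shape: a local slope
# discontinuity (the "damage") produces a localized spike in the error
# between each nodal value and its interpolation from neighbours.
x = np.linspace(0.0, 1.0, 101)
mode = np.sin(np.pi * x)                             # intact first mode
mode += np.where(x > 0.635, 0.5 * (x - 0.635), 0.0)  # kink = damage (toy)

err = np.zeros_like(mode)
for i in range(1, len(x) - 1):
    pred = 0.5 * (mode[i - 1] + mode[i + 1])         # predict from neighbours
    err[i] = abs(mode[i] - pred)

damage_idx = int(np.argmax(err))                     # node nearest the kink
```

    Away from the kink the error is just the small curvature term of the smooth mode, so the feature peaks at the damaged section; using modal shapes instead of all operational shapes keeps this peak from being buried by noise.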

  1. Regularity of p(·)-superharmonic functions, the Kellogg property and semiregular boundary points

    NASA Astrophysics Data System (ADS)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for $p(\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\cdot)$-harmonic functions and give some new characterizations of $W^{1, p(\cdot)}_0$ spaces. We also show that $p(\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  2. Observations of volcanic tremor during January-February 2005 eruption of Mt. Veniaminof, Alaska

    USGS Publications Warehouse

    De Angelis, Silvio; McNutt, Stephen R.

    2007-01-01

    Mt. Veniaminof, Alaska Peninsula, is a stratovolcano with a summit ice-filled caldera containing a small intracaldera cone and active vent. From January 2 to February 21, 2005, Mt. Veniaminof erupted. The eruption was characterized by numerous small ash emissions (VEI 0 to 1) and accompanied by low-frequency earthquake activity and volcanic tremor. We have performed spectral analyses of the seismic signals in order to characterize them and to constrain their source. Continuous tremor has durations of minutes to hours with dominant energy in the band 0.5-4.0 Hz, and spectra characterized by narrow peaks either irregularly (non-harmonic tremor) or regularly spaced (harmonic tremor). The spectra of non-harmonic tremor resemble those of low-frequency events recorded simultaneously with surface ash explosions, suggesting that the source mechanisms might be similar or related. We propose that non-harmonic tremor at Mt. Veniaminof results from the coalescence of gas bubbles, while low-frequency events are related to the disruption of large gas pockets within the conduit. Harmonic tremor, characterized by regular and quasi-sinusoidal waveforms, has durations of hours. Spectra containing up to five harmonics suggest the presence of a resonating source volume that vibrates in a longitudinal acoustic mode. An interesting feature of harmonic tremor is that its frequency is observed to change over time; spectral lines move towards higher or lower values while the harmonic nature of the spectra is maintained. Factors controlling the variable characteristics of harmonic tremor include changes in acoustic velocity at the source and variations of the effective size of the resonator.

  3. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    NASA Astrophysics Data System (ADS)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is a part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed, and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours and, second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
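    The second method, ordinary kriging, can be sketched in pure numpy: solve the kriging system built from a semivariogram, with a Lagrange multiplier enforcing unit weight sum. The exponential semivariogram parameters and the toy signal-strength samples below are invented, not fitted to drive-test data.

```python
import numpy as np

# Ordinary kriging with an assumed exponential semivariogram model.
def variogram(h, sill=1.0, rng_par=2.0):
    return sill * (1.0 - np.exp(-3.0 * h / rng_par))  # exponential model

def ordinary_kriging(x0, xy, z):
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :], A[:, n] = 1.0, 1.0
    A[n, n] = 0.0                      # Lagrange multiplier row/column
    b = np.append(variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]      # kriging weights (sum to 1)
    return w @ z

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rssi = np.array([-70.0, -75.0, -72.0, -80.0])   # dBm at sample points (toy)
pred = ordinary_kriging(np.array([0.5, 0.5]), xy, rssi)
```

    Unlike IDW, the weights here come from the fitted spatial correlation model, and kriging reproduces the data exactly at sample locations; swapping the variogram function changes between the Gaussian, spherical, circular and exponential variants the paper compares.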

  4. Spatial Interpolation of Reference Evapotranspiration in India: Comparison of IDW and Kriging Methods

    NASA Astrophysics Data System (ADS)

    Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.

    2017-12-01

    In the present study, to understand the spatial distribution characteristics of the ETo over India, spatial interpolation was performed on the means of 32 years (1971-2002) of monthly data from 131 India Meteorological Department stations uniformly distributed over the country by two methods, namely, inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better while developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all the cases, and hence is recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked whether direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced comparable results against ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally better than direct kriging. Point ETo values were extended to areal ETo values by IDW, and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.

  5. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services and resources to humans, such as recreation, air purification and water protection. In recent years, factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, along with public interest in forests, and various countries are making efforts toward forest management. However, the existing forest ecosystem survey method monitors only sampling points, and it is difficult to use for forest management because Korea surveys only a small part of its forests, which occupy 63.7% of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, Shannon's diversity index data from the 1st Korea Forest Health Management survey (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighted), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, kriging was found to be the most accurate method.

  6. Morphing of spatial objects in real time with interpolation by functions of radial and orthogonal basis

    NASA Astrophysics Data System (ADS)

    Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai

    2018-05-01

    The article is devoted to the visualization of the morphing of spatial objects described by a set of unordered reference points. A two-stage model construction is proposed to change an object's form in real time. The first (preliminary) stage is interpolation of the object's surface by radial basis functions: the initial reference points are replaced by new, spatially ordered ones, and patterns describing how the reference points' coordinates change during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the efficiency of the calculations.
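The first stage, surface interpolation by radial basis functions, reduces to solving a dense linear system for kernel weights; a minimal 1-D sketch with Gaussian kernels (the kernel choice and all names are illustrative assumptions, not taken from the article):

```python
import math

def rbf_interpolator(centers, values, eps=1.0):
    """Build a 1-D radial-basis-function interpolant: solve Phi w = values
    for the kernel weights by Gaussian elimination with partial pivoting,
    then return an evaluator for the interpolant."""
    n = len(centers)
    phi = lambda r: math.exp(-(eps * r) ** 2)  # Gaussian kernel
    # Augmented system [Phi | values].
    A = [[phi(abs(ci - cj)) for cj in centers] + [values[i]]
         for i, ci in enumerate(centers)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n + 1):
                A[row][c] -= f * A[col][c]
    # Back substitution.
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        w[row] = (A[row][n]
                  - sum(A[row][c] * w[c] for c in range(row + 1, n))) / A[row][row]
    return lambda x: sum(wi * phi(abs(x - ci)) for wi, ci in zip(w, centers))
```

By construction the interpolant reproduces the supplied values at the reference points, which is the property the preliminary stage relies on.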

  7. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice-sheet mass balance from satellite altimetry requires interpolation of point-scale elevation-change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of the data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK); kriging with external drift (KED), in which the spatial pattern of surface velocity is used as a proxy for that of dH/dt; and their spatiotemporal equivalents (ST-OK and ST-KED).

  8. Small-angle scattering of polychromatic X-rays: effects of bandwidth, spectral shape and high harmonics.

    PubMed

    Chen, Sen; Luo, Sheng Nian

    2018-03-01

    Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10-100 times larger than those using monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectral shape is explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra represented by two undulator sources of the Advanced Photon Source are examined as an example, as regards the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.

  9. Small-angle scattering of polychromatic X-rays: effects of bandwidth, spectral shape and high harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sen; Luo, Sheng-Nian

    Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10–100 times larger than those using monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectral shape is explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra represented by two undulator sources of the Advanced Photon Source are examined as an example, as regards the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.

  10. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single-hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single-baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited-state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor-symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited-state contamination from the two-nucleon correlation function as early in time as the single nucleon is improved, provided non-local (displaced-nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited-state contamination in the region where the single calm baryon displays no excited-state contamination.

  11. Theory of high-order harmonic generation for gapless graphene

    NASA Astrophysics Data System (ADS)

    Zurrón, Óscar; Picón, Antonio; Plaja, Luis

    2018-05-01

    We study the high-harmonic spectrum emitted by single-layer graphene irradiated by an ultrashort, intense infrared laser pulse. We show the emergence of the typical non-perturbative spectral features, harmonic plateau and cut-off, for mid-infrared driving fields at fluences below the damage threshold. In contrast to previous works, which used THz driving fields, we demonstrate that the harmonic cut-off frequency saturates with the intensity. Our results are derived from the numerical integration of the time-dependent Schrödinger equation using a nearest-neighbor tight-binding description of graphene. We also develop a saddle-point analysis that reveals a mechanism for harmonic emission in graphene different from that reported in atoms, molecules and finite-gap solids. In graphene, the first step is initiated by the non-adiabatic crossing of the valence-band electron trajectories through the Dirac points, instead of by tunneling ionization/excitation. We include a complete identification of the trajectories contributing to any particular high harmonic and reproduce the scaling of the harmonic cut-off with the driving intensity.

  12. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology, and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of the interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from laboratories with advanced angle metrology capabilities, acquired using four different high-precision angle encoders/interpolators/rotary tables, are presented. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in uncertainty, by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  13. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with the reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994; (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998; and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior which, although inherent in the data, may not be otherwise apparent. It is also the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a `bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally test interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  14. Active control of the forced and transient response of a finite beam. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Post, John T.

    1990-01-01

    Structural vibrations from a point force are modelled on a finite beam. This research explores the theoretical limit on controlling beam vibrations using another point source as an active controller. Three different types of excitation are considered: harmonic, random, and transient. For harmonic excitation, control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam. Control over a subregion may be obtained even between resonant frequencies, at the cost of increasing the vibration outside of the control region. For random excitation, integrating the expected value of the squared displacement over the required interval is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation, whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single transient pulse. The form of the controller is specified as either one or two delayed pulses, thus constraining the controller to be causal. The best possible control is examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses.
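For harmonic excitation, the optimal complex amplitude of a single secondary source follows from least-squares minimization of the summed squared response at the evaluation points; a minimal sketch of this standard single-channel result (not code from the thesis):

```python
def optimal_control_amplitude(primary, transfer):
    """Complex control-source amplitude q minimizing sum_i |p_i + h_i q|^2,
    where p_i is the primary response at evaluation point i and h_i the
    transfer function from the control source to that point.
    Standard least-squares solution: q = -(sum h* p) / (sum |h|^2)."""
    num = sum(h.conjugate() * p for h, p in zip(transfer, primary))
    den = sum(abs(h) ** 2 for h in transfer)
    return -num / den
```

Evaluating the residual sum over points restricted to a subregion, rather than the whole beam, gives the subregion cost function the thesis trades off against out-of-region vibration.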

  15. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    The elastic-wave reverse-time migration of inhomogeneous anisotropic media is becoming a hotspot of current research. In order to ensure the accuracy of the migration, it is necessary to separate the wavefield into P- and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P- and S-waves at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. In order to reduce the computational complexity, wave-mode separation in the mixed domain can be realized on the basis of a reference model in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a reference-model selection method based on a random-points scheme. This method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference point, so the interpolation process takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method can separate the wave modes more accurately using fewer reference models and has better practical value.

  16. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li+-benzene

    NASA Astrophysics Data System (ADS)

    D'Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.

    2015-08-01

    Quantum and anharmonic effects are investigated in (H2)2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li+-benzene complex increases the ZPE of the system by 5.6 kJ mol-1 to 17.6 kJ mol-1. This ZPE is 42% of the total electronic binding energy of (H2)2-Li+-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li+-benzene is 7.7 kJ mol-1, compared to 12.4 kJ mol-1 for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2-Li+-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li+-benzene PESs are developed. These use a modified Shepard interpolation for the Li+-benzene and H2-Li+-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol-1. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol-1 error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
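The cubic-spline modelling of the weak H2-H2 interaction can be illustrated with a standard natural cubic spline (zero second derivative at the end knots); this is the textbook construction, not the authors' code:

```python
def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys);
    xs must be strictly increasing. Second derivatives at the end knots
    are set to zero (the 'natural' boundary condition)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the interior second-derivative coefficients.
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3 * ((ys[i + 1] - ys[i]) / h[i]
                        - (ys[i] - ys[i - 1]) / h[i - 1])
    l = [1.0] + [0.0] * n
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])

    def eval_at(x):
        # Locate the interval containing x (clamped to the knot range).
        j = max(0, min(n - 1,
                       next((k for k in range(n) if x <= xs[k + 1]), n - 1)))
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx * dx + d[j] * dx ** 3

    return eval_at
```

In the fragment scheme, such a spline through computed H2-H2 interaction energies replaces the troublesome modified Shepard interpolation at a fraction of the cost.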

  17. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li(+)-benzene.

    PubMed

    D'Arcy, Jordan H; Kolmann, Stephen J; Jordan, Meredith J T

    2015-08-21

    Quantum and anharmonic effects are investigated in (H2)2-Li(+)-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li(+)-benzene complex increases the ZPE of the system by 5.6 kJ mol(-1) to 17.6 kJ mol(-1). This ZPE is 42% of the total electronic binding energy of (H2)2-Li(+)-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li(+)-benzene is 7.7 kJ mol(-1), compared to 12.4 kJ mol(-1) for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li(+) ion and are more confined in the θ coordinate than in H2-Li(+)-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li(+)-benzene PESs are developed. These use a modified Shepard interpolation for the Li(+)-benzene and H2-Li(+)-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li(+) terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol(-1). Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. 
This suggests that the 1.5 kJ mol(-1) error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H2-H2 interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.

  18. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, and how to improve the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels of the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by a simulation experiment. Finally, an experiment is set up to verify the GSI and SSW algorithms. The results indicate that the GSI and SSW algorithms improve locating accuracy over that of a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
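The traditional gray centroid method that both proposed algorithms improve on is simply the intensity-weighted mean pixel position; a minimal sketch (the segmentation and interpolation refinements of GSI/SSW are not reproduced here):

```python
def gray_centroid(image):
    """Intensity-weighted centroid of a 2-D gray-level image (list of
    rows); returns sub-pixel (row, col) coordinates of the spot."""
    total = float(sum(sum(row) for row in image))
    r = sum(i * sum(row) for i, row in enumerate(image)) / total
    c = sum(j * v for row in image for j, v in enumerate(row)) / total
    return r, c
```

Because the centroid is a weighted mean, it already yields sub-pixel coordinates; the GSI idea is to refine it further by interpolating extra samples within gradient-defined segments before the weighting.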

  19. Rtop - an R package for interpolation along the stream network

    NASA Astrophysics Data System (ADS)

    Skøien, J. O.; Laaha, G.; Koffler, D.; Blöschl, G.; Pebesma, E.; Parajka, J.; Viglione, A.

    2012-04-01

    Geostatistical methods have a long tradition in the analysis of data that can be conceptualized as simple point data, such as soil properties, or as regular blocks, such as mining data. However, these methods have been used only to a limited extent for estimation along stream networks. A few exceptions are given by (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006), and an overview by Laaha and Blöschl (2011). Interpolation of runoff characteristics is more complicated than for the traditional random variables estimated by geostatistical methods, as the measurements have a more complicated support and many catchments are nested. Skøien et al. (2006) presented the model Top-kriging, which takes these effects into account for interpolation of stream-flow characteristics (exemplified by the 100-year flood). The method has here been implemented as a package in the open-source statistical environment R (R Development Core Team 2011). Taking advantage of the existing methods in R for working with spatial objects, and the extensive possibilities for visualizing the results, this makes it considerably easier to apply the method to new data sets, in comparison to earlier implementations of the method. In addition to user feedback, the package has also been tested by colleagues whose only responsibility has been to search for bugs, inconsistencies and shortcomings in the documentation. The last part is often the part that gets the least attention in small open-source projects, and we have solved this by acknowledging their efforts as co-authors. The package will soon be uploaded to CRAN, but is in the meantime also available from R-Forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006.
Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. Laaha, G. & Blöschl, G. 2011. Geostatistics on river networks - a review. EGU General Assembly, Vienna, Austria. R Development Core Team, 2011. R: A language and environment for statistical computing. Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  20. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated, based on the sampling interval of the observational data, when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and this noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
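Interpolating a 300-s clock product to each observation epoch is commonly done linearly between the bracketing product epochs; a minimal sketch (the paper's stochastic weighting of the resulting interpolation error is not reproduced here):

```python
import bisect

def interp_clock(epochs, clocks, t):
    """Linearly interpolate satellite clock corrections, given at discrete
    product epochs (e.g. every 300 s), to an observation epoch t."""
    if t == epochs[-1]:
        return clocks[-1]
    i = bisect.bisect_right(epochs, t)
    if i == 0 or i == len(epochs):
        raise ValueError("epoch outside clock-product span")
    t0, t1 = epochs[i - 1], epochs[i]
    return clocks[i - 1] + (clocks[i] - clocks[i - 1]) * (t - t0) / (t1 - t0)
```

The residual clock noise left by this step is exactly what the proposed stochastic model down-weights in the observation equations.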

  1. The Tides of the Atlantic Ocean, 60 degrees N to 30 degrees S

    NASA Astrophysics Data System (ADS)

    Cartwright, D. E.; Spencer, R.; Vassie, J. M.; Woodworth, P. L.

    1988-04-01

    As a sequel to Cartwright et al. (Phil. Trans. R. Soc. Lond. A298, 87-139 (1980)) (C.E.S.V.) an extended series of oceanic tidal pressure measurements in the Atlantic Ocean is described and the spatial properties of their spectral components are analysed. The principal linear admittances vary widely across the ocean basins, and clearly indicate the positions of the major amphidromes. Constants for the leading harmonics M2 and S2 are defined everywhere along the parallel of 53.6 degrees N and along a section from Natal (Brazil) and west Africa by interpolation between measurements. From a unique set of seven one-year deep pressure records between 57 degrees N and the Equator, the radiational component of S2 is shown to have similar magnitude and phase anomaly to values previously known only at coastal stations, confirming its intrinsically atmospheric forcing. From the same records, nonlinear terms in the semidiurnal band are found to be irregular and indistinguishable from noise. From the full set of data, the M4 overtide is generally small and erratic, probably affected in some areas by low-stability internal waves. The long-period tides Mm and Mf are clearly identified in the equatorial zone as coherent motions with slight phase variations. Their amplitudes are significantly greater than those deduced from the `self-consistent equilibrium theory' of Agnew & Farrell (Geophys. Jl R. astr. Soc. 55, 171-181 (1978)). The M1 tide, linearly driven from the third-degree harmonic of the potential, has been extracted from multiyear records at 13 representative coastal stations in both hemispheres. It is shown to agree well with a synthesis of normal modes of oscillation computed by Platzman (J. Phys. Oceanogr. 14 (10), 1521-1550 (1984)), provided a general phase adjustment of about 60 degrees is made to the synthesized phases. 
The other third-degree term M3 is well extracted from most of the pelagic stations but is found to be too finely structured in space for easy interpolation. Attempts are made to model the M3 tide from sums of normal modes and from Proudman functions (defined in section 5a) with only moderate success, owing to noisy coastal data. High solar harmonics from the atmospheric tide penetrate to the ocean bottom, and are especially noticeable at low latitudes. The ter-diurnal solar pressure close to S3 is shown to have similar spectral characteristics in midocean to that calculated from the fourth harmonic of the radiational potential of Munk & Cartwright (Phil. Trans. R. Soc. Lond. A259, 533-581 (1966)). At coastal stations, however, the S3 line itself dominates, probably because of the thermal responses of the conventional tide gauge and of shallow coastal waters. Representative diurnal and semidiurnal harmonics are mapped spatially by the `objective analysis' procedure of Sanchez et al. (Mar. Geod. 9 (1), 71-91 (1985)), with a set of basis (Proudman) functions computed for the Atlantic-plus-Indian Oceans by D. B. Rao. Data from both oceans are used in the fits but the results are probably most accurate in the Atlantic Ocean on account of the greater concentration of pelagic data. The first 100 Proudman functions out of 470 computed by Rao are found to fit the M2 data (50 for O1) optimally, without `over-fitting'. Goodness of fit is biased towards areas of greater amplitude. Most of the known features of tidal maps are reproduced fairly well, but the anti-amphidrome of M2 in the Indian Ocean has too large an amplitude. Inaccuracy is attributed to the dearth of pelagic tidal data in the Indian Ocean. The same constituents are similarly mapped by empirically fitting sums of Platzman's normal mode functions to the same data.
Numbers of basis functions here should, in principle, be restricted by the need for reasonably small values of (natural frequency of mode minus tidal frequency). About 20 Platzman modes give a reasonable mapping of O1, less successfully for M2, but the fits are generally less good than those from 50-100 Proudman functions. A calculation of the work done by the Moon on the Atlantic tidal field defined by the Proudman-function synthesis confirms that this parameter is the net sum of nearly cancelling positive and negative zones, and is therefore sensitive to small errors in the tidal field. In summary, there remains a mismatch between the precision and detail of tidal parameters known at a finite number of measuring points and spatial interpolations derived from present computational schemes.
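Extracting a single constituent of known frequency, such as M2, from a pressure record reduces to a least-squares fit of one sinusoid; a minimal sketch of the 2x2 normal equations (illustrative only, not the `objective analysis' procedure used in the paper):

```python
import math

def fit_harmonic(times, heights, omega):
    """Least-squares fit h(t) ~ A*cos(omega*t) + B*sin(omega*t) for one
    tidal constituent of known angular frequency omega; solves the 2x2
    normal equations and returns (A, B)."""
    scc = sum(math.cos(omega * t) ** 2 for t in times)
    sss = sum(math.sin(omega * t) ** 2 for t in times)
    scs = sum(math.sin(omega * t) * math.cos(omega * t) for t in times)
    shc = sum(h * math.cos(omega * t) for t, h in zip(times, heights))
    shs = sum(h * math.sin(omega * t) for t, h in zip(times, heights))
    det = scc * sss - scs * scs
    return (shc * sss - shs * scs) / det, (scc * shs - scs * shc) / det
```

Amplitude and phase of the constituent follow as sqrt(A**2 + B**2) and atan2(B, A); fitting several constituents jointly generalizes the same normal equations to a larger system.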

  2. Multiple layer optical memory system using second-harmonic-generation readout

    DOEpatents

    Boyd, Gary T.; Shen, Yuen-Ron

    1989-01-01

    A novel optical read and write information storage system is described which comprises a radiation source such as a laser for writing and illumination, the radiation source being capable of radiating a preselected first frequency; a storage medium including at least one layer of material for receiving radiation from the radiation source and capable of being surface modified in response to said radiation source when operated in a writing mode and capable of generating a pattern of radiation of the second harmonic of the preselected frequency when illuminated by the radiation source at the preselected frequency corresponding to the surface modifications on the storage medium; and a detector to receive the pattern of second harmonic frequency generated.

  3. Adaptive sparse grid approach for the efficient simulation of pulsed eddy current testing inspections

    NASA Astrophysics Data System (ADS)

    Miorelli, Roberto; Reboud, Christophe

    2018-04-01

    Pulsed Eddy Current Testing (PECT) is a popular Nondestructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry, or rivet inspection in the aeronautic area. Its particularity is to use a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove very useful, for instance, for optimizing experimental sensors and devices or evaluating their performance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm; the complete spectrum is then interpolated from this sparse representation, and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented, and the performance of the strategy is discussed by comparison to reference results.
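
    The spectrum-interpolation-plus-inverse-FFT step can be sketched in a few lines. This is not the authors' solver: the function below is a stand-in first-order frequency response, and the sparse frequencies are fixed log-spaced rather than adaptively chosen.

```python
import numpy as np

# Toy stand-in for a time-harmonic ECT solver: a first-order low-pass
# response (a real solver would return a complex probe response per frequency).
def harmonic_solver(f, f_c=100.0):
    return 1.0 / (1.0 + 1j * f / f_c)

# Sparse frequency samples (fixed here; adaptively placed in the paper).
f_sparse = np.logspace(0, 4, 12)
vals = harmonic_solver(f_sparse)

# Interpolate real and imaginary parts onto a dense uniform grid...
f_dense = np.linspace(1.0, 1e4, 2048)
spec = (np.interp(f_dense, f_sparse, vals.real)
        + 1j * np.interp(f_dense, f_sparse, vals.imag))

# ...then synthesize the pulsed (time-domain) signal via inverse FFT.
pulse = np.fft.irfft(spec)
```

    Only 12 solver calls are needed instead of 2048, which is where the claimed efficiency comes from.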

  4. NTS radiological assessment project: comparison of delta-surface interpolation with kriging for the Frenchman Lake region of area 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, T.A. Jr.

    The primary objective of this report is to compare the results of delta surface interpolation with kriging on four large sets of radiological data sampled in the Frenchman Lake region at the Nevada Test Site. The results of kriging, described by Barnes, Giacomini, Reiman, and Elliott, are very similar to those using the delta surface interpolant. The other topic studied is reducing the number of sample points while obtaining results similar to those using all of the data. The positive results here suggest that great savings of time and money can be made. Furthermore, the delta surface interpolant is viewed as a contour map and as a three-dimensional surface. These graphical representations help in the analysis of the large sets of radiological data.

  5. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and to resist the effects of a malicious eavesdropper, we propose a controllable CVQKD protocol that applies a tunable linear optics cloning machine (LOCM) at one participant's side, say, Alice's. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.

  6. An adaptive interpolation scheme for molecular potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
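
    A minimal sketch of such adaptive node refinement, assuming SciPy's RBFInterpolator with a thin-plate (polyharmonic) kernel and a 2-D toy function standing in for electronic structure calls:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):  # toy 2-D "potential surface" in place of ab initio calls
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

rng = np.random.default_rng(0)
nodes = rng.uniform(-2, 2, size=(20, 2))   # initial sample points
cand = rng.uniform(-2, 2, size=(400, 2))   # candidate refinement points

for _ in range(5):
    interp = RBFInterpolator(nodes, f(nodes), kernel="thin_plate_spline")
    err = np.abs(interp(cand) - f(cand))   # local error estimate
    if err.max() < 1e-3:
        break
    worst = cand[np.argsort(err)[-5:]]     # refine where the error is largest
    nodes = np.vstack([nodes, worst])
```

    Each round adds points only where the current surrogate is worst, which is the mechanism by which the total number of expensive evaluations stays small.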

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sen; Luo, Sheng-Nian

    Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10–100 times larger than those obtained with monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectral shape is explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra, represented by two undulator sources of the Advanced Photon Source, are examined as an example, as regards the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.

  8. Frictional-faulting model for harmonic tremor before Redoubt Volcano eruptions

    USGS Publications Warehouse

    Dmitrieva, Ksenia; Hotovec-Ellis, Alicia J.; Prejean, Stephanie G.; Dunham, Eric M.

    2013-01-01

    Seismic unrest, indicative of subsurface magma transport and pressure changes within fluid-filled cracks and conduits, often precedes volcanic eruptions. An intriguing form of volcano seismicity is harmonic tremor, that is, sustained vibrations in the range of 0.5–5 Hz. Many source processes can generate harmonic tremor. Harmonic tremor in the 2009 eruption of Redoubt Volcano, Alaska, has been linked to repeating earthquakes of magnitudes around 0.5–1.5 that occur a few kilometres beneath the vent. Before many explosions in that eruption, these small earthquakes occurred in such rapid succession—up to 30 events per second—that distinct seismic wave arrivals blurred into continuous, high-frequency tremor. Tremor abruptly ceased about 30 s before the explosions. Here we introduce a frictional-faulting model to evaluate the credibility and implications of this tremor mechanism. We find that the fault stressing rates rise to values ten orders of magnitude higher than in typical tectonic settings. At that point, inertial effects stabilize fault sliding and the earthquakes cease. Our model of the Redoubt Volcano observations implies that the onset of volcanic explosions is preceded by active deformation and extreme stressing within a localized region of the volcano conduit, at a depth of several kilometres.

  9. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    NASA Astrophysics Data System (ADS)

    Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.

    2018-03-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) for fitting classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than the models using the image composites, with corresponding individual class accuracies between 6 and 45 percentage points higher.
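
    Harmonic regression of this kind reduces to linear least squares on a Fourier design matrix. A minimal sketch on synthetic data (the coefficients, sampling dates, and second-order model are illustrative, not taken from the study):

```python
import numpy as np

# Irregularly timed observations over one year (e.g. a spectral index),
# modeled as b0 + sum_k [a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)].
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1, 60))              # time as fraction of a year
y = 0.4 + 0.3 * np.cos(2 * np.pi * t) + 0.1 * np.sin(4 * np.pi * t)
y += rng.normal(0, 0.01, t.size)                # observation noise

# Design matrix for a second-order harmonic regression.
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # estimated Fourier coefficients
```

    The fitted coefficients (mean, amplitude, and phase content) are exactly the kind of per-pixel predictor variables the study feeds into k-NN and random forest models.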

  10. Modular approach to achieving the next-generation X-ray light source

    NASA Astrophysics Data System (ADS)

    Biedron, S. G.; Milton, S. V.; Freund, H. P.

    2001-12-01

    A modular approach to the next-generation light source is described. The "modules" include photocathode radio-frequency electron guns and their associated drive-laser systems, linear accelerators, bunch-compression systems, seed laser systems, planar undulators, two-undulator harmonic generation schemes, high-gain harmonic generation systems, nonlinear higher harmonics, and wavelength shifting. These modules will be helpful in distributing the next-generation light source to many more laboratories than the current single-pass, high-gain free-electron laser designs permit, due to monetary and/or physical space constraints.

  11. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
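
    The counting argument above can be checked directly with np.polyfit: n + 1 points determine a degree-n polynomial (the sample points below are illustrative).

```python
import numpy as np

# Three noncollinear points determine a unique quadratic.
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 9.0)]
x, y = np.array(pts).T
coeffs = np.polyfit(x, y, deg=len(pts) - 1)   # leading coefficient first
p = np.poly1d(coeffs)                         # here p(x) = 2x^2 + 1
```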

  12. Use of shape-preserving interpolation methods in surface modeling

    NASA Technical Reports Server (NTRS)

    Fritsch, F. N.

    1984-01-01

    In many large-scale scientific computations, it is necessary to use surface models based on information provided at only a finite number of points (rather than determined everywhere via an analytic formula). As an example, an equation of state (EOS) table may provide values of pressure as a function of temperature and density for a particular material. These values, while known quite accurately, are typically known only on a rectangular (but generally quite nonuniform) mesh in (T,d)-space. Thus interpolation methods are necessary to completely determine the EOS surface. The most primitive EOS interpolation scheme is bilinear interpolation. This has the advantages of depending only on local information, so that changes in data remote from a mesh element have no effect on the surface over the element, and of preserving shape information, such as monotonicity. Most scientific calculations, however, require greater smoothness. Standard higher-order interpolation schemes, such as Coons patches or bicubic splines, while providing the requisite smoothness, tend to produce surfaces that are not physically reasonable. This means that the interpolant may have bumps or wiggles that are not supported by the data. The mathematical quantification of ideas such as "physically reasonable" and "visually pleasing" is examined.
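
    The bilinear EOS lookup described above fits in a few lines; a minimal sketch on a hypothetical nonuniform (T, d) mesh. Since the toy data P = T*d is itself bilinear, the interpolant reproduces it exactly inside the mesh.

```python
import numpy as np

# Hypothetical EOS table: pressure on a nonuniform (T, d) mesh.
T = np.array([100.0, 300.0, 1000.0])      # temperature nodes
d = np.array([0.5, 1.0, 4.0])             # density nodes
P = np.outer(T, d)                        # toy data: P = T * d

def bilinear(Tq, dq):
    i = np.searchsorted(T, Tq) - 1        # locate the enclosing mesh cell
    j = np.searchsorted(d, dq) - 1
    u = (Tq - T[i]) / (T[i + 1] - T[i])   # local cell coordinates in [0, 1]
    v = (dq - d[j]) / (d[j + 1] - d[j])
    return ((1 - u) * (1 - v) * P[i, j] + u * (1 - v) * P[i + 1, j]
            + (1 - u) * v * P[i, j + 1] + u * v * P[i + 1, j + 1])
```

    Because each query uses only the four corners of its cell, remote data changes cannot affect the local surface, and monotone corner values yield a monotone patch, which is the locality and shape preservation the abstract credits to this scheme.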

  13. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
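
    The operation at the core of this approach is a discrete Fourier sum evaluated at unevenly spaced frequencies. A naive O(MN) sketch is shown below (an NUFFT library such as FINUFFT evaluates the same sum in near O(N log N) time; integer frequencies are used here only so the result can be checked against a standard FFT).

```python
import numpy as np

# Direct evaluation of a Fourier sum at arbitrary (uneven) frequencies.
def nudft(field, x, freqs):
    # field: complex samples at positions x; freqs: arbitrary frequencies
    return np.array([np.sum(field * np.exp(-2j * np.pi * f * x))
                     for f in freqs])

N = 64
x = np.arange(N) / N                      # uniform source grid
field = np.random.default_rng(2).normal(size=N) + 0j
k = np.arange(N) - N // 2                 # integer frequencies, for checking
spec = nudft(field, x, k)                 # matches fftshift(fft(field)) here
```

    With non-integer entries in `k` (the rotated-spectrum case), the same function evaluates the spectrum exactly at the uneven points, which is what removes the interpolation error of the conventional FFT-based approach.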

  14. Moho map of South America from receiver functions and surface waves

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; van der Lee, Suzan; França, George Sand; Assumpção, Marcelo; Feng, Mei

    2010-11-01

    We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.

  15. High-harmonic generation by two-color mixing of circularly polarized laser fields

    NASA Astrophysics Data System (ADS)

    Milošević, D. B.; Becker, W.; Kopold, R.

    2000-06-01

    Dipole selection rules prevent harmonic generation by an atom in a circularly polarized laser field. However, this is not the case for a superposition of several circularly polarized fields, such as two circularly polarized fields with frequencies ω and 2ω that corotate or counter-rotate in the same plane. Harmonic generation in this environment has been observed and, in fact, found to be very intense in the counter-rotating case [1]. In a certain frequency region, the harmonics may be stronger than those radiated in a linearly polarized field of either frequency. The selection rules dictate that the harmonics are circularly polarized with a helicity that alternates from one harmonic to the next. Besides their practical interest, these harmonics are also intriguing from a fundamental point of view: the standard simple-man picture does not apply since orbits that start with zero velocity in this field almost never return to their point of departure. In terms of quantum trajectories, we discuss the mechanism that generates these harmonics. In several interesting ways, it is complementary to the case of linear polarization. [1] H. Eichmann et al., Phys. Rev. A 51, R3414 (1995)

  16. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors have previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface, without the need for any robot language, between operators and the machining robot. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly from STL data without any commercially provided CAM system. The STL format describes a curved surface geometry by a triangular representation. The preprocessor allows machining robots to be controlled along a zigzag or spiral path directly calculated from STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  17. REVIEW ARTICLE: Harmonically mode-locked semiconductor-based lasers as high repetition rate ultralow noise pulse train and optical frequency comb sources

    NASA Astrophysics Data System (ADS)

    Quinlan, F.; Ozharar, S.; Gee, S.; Delfyett, P. J.

    2009-10-01

    Recent experimental work on semiconductor-based harmonically mode-locked lasers geared toward low noise applications is reviewed. Active, harmonic mode-locking of semiconductor-based lasers has proven to be an excellent way to generate 10 GHz repetition rate pulse trains with pulse-to-pulse timing jitter of only a few femtoseconds without requiring active feedback stabilization. This level of timing jitter is achieved in long fiberized ring cavities and relies upon such factors as low noise rf sources as mode-lockers, high optical power, intracavity dispersion management and intracavity phase modulation. When a high finesse etalon is placed within the optical cavity, semiconductor-based harmonically mode-locked lasers can be used as optical frequency comb sources with 10 GHz mode spacing. When active mode-locking is replaced with regenerative mode-locking, a completely self-contained comb source is created, referenced to the intracavity etalon.

  18. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which constructs a continuous transition between two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.

  19. Waveforms for optimal sub-keV high-order harmonics with synthesized two- or three-colour laser fields.

    PubMed

    Jin, Cheng; Wang, Guoli; Wei, Hui; Le, Anh-Thu; Lin, C D

    2014-05-30

    High-order harmonics extending to the X-ray region generated in a gas medium by intense lasers offer the potential for providing tabletop broadband light sources but so far are limited by their low conversion efficiency. Here we show that harmonics can be enhanced by one to two orders of magnitude without an increase in the total laser power if the laser's waveform is optimized by synthesizing two- or three-colour fields. The harmonics thus generated are also favourably phase-matched so that radiation is efficiently built up in the gas medium. Our results, combined with the emerging intense high-repetition MHz lasers, promise to increase harmonic yields by several orders to make harmonics feasible in the near future as general bright tabletop light sources, including intense attosecond pulses.

  20. Selective harmonic elimination strategy in eleven level inverter for PV system with unbalanced DC sources

    NASA Astrophysics Data System (ADS)

    Ghoudelbourk, Sihem.; Dib, D.; Meghni, B.; Zouli, M.

    2017-02-01

    The paper deals with a control strategy for multilevel converters in photovoltaic systems integrated into distribution grids. The objective of the proposed work is to design multilevel inverters for solar energy applications so as to reduce the Total Harmonic Distortion (THD) and improve power quality. The multilevel inverter power structure plays a vital role in every aspect of the power system, and it is easier to produce a high-power, high-voltage inverter with the multilevel structure. Multilevel inverter topologies have several advantages, such as high output voltage, lower THD and reduced voltage ratings of the power semiconductor switching devices. The proposed control strategy implements selective harmonic elimination (SHE) modulation for eleven levels. SHE is an efficient strategy for eliminating selected harmonics by judicious selection of the firing angles of the inverter, which removes the need for expensive low-pass filters in the system. Previous research assumed constant and equal DC sources with invariant behavior; this research extends earlier work to include varying DC sources, which are typical of lead-acid batteries used in PV systems. This study also investigates methods to minimize the total harmonic distortion of the synthesized multilevel waveform and to help balance the battery voltages. The harmonic elimination method was used to eliminate selected lower dominant harmonics resulting from the inverter switching action.
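
    A minimal sketch of the SHE firing-angle computation, reduced for brevity to three angles eliminating the 5th and 7th harmonics of a seven-level cascaded waveform (the eleven-level case in the paper uses five angles and four eliminated harmonics; the modulation index is an assumed value), using scipy.optimize.fsolve:

```python
import numpy as np
from scipy.optimize import fsolve

m = 0.8  # modulation index (assumed value for illustration)

# SHE equations for three firing angles: set the fundamental amplitude
# and drive the 5th and 7th harmonic sums to zero.
def she_equations(a):
    a1, a2, a3 = a
    return [np.cos(a1) + np.cos(a2) + np.cos(a3) - 3 * m,
            np.cos(5 * a1) + np.cos(5 * a2) + np.cos(5 * a3),
            np.cos(7 * a1) + np.cos(7 * a2) + np.cos(7 * a3)]

# Ascending initial guess in (0, pi/2), in radians.
angles = fsolve(she_equations, x0=[0.2, 0.5, 1.0])
```

    With varying DC sources, as in the paper, each cosine term is weighted by its bridge voltage, but the structure of the transcendental system is the same.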

  1. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

    Temperature data with high spatial resolution are essential for appropriate and qualitative analysis of local characteristics. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature; thus, in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high-quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality were chosen. In order to validate the interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
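
    For reference, the baseline method mentioned above, inverse distance weighting, fits in a few lines (the station layout and values below are hypothetical; kriging with external drift additionally models a trend on covariates such as elevation and a spatial covariance, which is what this simple scheme lacks):

```python
import numpy as np

# Inverse distance weighting: the prediction at a query point is a
# distance-weighted mean of the station observations.
def idw(stations, values, query, power=2.0):
    d = np.linalg.norm(stations - query, axis=1)
    if np.any(d == 0):                    # query coincides with a station
        return values[np.argmin(d)]
    w = d ** -power
    return np.sum(w * values) / np.sum(w)

stations = np.array([[0.0, 0.0], [1.0, 0.0]])   # hypothetical station coords
temps = np.array([0.0, 10.0])                    # hypothetical monthly means
```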

  2. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  3. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  4. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no further ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by an International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.

  5. Reduction of background clutter in structured lighting systems

    DOEpatents

    Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.

    2010-06-22

    Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e. clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength, and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
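
    The clutter-estimation step can be sketched as a per-pixel linear interpolation between two clutter-only bands bracketing the laser wavelength, followed by subtraction (all wavelengths and pixel values below are illustrative, not from the patent):

```python
import numpy as np

# Band-center wavelengths: two clutter-only bands around the laser line.
lam_lo, lam_laser, lam_hi = 620.0, 650.0, 680.0     # nm (illustrative)
img_lo = np.full((4, 4), 10.0)                      # clutter-only images
img_hi = np.full((4, 4), 14.0)
img_laser = np.full((4, 4), 20.0)                   # clutter + laser return

# Interpolate the clutter to the laser wavelength, then subtract it.
t = (lam_laser - lam_lo) / (lam_hi - lam_lo)
clutter = (1 - t) * img_lo + t * img_hi             # estimated background
signal = img_laser - clutter                        # segmented laser return
```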

  6. Improving emissions inventories in North America through systematic analysis of model performance during ICARTT and MILAGRO

    NASA Astrophysics Data System (ADS)

    Mena, Marcelo Andres

    During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improving model performance in comparison to observations is shown. The method allows identifying sources of model error from boundary conditions and emissions inventories. Simultaneous analysis of the horizontal interpolation of model error and error covariance showed that the error in ozone modeling is highly correlated with the error of its precursors, and that the correlation also has a geographical structure. During ICARTT, ozone modeling error was reduced by updating the National Emissions Inventory from 1999 to 2001, and further by updating large point source emissions from continuous monitoring data. Additional improvements were achieved by reducing area emissions of NOx by 60% for states in the southeastern United States. Ozone error was highly correlated with NOy error during this campaign, and ozone production in the United States was most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but the error in ozone modeling was large due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were achieved by decreasing NOx emissions in Mexico City by 50% and VOC emissions by 60%. Recurring ozone error is spatially correlated with CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%, and can convert regional ozone production regimes from VOC- to NOx-limited. A method of interpolating observations along flight tracks is shown, which can be used to infer the direction of outflow plumes.
    Ratios such as O3/NOy and NOx/NOy can provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.

  7. Assimilation of ground and satellite snow observations in a distributed hydrologic model to improve water supply forecasts in the Upper Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Micheletty, P. D.; Day, G. N.; Quebbeman, J.; Carney, S.; Park, G. H.

    2016-12-01

    The Upper Colorado River Basin above Lake Powell is a major source of water supply for 25 million people and provides irrigation water for 3.5 million acres. Approximately 85% of the annual runoff is produced from snowmelt. Water supply forecasts of the April-July runoff produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC) are critical to basin water management. This project leverages advanced distributed models, datasets, and snow data assimilation techniques to improve operational water supply forecasts made by CBRFC in the Upper Colorado River Basin. The current work will specifically focus on improving water supply forecasts through the implementation of a snow data assimilation process coupled with the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM). Three types of observations will be used in the snow data assimilation system: satellite Snow Covered Area (MODSCAG), satellite Dust Radiative Forcing in Snow (MODDRFS), and SNOTEL Snow Water Equivalent (SWE). SNOTEL SWE provides the main source of high-elevation snowpack information during the snow season; however, these point measurement sites are carefully selected to provide consistent indices of snowpack and may not be representative of the surrounding watershed. We address this problem by transforming the SWE observations to standardized deviates and interpolating the standardized deviates using a spatial regression model. The interpolation process will also take advantage of the MODIS Snow Covered Area and Grainsize (MODSCAG) product to inform the model of the spatial distribution of snow. The interpolated standardized deviates are back-transformed and used in an Ensemble Kalman Filter (EnKF) to update the model-simulated SWE. The MODIS Dust Radiative Forcing in Snow (MODDRFS) product will be used more directly, through temporary adjustments to model snowmelt parameters, which should improve melt estimates in areas affected by dust on snow.
    In order to assess the value of the different data sources, reforecasts will be produced for a historical period and performance measures will be computed to assess forecast skill. The existing CBRFC Ensemble Streamflow Prediction (ESP) reforecasts will provide a baseline for comparison to determine the added value of the data assimilation process.
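
    The EnKF analysis step at the core of the scheme above admits a compact sketch. The state size, ensemble values, and observation operator below are invented for illustration; the operational system works on gridded HL-RDHM states, not three cells:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """One EnKF analysis step. ensemble is (n_ens, n_state); obs is a
    vector of observations with error variance obs_var; H maps state
    to observation space, shape (n_obs, n_state)."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)             # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                        # sample covariance
    R = obs_var * np.eye(len(obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), (n_ens, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
# 20-member ensemble of SWE (mm) at 3 cells, prior mean ~ [100, 120, 90]
prior = rng.normal([100.0, 120.0, 90.0], 15.0, size=(20, 3))
H = np.array([[1.0, 0.0, 0.0]])        # a SNOTEL site observing cell 0
posterior = enkf_update(prior, np.array([130.0]), 4.0, H, rng)
```

    Because the observation error (variance 4) is much smaller than the prior spread, the posterior ensemble mean at the observed cell is pulled close to 130 mm and its spread collapses; unobserved cells are adjusted through the sample covariance.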

  8. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

    29 Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization Toward...interpolation and Interpolation by Box Spline Surfaces Charles K. Chui, Harvey Diamond, Louise A. Raphael. 301 Knot Selection for Least Squares...West Virginia University, Morgantown, West Virginia; and Louise Raphael, National Science Foundation, Washington, DC Knot Selection for Least

  9. Low Relative Humidity in the Atmosphere

    DTIC Science & Technology

    1989-01-01

    occur [1]. Dew points in most standard meteorological data are calculated from measurements with psychrometers to determine wet-bulb and dry-bulb...variation existed in relation to the observed data during the day. Only one entire day was missing, and an interpolation was made between the...few sporadic reports at other hours. No attempt has been made to interpolate missing observations as was done for Yuma. Because of the large number of

  10. High order cell-centered scheme totally based on cell average

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Yu; Cai, Qing-Dong

    2018-05-01

    This work clarifies the concept of the cell average by pointing out the difference between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high-order QUICK-like numerical scheme is designed for such interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
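
    The distinction drawn above can be made concrete with a one-line calculation: for f(x) = x^2 on a cell of width h centred at x0, the exact cell average exceeds the centroid (pointwise centre) value by h^2/12, so conflating the two introduces a second-order error. A small check, with values chosen here purely for illustration:

```python
# cell average vs. cell-centroid value for f(x) = x**2
x0, h = 1.0, 0.1
centroid_value = x0**2                                   # pointwise value at centre
cell_average = ((x0 + h/2)**3 - (x0 - h/2)**3) / (3*h)   # exact (1/h) * integral of x**2
difference = cell_average - centroid_value
print(difference, h**2 / 12)   # the gap is exactly h**2/12
```

    A scheme that interpolates cell averages as if they were centroid values therefore degrades to second-order accuracy regardless of the stencil's formal order.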

  11. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, J.; Nicholson, S.; Moore, J. G.

    1985-01-01

    Research at NASA Lewis Research Center gave the opportunity to incorporate new control volumes in the Denton 3-D finite-volume time-marching code. For duct flows, the new control volumes require no transverse smoothing, and this allows calculations with large transverse gradients in properties without significant numerical total pressure losses. Possibilities for improving the Denton code to obtain better distributions of properties through shocks were demonstrated. Much better total pressure distributions through shocks are obtained when the interpolated effective pressure, needed to stabilize the solution procedure, is used to calculate the total pressure. This simple change largely eliminates the undershoot in total pressure downstream of a shock. Overshoots and undershoots in total pressure can then be further reduced by a factor of 10 by adopting the effective density method rather than the effective pressure method. Use of a Mach number dependent interpolation scheme for pressure then removes the overshoot in static pressure downstream of a shock. The stability of interpolation schemes used for the calculation of effective density is analyzed, and a Mach number dependent scheme is developed, combining the advantages of the correct perfect gas equation for subsonic flow with the stability of 2-point and 3-point interpolation schemes for supersonic flow.

  12. Generation of a precise DEM by interactive synthesis of multi-temporal elevation datasets: a case study of Schirmacher Oasis, East Antarctica

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    Digital elevation models (DEMs) are indispensable for analyses such as topographic feature extraction, ice sheet melting, slope stability analysis, landscape analysis and so on. Such analyses require a highly accurate DEM. Available DEMs of the Antarctic region, compiled using radar altimetry and the Antarctic digital database, indicate elevation variations of up to hundreds of meters, which necessitates the generation of an improved local DEM. An improved DEM of the Schirmacher Oasis, East Antarctica has been generated by synergistically fusing satellite-derived laser altimetry data from the Geoscience Laser Altimeter System (GLAS), Radarsat Antarctic Mapping Project (RAMP) elevation data and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global elevation data (GDEM). This is a distinctive attempt to generate a DEM of a part of Antarctica by fusing multiple elevation datasets, which is essential for modeling ice elevation change and addressing ice mass balance. We analyzed a suite of interpolation techniques for constructing a DEM from the GLAS, RAMP and ASTER DEM-based point elevation datasets, in order to determine the level of confidence with which these techniques can generate a better interpolated continuous surface, and eventually improve the elevation accuracy of a DEM built from the synergistically fused RAMP, GLAS and ASTER point elevation datasets. The DEM presented in this work has a vertical accuracy (≈ 23 m) better than the RAMP DEM (≈ 57 m) and the ASTER DEM (≈ 64 m) individually. The RAMP and ASTER DEM elevations were corrected using differential GPS elevations as ground reference data, and the accuracy obtained after fusing the multitemporal datasets is better than that of existing DEMs constructed using RAMP or ASTER alone.
    This is our second attempt at fusing multitemporal, multisensor and multisource elevation data to generate a DEM of Antarctica, in order to model ice elevation change and address ice mass balance. Our approach exploits the strengths of each elevation data source to produce an accurate elevation model.

  13. On the Application of Euler Deconvolution to the Analytic Signal

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Pasteka, R.

    2005-05-01

    In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be considered an unknown to be solved for, together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that the use of an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function being analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivation of this function using standard potential field analysis tools is not correct. In this note we theoretically and empirically check what kind of errors are caused in ED by this wrong assumption about the harmonicity of the analytic signal. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it by finite differences, using upward continuation. 2. The errors in the vertical derivative computed as if the analytic signal were harmonic reflect mainly on the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3.
    Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index if the analytic signal is treated as a harmonic function.
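
    Conclusion 1 can be sketched numerically: upward-continue the gridded field by a small step via its Fourier spectrum, then take a finite difference between levels. The grid, test field, and positive-downward sign convention below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def vertical_derivative_fd(field, dx, dz):
    """Finite-difference vertical derivative of a plane-gridded field:
    damp the 2-D spectrum by exp(-|k| dz) (upward continuation by dz)
    and difference with the original level (positive-downward)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    up = np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * dz)).real
    return (field - up) / dz

# check on a harmonic field cos(k0*x)*exp(-k0*z): at z = 0 the exact
# downward derivative is k0*cos(k0*x)
n, dx = 128, 1.0
x = np.arange(n) * dx
k0 = 2 * np.pi * 4 / (n * dx)
field = np.tile(np.cos(k0 * x), (n, 1))
dfdz = vertical_derivative_fd(field, dx, dz=0.5)
```

    The finite-difference result matches the analytic derivative to within a few percent (the first-order bias shrinks with dz); the point of the note above is that this construction remains valid for non-harmonic inputs such as the analytic signal, where a spectral |k|-multiplication would not.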

  14. Using Social Media Derived Information to Reduce Ambiguity in Parcel Data

    NASA Astrophysics Data System (ADS)

    Sims, K.; Thakur, G.

    2017-12-01

    High-resolution spatiotemporal analyses often rely on the integration and harmonization of many unique data sources. Harmonized data can be especially useful in mobility/transportation planning, site selection/development planning, urban resiliency, sustainability, utility planning, and population modeling. However, even the most complete harmonized data sources can still possess gaps in their content, hindering their utility. For example, CoreLogic's ParcelPoint dataset is a nationwide collection of parcel points and polygons from nearly every U.S. county's local authority. While certain local land use parcel descriptions transfer easily to a national dataset, some do not, in part because of data ambiguity or regionality. This research explores incorporating Points of Interest (POI) data derived from social media in order to reduce land use ambiguity in parcel data. Facebook, specifically, allows owners of businesses and institutions to create personalized pages with attributes like Name, Address, Location Type, Hours of Operation, Check-In counts, and designated latitude and longitude coordinates. These metadata can offer alternative land use descriptions and insights where such information is otherwise unavailable, or where the land use associated with a parcel is not definitive. More importantly, this additional POI layer can allow for better representations of the places around us by adding popularity and temporal aspects to an otherwise static land use dataset. Furthermore, those responsible for emergency preparedness and response would benefit immensely from a more dynamic land use mapping capability. That said, there are known limitations of social media data due to its volunteered nature. To determine whether these limitations can be overcome so that social-media-derived data can supplement national land use data, diverse study areas will be selected across the U.S. to yield a varied collection of POIs.
    Their Location Type will then be examined to create land use type parallels to the parcel land use descriptions. Finally, the POIs that intersect the parcel data will be compared to assess how the two datasets agree or disagree at the ground level. Ideally, parcels with ambiguous or no absolute land use classification can be supplemented with Facebook POI descriptions.

  15. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area have received nationwide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevailing summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during KORUS-AQ 2016, including aircraft measurements. In general, the horizontal grid resolution of Eulerian photochemical models has a profound effect on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air quality Model with extensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  16. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples using Mixture Density Networks (MDNs), a class of neural networks whose outputs are the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
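
    The committee construction admits a very small sketch: each member MDN outputs a Gaussian mixture over a source parameter, and the committee density is simply the average of the members' mixtures. The two mixtures below are invented for illustration, not trained networks:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def committee_pdf(x, members):
    """Average the mixture densities of the committee members; each
    member is a list of (weight, mean, std) Gaussian components."""
    mixtures = [sum(w * gaussian(x, m, s) for w, m, s in comps)
                for comps in members]
    return np.mean(mixtures, axis=0)

# two hypothetical MDN members predicting centroid depth (km)
net_a = [(0.7, 10.0, 2.0), (0.3, 20.0, 4.0)]
net_b = [(0.5, 11.0, 2.5), (0.5, 18.0, 5.0)]
depth = np.linspace(0.0, 40.0, 4001)
pdf = committee_pdf(depth, [net_a, net_b])
mass = ((pdf[1:] + pdf[:-1]) / 2 * np.diff(depth)).sum()   # trapezoid integral
```

    Since each member's mixture integrates to one, the equal-weight average is itself a normalized density, so the committee output can be read directly as p(m|d).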

  17. Topsoil pollution forecasting using artificial neural networks on the example of the abnormally distributed heavy metal at Russian subarctic

    NASA Astrophysics Data System (ADS)

    Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.

    2017-06-01

    Forecasting soil pollution is a considerable field of study in light of the general concern with environmental protection issues. Due to the variation of content and the spatial heterogeneity of pollutant distribution in urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide adequate interpolation accuracy. Moreover, the problem of predicting the distribution of an element with high variability in concentration at the study site is particularly difficult. This work presents two neural network models forecasting the spatial content of an abnormally distributed soil pollutant (Cr) at a particular location in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques were built, implemented and tested using ArcGIS and MATLAB. To verify the models' performance, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km2 area and then split into an independent training data set (105 points) and a validation data set (45 points). The training data set was used for interpolation by ordinary kriging, while the validation data set was used to test the accuracies. The network structures were chosen during a computer simulation based on minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that MLP can achieve better accuracy than both kriging and even GRNN for interpolating surfaces.
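
    A GRNN is essentially Nadaraya-Watson kernel regression, which makes the comparison easy to reproduce in a few lines. The synthetic concentration field, the 105/45 train/validation split, and the bandwidth below are illustrative stand-ins for the Novy Urengoy Cr data, not the paper's dataset:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Generalized regression neural network: a Gaussian-kernel
    weighted average of the training targets (Nadaraya-Watson)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (105, 2))            # training sample locations
y = np.sin(3 * X[:, 0]) + X[:, 1]          # synthetic "concentration" field
Xq = rng.uniform(0, 1, (45, 2))            # validation locations
pred = grnn_predict(X, y, Xq, sigma=0.1)
rmse = np.sqrt(np.mean((pred - (np.sin(3 * Xq[:, 0]) + Xq[:, 1])) ** 2))
```

    The single smoothing parameter sigma plays the role of the network "spread"; in practice it would be tuned on the validation set, exactly as the RMSE-minimizing structure search described above.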

  18. Comparative Study of Two InGaAs-Based Reference Radiation Thermometers

    NASA Astrophysics Data System (ADS)

    Nasibov, H.; Diril, A.; Pehlivan, O.; Kalemci, M.

    2017-07-01

    More than a decade ago, an InGaAs detector-based transfer standard infrared radiation thermometer working in the temperature range from 150 °C to 1100 °C was built at TUBITAK UME within the scope of a collaboration with IMGC (INRIM since 2006). Since then, the radiation thermometer has been used for the dissemination of the radiation temperature scale below the silver fixed-point temperature. Recently, a new radiation thermometer with the same design but a different spectral responsivity was constructed and employed in the laboratory. In this work, we present a comparative study of these thermometers. The paper describes the measurement results for the thermometers' main characteristics, such as the size-of-source effect, spectral responsivity, gain ratio, and linearity. In addition, both thermometers were calibrated at the freezing temperatures of indium, tin, zinc, aluminum, and copper reference fixed-point blackbodies. The main study focuses on the impact of the spectral responsivity of the thermometers on the interpolation parameters of the Sakuma-Hattori equation. The calibration results and the uncertainty sources are also discussed.
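
    The interpolation step discussed above amounts to fitting the Planck-form Sakuma-Hattori equation, S(T) = C / (exp(c2 / (A*T + B)) - 1), to fixed-point signals and then inverting it for temperature. The signal values and parameter magnitudes below are hypothetical (loosely InGaAs-like), not the instruments' calibration data:

```python
import numpy as np
from scipy.optimize import curve_fit

C2 = 0.014388  # second radiation constant, m*K

def sakuma_hattori(T, a, b, c):
    """Planck-form Sakuma-Hattori equation; parameters scaled to order
    unity for the fit (A = a*1e-6, B = b*1e-9, C = c*1e4)."""
    return c * 1e4 / (np.exp(C2 / (a * 1e-6 * T + b * 1e-9)) - 1.0)

# ITS-90 freezing points (K): In, Sn, Zn, Al, Cu
T_fix = np.array([429.7485, 505.078, 692.677, 933.473, 1357.77])
a0, b0, c0 = 1.55, 6.0, 3.2              # hypothetical "true" parameters
signal = sakuma_hattori(T_fix, a0, b0, c0)

popt, _ = curve_fit(sakuma_hattori, T_fix, signal, p0=[1.5, 0.0, 1.0])
a, b, c = popt
S_800 = sakuma_hattori(800.0, a0, b0, c0)      # a mid-range measurement
# invert the fitted equation for temperature
T_rec = (C2 / np.log(c * 1e4 / S_800 + 1.0) - b * 1e-9) / (a * 1e-6)
```

    The parameter A is approximately the mean effective wavelength, which is why the spectral responsivity of each thermometer feeds directly into the fitted interpolation parameters compared in the paper.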

  19. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques. Kriging and the Random Forest classifier algorithm are examples of predictors with this aim. The objective of this work was to compare methods of soil attribute spatialization in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested by using morphometric covariables, and by geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained using a Terrestrial Laser Scanner and were generated from a cloud of points at spatial resolutions of 1, 5, 10, 20 and 30 m. From these, 40 morphometric covariates were generated. Simple Kriging was performed using the R software. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites through the Random Forest interpolator. Little difference was observed between the predictions generated by the Simple Kriging and Random Forest interpolators. Moreover, DTMs with better spatial resolution did not improve the quality of soil attribute prediction. Results revealed that Simple Kriging can be used as an interpolator when morphometric covariates are not available, with little impact on quality. It is necessary to go further in soil chemical attribute prediction techniques, especially in periglacial areas with complex landforms.

  20. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r - solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a part of a spatial harmonic series in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate in advance the inversion achievable with a given observational system, which allows a more effective disposition of the tsunameters to be proposed through precomputation. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results.
    Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The recovered function can find practical applications both as an initial condition for various optimization approaches and for computation of tsunami wave propagation.
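
    The r-solution is a truncated-SVD least-squares solution. A generic illustration of why truncation stabilizes such inversions (the smoothing-kernel forward operator and noise level below are invented, not the tsunami propagation matrix):

```python
import numpy as np

def r_solution(A, b, r):
    """Least-squares solution of A x = b retaining only the r largest
    singular values; truncation suppresses noise amplification in
    ill-posed problems at the cost of a controlled bias."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:r] = 1.0 / s[:r]
    return Vt.T @ (inv_s * (U.T @ b))

# ill-posed toy problem: a smooth source seen through a smoothing kernel
rng = np.random.default_rng(3)
n = 50
x_true = np.exp(-0.5 * ((np.arange(n) - 25.0) / 8.0) ** 2)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-((i - j) / 8.0) ** 2)
b = A @ x_true + rng.normal(0.0, 0.01, n)

s = np.linalg.svd(A, compute_uv=False)
r = int((s > 1e-2 * s[0]).sum())          # keep well-resolved harmonics only
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_trunc = r_solution(A, b, r)
err_naive = np.linalg.norm(x_naive - x_true)
err_trunc = np.linalg.norm(x_trunc - x_true)
```

    The naive least-squares solution divides the data noise by tiny singular values and is destroyed, while the r-solution stays close to the true source; inspecting the singular spectrum, as the abstract notes, also reveals how many source harmonics a given station set can resolve.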

  1. High-harmonic generation in amorphous solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yong Sing; Yin, Yanchun; Wu, Yi

    High-harmonic generation in isolated atoms and molecules has been widely utilized in extreme ultraviolet photonics and attosecond pulse metrology. Recently, high-harmonic generation has been observed in solids, which could lead to important applications such as all-optical methods to image valence charge density and reconstruct electronic band structures, as well as compact extreme ultraviolet light sources. So far these studies have been confined to crystalline solids; therefore, decoupling the respective roles of long-range periodicity and high density has been challenging. Here we report the observation of high-harmonic generation from amorphous fused silica. We decouple the role of long-range periodicity by comparing harmonics generated from fused silica and crystalline quartz, which contain the same atomic constituents but differ in long-range periodicity. These results advance the current understanding of the strong-field processes leading to high-harmonic generation in solids, with implications for the development of robust and compact extreme ultraviolet light sources.

  2. High-harmonic generation in amorphous solids

    DOE PAGES

    You, Yong Sing; Yin, Yanchun; Wu, Yi; ...

    2017-09-28

    High-harmonic generation in isolated atoms and molecules has been widely utilized in extreme ultraviolet photonics and attosecond pulse metrology. Recently, high-harmonic generation has been observed in solids, which could lead to important applications such as all-optical methods to image valence charge density and reconstruct electronic band structures, as well as compact extreme ultraviolet light sources. So far these studies have been confined to crystalline solids; therefore, decoupling the respective roles of long-range periodicity and high density has been challenging. Here we report the observation of high-harmonic generation from amorphous fused silica. We decouple the role of long-range periodicity by comparing harmonics generated from fused silica and crystalline quartz, which contain the same atomic constituents but differ in long-range periodicity. These results advance the current understanding of the strong-field processes leading to high-harmonic generation in solids, with implications for the development of robust and compact extreme ultraviolet light sources.

  3. Optimization of pressure probe placement and data analysis of engine-inlet distortion

    NASA Astrophysics Data System (ADS)

    Walter, S. F.

    The purpose of this research is to examine methods by which quantification of inlet flow distortion may be improved upon. Specifically, this research investigates how data interpolation effects results, optimizing sampling locations of the flow, and determining the sensitivity related to how many sample locations there are. The main parameters that are indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on the total pressure distortion, which describes the amount of non-uniformity that exists in the flow as it enters the engine. All engines must tolerate some level of distortion, however too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data is discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern. There is no guidance on how to best manipulate discrete points into a continuous pattern. Using one experimental, 320 probe data set and one, 320 point computational data set with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40 grid point subsets and interpolating to 320 grid points. Each of the two, 40 point sets were interpolated to 320 grid points using four different interpolation methods in an attempt to establish the best method for interpolating small sets of data into an accurate, continuous contour map. Interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space. 
Spline interpolation methods should be used as they result in the most accurate, precise, and visually correct predictions when compared results achieved from the full data sets. Researchers were interested if fewer than the recommended 40 probes could be used - especially when placed in areas of high interest - but still obtain equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and experimental results of an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes. A probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations could get the results for parameters of interest within less than 10% of the exact solution for almost all cases. For the two dimensional inlet, the results were not as clear. 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40. The number of points falling within a 1% tolerance band of the exact solution were counted as good points. The results were normalized for each data set and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work. 
Alternatively, the results can be used directly, by comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes: there is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
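    The 40-to-320-point reconstruction described above can be sketched with standard tools. Below is a minimal Python illustration using scipy's scattered-data cubic (spline-type) interpolation on a synthetic engine-face pressure field; the probe layout, the analytic field, and all numbers are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic total-pressure field on the engine face (unit disc), a smooth
# stand-in for the 320-point reference data set described above.
def pressure(x, y):
    return 1.0 - 0.1 * (x**2 + y**2) + 0.05 * x * y

# 40 sampled "probe" locations (hypothetical layout, not a rake standard).
theta = rng.uniform(0.0, 2.0 * np.pi, 40)
r = np.sqrt(rng.uniform(0.05, 1.0, 40))
probes = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
values = pressure(probes[:, 0], probes[:, 1])

# Dense evaluation grid, kept inside the convex hull of the probes.
gx, gy = np.meshgrid(np.linspace(-0.6, 0.6, 18), np.linspace(-0.6, 0.6, 18))
interp = griddata(probes, values, (gx, gy), method='cubic')

# Compare the reconstructed contour map against the exact field.
err = np.nanmax(np.abs(interp - pressure(gx, gy)))
print(err)
```

With a smooth field and 40 well-spread probes, the spline-type reconstruction recovers the contour map closely, which is the qualitative behavior the study reports for spline methods.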

  4. An adaptive interpolation scheme for molecular potential energy surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
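    A minimal sketch of the core ingredients: polyharmonic-spline interpolation plus one greedy refinement pass driven by an error estimate, using scipy's RBFInterpolator (the thin-plate spline is the 2D polyharmonic kernel). The model function, node counts, and refinement rule are illustrative assumptions, not the paper's algorithm, which additionally uses a partition of unity and a local error estimate.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Model 2D "potential energy surface" (a stand-in, not the paper's test function).
def pes(x):
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

# Coarse node set interpolated with a polyharmonic (thin-plate) spline.
nodes = rng.uniform(-2.0, 2.0, (60, 2))
rbf = RBFInterpolator(nodes, pes(nodes), kernel='thin_plate_spline')

# One crude refinement pass: add nodes where the error estimate is largest.
cand = rng.uniform(-2.0, 2.0, (200, 2))
err = np.abs(rbf(cand) - pes(cand))
nodes2 = np.vstack([nodes, cand[np.argsort(err)[-20:]]])
rbf2 = RBFInterpolator(nodes2, pes(nodes2), kernel='thin_plate_spline')

# Compare both interpolants on an independent test set.
test = rng.uniform(-2.0, 2.0, (500, 2))
e1 = np.max(np.abs(rbf(test) - pes(test)))
e2 = np.max(np.abs(rbf2(test) - pes(test)))
print(e1, e2)
```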

  5. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object

    PubMed Central

    A. Smith, Nicholas; A. Folland, Nicholas; Martinez, Diana M.; Trainor, Laurel J.

    2017-01-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air, and the auditory system must determine which parts of the complex waveform belong to which sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869

  6. A possible generalization of the harmonic oscillator potential

    NASA Technical Reports Server (NTRS)

    Levai, Geza

    1995-01-01

    A four-parameter potential is analyzed, which contains the three-dimensional harmonic oscillator as a special case. This potential is exactly solvable and retains several characteristics of the harmonic oscillator, and also of the Coulomb problem. The possibility of similar generalizations of other potentials is also pointed out.

  7. Integration of Heterogenous Digital Surface Models

    NASA Astrophysics Data System (ADS)

    Boesch, R.; Ginzler, C.

    2011-08-01

    The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high-resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching of ADS80 aerial infrared images with 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets were acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition for the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination, and shadow effects. Nevertheless, many classification and feature-extraction applications requiring high-resolution data depend on the local accuracy of the surface model used, so precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes such as high slope, excessive shift, or low correlation. For the generation of the LiDAR-DSM, only first- and last-pulse data were available.
Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis, that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution is sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using FOM-map and local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
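    The triangle-area quality measure can be sketched in a few lines. The implementation described above uses CGAL in C++, so the following scipy-based Python version is only an illustrative stand-in, with invented point counts and tile size.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)

# Simulated LiDAR returns in one 10 m x 10 m tile
# (~0.5 pts/m^2 gives roughly 50 points per 100 m^2 area).
pts = rng.uniform(0.0, 10.0, (50, 2))

tri = Delaunay(pts)
# Triangle areas via the cross-product formula; each area measures the
# local spatial distribution of raw points.
a, b, c = (pts[tri.simplices[:, i]] for i in range(3))
areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                     - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))

# Large maximum area or high area variance flags sparse or uneven coverage,
# the condition under which the tile would be marked NODATA.
print(areas.sum(), areas.max(), areas.std())
```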

  8. Comprehensive trends assessment of nitrogen sources and loads to estuaries of the coterminous United States

    EPA Science Inventory

    Sources of nitrogen and phosphorus to estuaries and estuarine watersheds of the coterminous United States have been compiled from a variety of publicly available data sources (1985 – 2015). Atmospheric loading was obtained from two sources. Modelled and interpolated meas...

  9. Low pass filter for plasma discharge

    DOEpatents

    Miller, Paul A.

    1994-01-01

    An isolator is disposed between a plasma reactor and its electrical energy source in order to isolate the reactor from the electrical energy source. The isolator operates as a filter to attenuate the transmission of harmonics of a fundamental frequency of the electrical energy source generated by the reactor from interacting with the energy source. By preventing harmonic interaction with the energy source, plasma conditions can be readily reproduced independent of the electrical characteristics of the electrical energy source and/or its associated coupling network.

  10. Generation of propagating spin waves from regions of increased dynamic demagnetising field near magnetic antidots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, C. S., E-mail: csd203@exeter.ac.uk; Kruglyak, V. V.; Sadovnikov, A. V.

    We have used Brillouin Light Scattering and micromagnetic simulations to demonstrate a point-like source of spin waves created by the inherently nonuniform internal magnetic field in the vicinity of an isolated antidot formed in a continuous film of yttrium-iron-garnet. The field nonuniformity ensures that only well-defined regions near the antidot respond in resonance to a continuous excitation of the entire sample with a harmonic microwave field. The resonantly excited parts of the sample then served as reconfigurable sources of spin waves propagating (across the considered sample) in the form of caustic beams. Our findings are relevant to the further development of magnonic circuits, in which point-like spin wave stimuli could be required, and as a building block for interpretation of spin wave behavior in magnonic crystals formed by antidot arrays.

  11. Nonlinearly driven harmonics of Alfvén modes

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.

    2014-01-01

    In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.

  12. Generation of five phase-locked harmonics in the continuous wave regime and its potential application to arbitrary optical waveform synthesis

    NASA Astrophysics Data System (ADS)

    Suhaimi, N. Sheeda; Ohae, C.; Gavara, T.; Nakagawa, K.; Hong, F.-L.; Katsuragawa, M.

    2017-08-01

    We have successfully generated a new broadband coherent light source in the continuous-wave (CW) regime, an ensemble of multi-harmonic radiations (2403, 1201, 801, 600, and 480 nm), by implementing a frequency-dividing technology. The system is uniquely designed so that all the harmonics are generated and propagate coaxially, which gives the advantage of robustly maintaining the phase coherence among the harmonics. The highlight is its huge potential for arbitrary optical waveform synthesis in the CW regime, which has not yet been performed owing to the limitations of existing light sources.

  13. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  14. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  15. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
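    The envelope-interpolation step at the heart of bi-dimensional empirical mode decomposition can be illustrated with scipy's multiquadric radial basis interpolator. The scattered extrema and the underlying surface below are synthetic stand-ins, not radiographic data, and the shape parameter epsilon is an arbitrary choice.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Scattered samples standing in for detected local extrema of an image.
xy = rng.uniform(0.0, 1.0, (80, 2))
z = np.sin(4.0 * np.pi * xy[:, 0]) * np.sin(4.0 * np.pi * xy[:, 1])

# Multiquadric surface interpolation: one envelope-fitting step of BEMD.
env = RBFInterpolator(xy, z, kernel='multiquadric', epsilon=3.0)

# Evaluate the fitted envelope surface on a regular grid.
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 20), np.linspace(0.1, 0.9, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = env(grid)
print(surface.shape)
```

In BEMD proper, upper and lower envelopes fitted this way to the maxima and minima are averaged and subtracted iteratively to sift out each intrinsic mode function.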

  16. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, face expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expression is captured by the device of Kinect camera .The method of AAM algorithm based on statistical information is employed to detect and track faces. The 2D regression algorithm is applied to align the feature points. Among them, facial feature points are detected automatically and 3D cartoon model feature points are signed artificially. The aligned feature points are mapped by keyframe techniques. In order to improve the animation effect, Non-feature points are interpolated based on empirical models. Under the constraint of Bézier curves we finish the mapping and interpolation. Thus the feature points on the cartoon face model can be driven if the facial expression varies. In this way the purpose of cartoon face expression simulation in real-time is came ture. The experiment result shows that the method proposed in this text can accurately simulate the facial expression. Finally, our method is compared with the previous method. Actual data prove that the implementation efficiency is greatly improved by our method.

  17. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
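    The core of an uncertainty-weighted combination is inverse-variance weighting, which also yields the total-uncertainty by-product mentioned above. The sketch below uses made-up numbers for one grid point and is not the full ShakeMap algorithm, which additionally handles spatial correlation and intensity/ground-motion conversion.

```python
import numpy as np

# Hypothetical estimates of peak ground acceleration (g) at one grid point:
# a nearby station observation, an intensity-converted value, and a value
# from a prediction equation, each with its own standard deviation
# (all numbers illustrative only).
estimates = np.array([0.21, 0.18, 0.25])
sigmas = np.array([0.02, 0.06, 0.10])

# Uncertainty-weighted combination: weight = 1 / variance.
w = 1.0 / sigmas**2
combined = np.sum(w * estimates) / np.sum(w)

# By-product: the total uncertainty of the combined estimate.
combined_sigma = np.sqrt(1.0 / np.sum(w))
print(round(combined, 4), round(combined_sigma, 4))
```

The combined value is pulled toward the best-constrained datum, and the combined uncertainty is always smaller than the smallest input uncertainty.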

  18. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  19. Biological Effects of Laser Radiation. Volume IV. Optical Second Harmonic Generation in Biological Tissues.

    DTIC Science & Technology

    1978-10-17

    characteristics for optical second-harmonic generation. The collagen component of connective tissue may be the principal site for the observed harmonic...Generation in Tissue; Second Harmonic Generation in Collagen; Glutathione; Mechanisms; Conversion Efficiency; Significance of...sclera, and skin on 694 nm Q-switched ruby laser irradiation. A possible source of this second-harmonic generation was tissue collagen; because of

  20. A Modified Kriging Method to Interpolate the Soil Moisture Measured by Wireless Sensor Network with the Aid of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.

    2015-12-01

    In recent years, wireless sensor networks (WSN) have emerged as a way to collect Earth observation data at relatively low cost and light labor load, but their observations are still point data. To learn the spatial distribution of a land-surface parameter, interpolating the point data is necessary. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km × 5 km study area located in the middle reaches of the Heihe River, western China. As SM is related to many factors such as topography, soil type, and vegetation, even the WSN observation grid is not dense enough to reflect the SM distribution pattern. Our idea is to revise the traditional Kriging algorithm, introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. Thus, the new Extended-Kriging algorithm operates on the combined spatial and spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from an SM map, derived from the airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the variation of land SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy. Although a reasonable accuracy can be achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in WSN measurements may also need to be controlled in our further work.
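    A crude way to see how spectral variables can enter an interpolation is to extend the distance metric with a scaled spectral term. The sketch below does this for inverse-distance weighting rather than Kriging (much simpler than the Extended-Kriging of the abstract, with no variance function); all coordinates, the VI scaling factor alpha, and the synthetic SM values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# WSN soil-moisture measurements: (x, y) in metres, plus a normalized
# vegetation-index (VI) value at each node (all values illustrative).
xy = rng.uniform(0.0, 5000.0, (30, 2))
vi = rng.uniform(0.2, 0.8, 30)
sm = 0.10 + 0.25 * vi + rng.normal(0.0, 0.01, 30)  # SM loosely tied to VI

def idw_combined(q_xy, q_vi, alpha=2000.0, power=2.0):
    """IDW in the combined spatial-spectral space: the spectral distance is
    scaled by alpha (metres per unit VI) before being merged with the
    Euclidean spatial distance. alpha is a tuning assumption."""
    d_sp = np.linalg.norm(xy - q_xy, axis=1)
    d_vi = alpha * np.abs(vi - q_vi)
    d = np.sqrt(d_sp**2 + d_vi**2) + 1e-9
    w = 1.0 / d**power
    return np.sum(w * sm) / np.sum(w)

# Estimate SM at an unsampled location whose VI is known from imagery.
est = idw_combined(np.array([2500.0, 2500.0]), 0.6)
print(est)
```

Because the weights favor nodes that are close both spatially and spectrally, the estimate reflects the imagery-derived pattern, which is the intuition behind the Extended-Kriging approach.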

  1. High frequency sound propagation in a network of interconnecting streets

    NASA Astrophysics Data System (ADS)

    Hewett, D. P.

    2012-12-01

    We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.
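    The claimed exponential decay in the number of junctions follows directly from multiplying per-street and per-junction loss factors along a pathway. The toy bookkeeping below uses invented coefficients, not values from the model, and ignores the integral over launch angle.

```python
# Toy energy bookkeeping for one pathway through the street network, in the
# spirit of the model above (all coefficients illustrative, not the paper's).
absorption_per_street = 0.85   # fraction of power surviving one street (walls)
junction_split = 0.25          # fraction continuing into the chosen branch

def path_power(n_junctions, p0=1.0):
    """Power remaining after traversing n junctions along one pathway."""
    return p0 * (absorption_per_street * junction_split) ** n_junctions

# Exponential decay in the number of junctions from the source:
powers = [path_power(n) for n in range(5)]
print(powers)
```

Each additional junction multiplies the power by the same constant factor, which is exactly an exponential decay in junction count; the axial directions escape this because power there does not branch at every junction.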

  2. Interpreting angular momentum transfer between electromagnetic multipoles using vector spherical harmonics.

    PubMed

    Grinter, Roger; Jones, Garth A

    2018-02-01

    The transfer of angular momentum between a quadrupole emitter and a dipole acceptor is investigated theoretically. Vector spherical harmonics are used to describe the angular part of the field of the mediating photon. Analytical results are presented for predicting angular momentum transfer between the emitter and absorber within a quantum electrodynamical framework. We interpret the allowability of such a process, which appears to violate conservation of angular momentum, in terms of the breakdown of the isotropy of space at the point of photon absorption (detection). That is, collapse of the wavefunction results in loss of all angular momentum information. This is consistent with Noether's Theorem and demystifies some common misconceptions about the nature of the photon. The results have implications for interpreting the detection of photons from multipole sources and offers insight into limits on information that can be extracted from quantum measurements in photonic systems.

  3. Heat transfer in a one-dimensional harmonic crystal in a viscous environment subjected to an external heat supply

    NASA Astrophysics Data System (ADS)

    Gavrilov, S. N.; Krivtsov, A. M.; Tsvetkov, D. V.

    2018-05-01

    We consider unsteady heat transfer in a one-dimensional harmonic crystal surrounded by a viscous environment and subjected to an external heat supply. The basic equations for the crystal particles are stated in the form of a system of stochastic differential equations. We perform a continualization procedure and derive an infinite set of linear partial differential equations for covariance variables. An exact analytic solution describing unsteady ballistic heat transfer in the crystal is obtained. It is shown that the stationary spatial profile of the kinetic temperature caused by a point source of heat supply of constant intensity is described by the Macdonald function of zero order. A comparison with the results obtained in the framework of the classical heat equation is presented. We expect that the results obtained in the paper can be verified by experiments with laser excitation of low-dimensional nanostructures.
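    The stationary profile named in the abstract, the Macdonald function of zero order, is available as scipy.special.k0. The sketch below simply evaluates it; the decay length ell is an illustrative assumption rather than a quantity derived from the crystal parameters.

```python
import numpy as np
from scipy.special import k0

# Stationary kinetic-temperature profile around a point heat source:
# T(x) proportional to K0(|x| / ell), with ell a decay length (assumed here)
# set by the dissipation in the viscous environment.
ell = 1.0
x = np.linspace(0.1, 5.0, 50)
profile = k0(np.abs(x) / ell)

# K0 diverges logarithmically at the source and decays like e^{-x}/sqrt(x)
# far from it, so the profile is steep near the source and flat far away.
print(profile[0], profile[-1])
```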

  4. High order harmonic generation in rare gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budil, Kimberly Susan

    1994-05-01

    The process of high order harmonic generation in atomic gases has shown great promise as a method of generating extremely short wavelength radiation, extending far into the extreme ultraviolet (XUV). The process is conceptually simple. A very intense laser pulse (I ~ 10^13-10^14 W/cm^2) is focused into a dense (~10^17 particles/cm^3) atomic medium, causing the atoms to become polarized. These atomic dipoles are then coherently driven by the laser field and begin to radiate at odd harmonics of the laser field. This dissertation is a study of both the physical mechanism of harmonic generation as well as its development as a source of coherent XUV radiation. Recently, a semiclassical theory has been proposed which provides a simple, intuitive description of harmonic generation. In this picture the process is treated in two steps. The atom ionizes via tunneling, after which its classical motion in the laser field is studied. Electron trajectories which return to the vicinity of the nucleus may recombine and emit a harmonic photon, while those which do not return will ionize. An experiment was performed to test the validity of this model wherein the trajectory of the electron as it orbits the nucleus or ion core is perturbed by driving the process with elliptically, rather than linearly, polarized laser radiation. The semiclassical theory predicts a rapid turn-off of harmonic production as the ellipticity of the driving field is increased. This decrease in harmonic production is observed experimentally, and a simple quantum mechanical theory is used to model the data. The second major focus of this work was the development of the harmonic "source". A series of experiments was performed examining the spatial profiles of the harmonics. The quality of the spatial profile is crucial if the harmonics are to be used as the source for experiments, particularly if they must be refocused.
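    The semiclassical picture also yields the well-known cutoff law E_max = I_p + 3.17 U_p for the highest harmonic photon energy. The short calculation below applies it using the standard expression for the ponderomotive energy; the neon example numbers are illustrative, not drawn from this dissertation.

```python
# Semiclassical ("three-step") cutoff estimate: E_max = I_p + 3.17 U_p,
# with the ponderomotive energy U_p [eV] ~ 9.33e-14 * I [W/cm^2] * (lambda [um])^2.
def cutoff_energy_ev(intensity_w_cm2, wavelength_um, ip_ev):
    up = 9.33e-14 * intensity_w_cm2 * wavelength_um**2
    return ip_ev + 3.17 * up

# Example: neon (I_p ~ 21.6 eV) driven at 1e14 W/cm^2 with an 800 nm field.
e_max = cutoff_energy_ev(1e14, 0.8, 21.6)
photon_ev = 1.55  # 800 nm photon energy in eV
print(e_max, e_max / photon_ev)  # cutoff energy and rough harmonic order
```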

  5. High-order harmonic generation from a two-dimensional band structure

    NASA Astrophysics Data System (ADS)

    Jin, Jian-Zhao; Xiao, Xiang-Ru; Liang, Hao; Wang, Mu-Xue; Chen, Si-Ge; Gong, Qihuang; Peng, Liang-You

    2018-04-01

    In the past few years, harmonic generation in solids has attracted tremendous attention. Recently, some experiments of two-dimensional (2D) monolayer or few-layer materials have been carried out. These studies demonstrated that harmonic generation in the 2D case shows a strong dependence on the laser's orientation and ellipticity, which calls for a quantitative theoretical interpretation. In this work, we carry out a systematic study on the harmonic generation from a 2D band structure based on a numerical solution to the time-dependent Schrödinger equation. By comparing with the 1D case, we find that the generation dynamics can have a significant difference due to the existence of many crossing points in the 2D band structure. In particular, the higher conduction bands can be excited step by step via these crossing points and the total contribution of the harmonic is given by the mixing of transitions between different clusters of conduction bands to the valence band. We also present the orientation dependence of the harmonic yield on the laser polarization direction.

  6. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  7. An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Nelson, Karl E.

    Here, a one-dimensional monotone interpolation method, based on interface reconstruction with partial volumes in the slope space utilizing the Hermite cubic spline, is proposed. The new method is only quartic; however, it is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in the slope space. An extension of this method to two dimensions is also discussed.
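    For comparison, the standard monotone interpolant available in scipy is PCHIP, a C1 cubic method rather than the C2 quartic scheme of this record. The sketch below shows the no-overshoot behavior both approaches aim for, on data chosen to make an ordinary cubic spline oscillate.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a sharp knee, the kind that makes ordinary
# cubic splines overshoot and oscillate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

p = PchipInterpolator(x, y)  # C1, monotonicity-preserving
xs = np.linspace(0.0, 4.0, 401)
ys = p(xs)

# No oscillation: the interpolant is nondecreasing everywhere.
print(bool(np.all(np.diff(ys) >= -1e-9)))
```

The quartic method of the record improves on this kind of interpolant by achieving C2 continuity while remaining unconditionally monotone.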

  8. Horn’s Curve Estimation Through Multi-Dimensional Interpolation

    DTIC Science & Technology

    2013-03-01

    complex nature of human behavior has not yet been broached. This is not to say analysts play favorites in reaching conclusions, only that varied...Chapter III, Section 3.7. For now, it is sufficient to say underdetermined data presents technical challenges and all such datasets will be excluded from...database lookup table and then use the method of linear interpolation to instantaneously estimate the unknown points on an as-needed basis ( say from a user

  9. An Unconditionally Monotone C² Quartic Spline Method with Nonoscillation Derivatives

    DOE PAGES

    Yao, Jin; Nelson, Karl E.

    2018-01-24

    Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in the slope space utilizing the Hermite cubic spline. The new method is only quartic, yet it is C² and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in the slope space. An extension of this method to two dimensions is also discussed.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.

    Quantum and anharmonic effects are investigated in (H₂)₂–Li⁺–benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H₂ molecule to the H₂–Li⁺–benzene complex increases the ZPE of the system by 5.6 kJ mol⁻¹ to 17.6 kJ mol⁻¹. This ZPE is 42% of the total electronic binding energy of (H₂)₂–Li⁺–benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H₂ to H₂–Li⁺–benzene is 7.7 kJ mol⁻¹, compared to 12.4 kJ mol⁻¹ for the first H₂ molecule. Anharmonicity is found to be even more important when a second (and subsequent) H₂ molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H₂ molecules are found at larger distance from the Li⁺ ion and are more confined in the θ coordinate than in H₂–Li⁺–benzene. They also show that both H₂ molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H₂ molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H₂)₂–Li⁺–benzene PESs are developed. These use a modified Shepard interpolation for the Li⁺–benzene and H₂–Li⁺–benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H₂–H₂ interaction. Because of the neglect of three-body H₂, H₂, Li⁺ terms, both fragment PESs lead to overbinding of the second H₂ molecule by 1.5 kJ mol⁻¹. Probability density histograms, however, indicate that the wavefunctions for the two H₂ molecules are effectively identical on the "full" and fragment PESs. This suggests that the 1.5 kJ mol⁻¹ error is systematic over the regions of configuration space explored by our simulations. Notwithstanding this, modified Shepard interpolation of the weak H₂–H₂ interaction is problematic and we obtain more accurate results, at considerably lower computational cost, using a cubic spline interpolation. Indeed, the ZPE of the fragment-with-spline PES is identical, within error, to the ZPE of the full PES. This fragmentation scheme therefore provides an accurate and inexpensive method to study higher hydrogen loading in this and similar systems.
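For readers unfamiliar with Shepard-type schemes: the modified Shepard interpolation used for the PES above augments each data point with a local Taylor expansion, but its backbone is plain inverse-distance weighting. The minimal sketch below shows only that weighting step, in one dimension with invented data:

```python
import numpy as np

def shepard(x_data, f_data, x, p=2):
    """Plain Shepard (inverse-distance-weighted) interpolation at point x."""
    d = np.abs(x_data - x)
    if np.any(d == 0):                   # query exactly on a data point
        return f_data[np.argmin(d)]
    w = 1.0 / d**p                       # closer points dominate
    return np.sum(w * f_data) / np.sum(w)

x_data = np.array([0.0, 1.0, 2.0])
f_data = np.array([1.0, 3.0, 2.0])
print(shepard(x_data, f_data, 1.0))      # 3.0: the interpolant honors the data
print(shepard(x_data, f_data, 0.5))      # a distance-weighted blend of neighbors
```

The modified variant replaces each `f_data[i]` by a low-order Taylor polynomial about the i-th data geometry, which is what makes it accurate enough for PES construction.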

  11. Modelling the Velocity Field in a Regular Grid in the Area of Poland on the Basis of the Velocities of European Permanent Stations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Kłos, Anna; Grzempowski, Piotr; Kontny, Bernard

    2014-06-01

    The paper presents the results of testing various methods of interpolating permanent stations' velocity residua in a regular grid, which constitutes a continuous model of the velocity field in the territory of Poland. Three software packages were used in the research from the point of view of interpolation: GMT (The Generic Mapping Tools), Surfer and ArcGIS. The following methods were tested in these packages: Nearest Neighbor, Triangulation (TIN), Spline Interpolation, Surface, Inverse Distance to a Power, Minimum Curvature and Kriging. The presented research used the absolute velocity values expressed in the ITRF2005 reference frame and the intraplate velocities related to the NUVEL model for over 300 permanent reference stations of the EPN and ASG-EUPOS networks covering the area of Europe. Interpolation for the area of Poland was done using data from the whole of Europe to make the results at the borders of the interpolation area reliable. As a result of this research, an optimum method for interpolating such data was developed. All the mentioned methods were tested for being local or global, for the possibility of computing errors of the interpolated values, for explicitness and fidelity of the interpolation functions, and for the smoothing mode. In the authors' opinion, the best data interpolation method is Kriging with the linear semivariogram model run in the Surfer programme, because it allows the computation of errors in the interpolated values and it is a global method (it distorts the results the least). Alternatively, it is acceptable to use the Minimum Curvature method. Empirical analysis of the interpolation results obtained by means of the two methods showed that the results are identical. The tests were conducted using the intraplate velocities of the European sites.
Statistics in the form of computing the minimum, maximum and mean values of the interpolated North and East components of the velocity residuum were prepared for all the tested methods, and each of the resulting continuous velocity fields was visualized by means of the GMT programme. The interpolated components of the velocities and their residua are presented in the form of tables and bar diagrams.
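The method the authors favor, ordinary kriging with a linear semivariogram γ(h) = b·h, can be sketched compactly: solve the kriging system (semivariogram matrix plus an unbiasedness constraint) for the weights, then form the estimate and its error variance. The station coordinates and residual values below are invented for illustration; real use would take the EPN/ASG-EUPOS velocity residua.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, b=1.0):
    """Ordinary kriging with linear semivariogram gamma(h) = b*h.
    Returns the estimate at xy0 and its kriging (error) variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = b * d                  # semivariogram between stations
    A[n, :n] = A[:n, n] = 1.0          # weights must sum to 1 (unbiasedness)
    A[n, n] = 0.0
    g0 = np.append(b * np.linalg.norm(xy - xy0, axis=1), 1.0)
    sol = np.linalg.solve(A, g0)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ g0[:n] + mu      # estimate, error variance

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.2, 0.4, 0.1, 0.5])     # e.g. one velocity-residual component, mm/yr
est, var = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
print(round(est, 3), var > 0)          # 0.3 True (symmetric layout -> equal weights)
```

The availability of the error variance alongside the estimate is precisely the property the authors cite in preferring kriging over the other tested methods.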

  12. Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results

    NASA Astrophysics Data System (ADS)

    Novick, Jaison Allen

    We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure, or fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls using an ultrasound transducer as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrödinger equation for escaping waves. 
    We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is Fourier transformed using a Fast Fourier Transform algorithm, resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that under suitable conditions both Maxwell's equations and the Schrödinger equation can be reduced to the Helmholtz equation. Therefore, our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.

  13. Spectrally resolved single-shot wavefront sensing of broadband high-harmonic sources

    NASA Astrophysics Data System (ADS)

    Freisem, L.; Jansen, G. S. M.; Rudolf, D.; Eikema, K. S. E.; Witte, S.

    2018-03-01

    Wavefront sensors are an important tool to characterize coherent beams of extreme ultraviolet radiation. However, conventional Hartmann-type sensors do not allow for independent wavefront characterization of different spectral components that may be present in a beam, which limits their applicability for intrinsically broadband high-harmonic generation (HHG) sources. Here we introduce a wavefront sensor that measures the wavefronts of all the harmonics in a HHG beam in a single camera exposure. By replacing the mask apertures with transmission gratings at different orientations, we simultaneously detect harmonic wavefronts and spectra, and obtain sensitivity to spatiotemporal structure such as pulse front tilt as well. We demonstrate the capabilities of the sensor through a parallel measurement of the wavefronts of 9 harmonics in a wavelength range between 25 and 49 nm, with up to λ/32 precision.

  14. 2.32 THz quantum cascade laser frequency-locked to the harmonic of a microwave synthesizer source.

    PubMed

    Danylov, Andriy A; Light, Alexander R; Waldman, Jerry; Erickson, Neal R; Qian, Xifeng; Goodhue, William D

    2012-12-03

    Frequency stabilization of a THz quantum cascade laser (QCL) to the harmonic of a microwave source has been accomplished using a Schottky diode waveguide mixer designed for harmonic mixing. The 2.32 THz, 1.0 milliwatt CW QCL is coupled into the signal port of the mixer and a 110 GHz signal, derived from a harmonic of a microwave synthesizer, is coupled into the IF port. The difference frequency between the 21st harmonic of 110 GHz and the QCL is used in a discriminator to adjust the QCL bias current to stabilize the frequency. The short-term frequency jitter is reduced from 550 kHz to 4.5 kHz (FWHM) and the long-term frequency drift is eliminated. This performance is compared to that of several other THz QCL frequency stabilization techniques.
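A quick consistency check of the locking scheme described above: the 21st harmonic of the 110 GHz reference lies near the 2.32 THz QCL line, leaving a microwave-range difference frequency for the discriminator. (The specific IF value is implied by the abstract's numbers, not stated in it.)

```python
f_ref = 110e9      # synthesizer-derived signal at the mixer, Hz
f_qcl = 2.32e12    # QCL frequency, Hz
n = 21             # harmonic order used for mixing

f_if = abs(f_qcl - n * f_ref)
print(f_if / 1e9)  # difference frequency in GHz (~10 GHz)
```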

  15. Harmonization of initial estimates of shale gas life cycle greenhouse gas emissions for electric power generation.

    PubMed

    Heath, Garvin A; O'Donoughue, Patrick; Arent, Douglas J; Bazilian, Morgan

    2014-08-05

    Recent technological advances in the recovery of unconventional natural gas, particularly shale gas, have served to dramatically increase domestic production and reserve estimates for the United States and internationally. This trend has led to lowered prices and increased scrutiny on production practices. Questions have been raised as to how greenhouse gas (GHG) emissions from the life cycle of shale gas production and use compare with those of conventionally produced natural gas or other fuel sources such as coal. Recent literature has come to different conclusions on this point, largely due to differing assumptions, comparison baselines, and system boundaries. Through a meta-analytical procedure we call harmonization, we develop robust, analytically consistent, and updated comparisons of estimates of life cycle GHG emissions for electricity produced from shale gas, conventionally produced natural gas, and coal. On a per-unit electrical output basis, harmonization reveals that median estimates of GHG emissions from shale gas-generated electricity are similar to those for conventional natural gas, with both approximately half that of the central tendency of coal. Sensitivity analysis on the harmonized estimates indicates that assumptions regarding liquids unloading and estimated ultimate recovery (EUR) of wells have the greatest influence on life cycle GHG emissions, whereby shale gas life cycle GHG emissions could approach the range of best-performing coal-fired generation under certain scenarios. Despite clarification of published estimates through harmonization, these initial assessments should be confirmed through methane emissions measurements at components and in the atmosphere and through better characterization of EUR and practices.

  16. Harmonization of initial estimates of shale gas life cycle greenhouse gas emissions for electric power generation

    PubMed Central

    Heath, Garvin A.; O’Donoughue, Patrick; Arent, Douglas J.; Bazilian, Morgan

    2014-01-01

    Recent technological advances in the recovery of unconventional natural gas, particularly shale gas, have served to dramatically increase domestic production and reserve estimates for the United States and internationally. This trend has led to lowered prices and increased scrutiny on production practices. Questions have been raised as to how greenhouse gas (GHG) emissions from the life cycle of shale gas production and use compare with those of conventionally produced natural gas or other fuel sources such as coal. Recent literature has come to different conclusions on this point, largely due to differing assumptions, comparison baselines, and system boundaries. Through a meta-analytical procedure we call harmonization, we develop robust, analytically consistent, and updated comparisons of estimates of life cycle GHG emissions for electricity produced from shale gas, conventionally produced natural gas, and coal. On a per-unit electrical output basis, harmonization reveals that median estimates of GHG emissions from shale gas-generated electricity are similar to those for conventional natural gas, with both approximately half that of the central tendency of coal. Sensitivity analysis on the harmonized estimates indicates that assumptions regarding liquids unloading and estimated ultimate recovery (EUR) of wells have the greatest influence on life cycle GHG emissions, whereby shale gas life cycle GHG emissions could approach the range of best-performing coal-fired generation under certain scenarios. Despite clarification of published estimates through harmonization, these initial assessments should be confirmed through methane emissions measurements at components and in the atmosphere and through better characterization of EUR and practices. PMID:25049378

  17. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to meet three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous techniques, this method obtains an incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. This allows us to demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build the correspondence between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method works when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasts, video conferencing, etc.

  18. A projection method for coupling two-phase VOF and fluid structure interaction simulations

    NASA Astrophysics Data System (ADS)

    Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro

    2018-02-01

    The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.

  19. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    PubMed

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
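The traditional two-point semilogarithmic interpolation that the study uses as a baseline is easy to sketch: given transmissions at two thicknesses bracketing 50%, assume local exponential attenuation T = exp(-μx) and solve for the thickness giving half transmission. (The Lambert W model itself is not reproduced here; the attenuation coefficient below is a made-up monoenergetic example.)

```python
import math

def hvl_semilog(x1, t1, x2, t2):
    """Two-point semilogarithmic (exponential) HVL estimate.
    x1 < x2 are attenuator thicknesses with transmissions t1 > 0.5 > t2."""
    mu = (math.log(t1) - math.log(t2)) / (x2 - x1)  # local attenuation coefficient
    return x1 + (math.log(t1) - math.log(0.5)) / mu

# Synthetic monoenergetic beam with mu = 0.2 mm^-1, so the true HVL is ln(2)/0.2.
x1, x2 = 3.0, 4.0
t1, t2 = math.exp(-0.2 * x1), math.exp(-0.2 * x2)
print(hvl_semilog(x1, t1, x2, t2))  # ~3.466 mm = ln(2)/0.2
```

For a truly monoenergetic beam this interpolation is exact, which is why the study's comparison matters mainly for polyenergetic (beam-hardening) spectra, where the exponential assumption breaks down.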

  20. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    PubMed Central

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626

  1. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen

    2011-08-15

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  2. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturini, M.

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  3. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    DOE PAGES

    Venturini, M.

    2018-02-01

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  4. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    NASA Astrophysics Data System (ADS)

    Venturini, M.

    2018-02-01

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  5. Control of Disturbing Loads in Residential and Commercial Buildings via Geometric Algebra

    PubMed Central

    2013-01-01

    Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept is of particular relevance to harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. In this way, we propose a multivectorial relative quality index δ̃ associated with the power multivector. It can be assumed as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among different relative quality index multivectors, which may be measured at the different loads on the same metering point. The comparison can give information with magnitude, direction, and sense about the presence of disturbing loads. A numerical example is used to illustrate the capabilities of the suggested approach. PMID:24260017

  6. Control of disturbing loads in residential and commercial buildings via geometric algebra.

    PubMed

    Castilla, Manuel-V

    2013-01-01

    Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept is of particular relevance to harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. In this way, we propose a multivectorial relative quality index δ̃ associated with the power multivector. It can be assumed as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among different relative quality index multivectors, which may be measured at the different loads on the same metering point. The comparison can give information with magnitude, direction, and sense about the presence of disturbing loads. A numerical example is used to illustrate the capabilities of the suggested approach.

  7. Quasi-supercontinuum source in the extreme ultraviolet using multiple frequency combs from high-harmonic generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wünsche, Martin; Fuchs, Silvio; Aull, Stefan

    A quasi-supercontinuum source in the extreme ultraviolet (XUV) is demonstrated using a table-top femtosecond laser and a tunable optical parametric amplifier (OPA) as a driver for high-harmonic generation (HHG). The harmonic radiation, which is usually a comb of odd multiples of the fundamental frequency, is generated by near-infrared (NIR) laser pulses from the OPA. A quasi-continuous XUV spectrum in the range of 30 to 100 eV is realized by averaging over multiple harmonic comb spectra with slightly different fundamental frequencies and thus different spectral spacing between the individual harmonics. The driving laser wavelength is swept automatically during an averaging time period. With a total photon flux of 4×10⁹ photons/s in the range of 30 eV to 100 eV and 1×10⁷ photons/s in the range of 100 eV to 200 eV, the resulting quasi-supercontinuum XUV source is suited for applications such as XUV coherence tomography (XCT) or near-edge absorption fine structure spectroscopy (NEXAFS).

  8. Quasi-supercontinuum source in the extreme ultraviolet using multiple frequency combs from high-harmonic generation

    DOE PAGES

    Wünsche, Martin; Fuchs, Silvio; Aull, Stefan; ...

    2017-03-16

    A quasi-supercontinuum source in the extreme ultraviolet (XUV) is demonstrated using a table-top femtosecond laser and a tunable optical parametric amplifier (OPA) as a driver for high-harmonic generation (HHG). The harmonic radiation, which is usually a comb of odd multiples of the fundamental frequency, is generated by near-infrared (NIR) laser pulses from the OPA. A quasi-continuous XUV spectrum in the range of 30 to 100 eV is realized by averaging over multiple harmonic comb spectra with slightly different fundamental frequencies and thus different spectral spacing between the individual harmonics. The driving laser wavelength is swept automatically during an averaging time period. With a total photon flux of 4×10⁹ photons/s in the range of 30 eV to 100 eV and 1×10⁷ photons/s in the range of 100 eV to 200 eV, the resulting quasi-supercontinuum XUV source is suited for applications such as XUV coherence tomography (XCT) or near-edge absorption fine structure spectroscopy (NEXAFS).

  9. Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis

    PubMed Central

    Hagen, Nils T.

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
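
    The harmonic counting rule described above is simple enough to state in a few lines. The sketch below (with an illustrative function name, not code from the paper) allocates one publication credit across a byline in proportion to 1/rank:

```python
def harmonic_credit(rank, n_authors):
    """Harmonic share of one publication credit for the author at 1-based
    position `rank` in a byline of `n_authors` coauthors: credit is
    proportional to 1/rank, normalized so the shares sum to 1."""
    normalization = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / rank) / normalization

# Example: a three-author paper. Raw weights are 1 : 1/2 : 1/3, normalized.
shares = [harmonic_credit(r, 3) for r in (1, 2, 3)]
```

    For a three-author paper the shares come out 6/11, 3/11 and 2/11, so the first author receives more than half of the credit while the shares still sum to exactly one publication, avoiding both the inflationary and the equalizing bias.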

  10. Assessment and Optimization of the Accuracy of an Aircraft-Based Technique Used to Quantify Greenhouse Gas Emission Rates from Point Sources

    NASA Astrophysics Data System (ADS)

    Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.

    2016-12-01

    Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we have reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the reported hourly emission measurements made by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before CEMS data were released online to ensure unbiased analysis. Additionally, we assess the uncertainties introduced into the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.

  11. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
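
    As a rough illustration of the table-based compensation idea (the sample positions, error values and fit degree below are made up; the paper builds its models from measured stage errors), a polynomial fit through sampled error data predicts the correction to subtract at any commanded position:

```python
import numpy as np

# Hypothetical calibration data: geometric error (um) of the stage measured
# at a few commanded positions (mm); real values would come from metrology.
positions = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
errors_um = np.array([0.0, 1.2, 0.8, -0.5, 0.3])

# A degree-4 Chebyshev fit through 5 samples interpolates them exactly;
# Chebyshev polynomial interpolation is one of the variants compared above.
error_model = np.polynomial.Chebyshev.fit(positions, errors_um, deg=4)

# Compensation: subtract the predicted error from the commanded position.
commanded = 75.0
predicted_error_um = float(error_model(commanded))
```

    The same pattern applies to the other compared methods (three-error curve, plain polynomial, B-spline); only the interpolant changes.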

  12. Speed harmonization

    DOT National Transportation Integrated Search

    2015-01-01

    Speed harmonization is a method to reduce congestion and improve traffic performance. This method is applied at points where lanes merge and form bottlenecks, the greatest cause of congestion nationwide. The strategy involves gradually lowering speed...

  13. Attenuation of harmonic noise in vibroseis data using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Sharma, S. P.; Tildy, Peter; Iranpour, Kambiz; Scholtz, Peter

    2009-04-01

    Processing of high-productivity vibroseis seismic data (such as slip-sweep acquisition records) suffers from the well-known disadvantage of harmonic distortion. Harmonic distortions are observed after cross-correlation of the recorded seismic signal with the pilot sweep and affect the signals in negative time (before the actual strong reflection event). Weak reflection events of the earlier sweeps falling in the negative time window of the cross-correlation sequence are masked by harmonic distortions. Although the amplitude of the harmonic distortion is small (up to 10-20%) compared with the fundamental amplitude of the reflection events, it is significant enough to mask weak reflected signals. Elimination of harmonic noise due to source signal distortion from the cross-correlated seismic trace has been a challenging task since vibratory sources came into use, and it still needs improvement. An approach has been worked out that minimizes the level of harmonic distortion by designing a signal similar to the harmonic distortion. An arbitrary-length filter is optimized using the Simulated Annealing global optimization approach to design a harmonic signal. The approach deals with the convolution of a ratio trace (the ratio of the harmonics with respect to the fundamental sweep) with the correlated "positive-time" recorded signal and an arbitrary filter. A synthetic data study has revealed that this procedure of designing a signal similar to the desired harmonics, by convolving a suitable filter with the theoretical ratio of harmonics to the fundamental sweep, helps in reducing the problem of harmonic distortion. Once a similar signal is generated for a vibroseis source using an optimized filter, this filter can be used to generate harmonics, which can then be subtracted from the main cross-correlated trace to obtain a better, undistorted image of the subsurface.
    Designing the predicted harmonics to reduce the energy in the trace by considering weak reflections and observed harmonics together yields the desired result (resolution of weak reflected signals from the harmonic distortion). As the optimization proceeds, difference plots of desired and predicted harmonics show how weak reflections gradually emerge from the harmonic distortion during later iterations of the global optimization. The procedure is applied to resolve weak reflections from a number of traces considered together. A more precise design of the harmonics requires longer computation time for the SA procedure, which is impractical for voluminous seismic data. However, the objective of resolving the weak reflection signal within the strong harmonic noise can be achieved with fast computation by using a faster cooling schedule and fewer iterations and moves in the simulated annealing procedure. This process can help reduce the harmonic distortion and recover the lost weak reflection events in the cross-correlated seismic traces. Acknowledgements: The research was supported under the European Marie Curie Host Fellowships for Transfer of Knowledge (TOK) Development Host Scheme (contract no. MTKD-CT-2006-042537).
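
    The filter-design step can be sketched with a generic simulated annealing loop. Everything below is synthetic (a random stand-in "ratio trace", a known 3-tap answer, an ad hoc geometric cooling schedule) and only illustrates the optimization idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a "ratio trace" and a target harmonic-like signal
# produced by a known 3-tap filter, so the answer is recoverable.
ratio = rng.standard_normal(256)
true_taps = np.array([0.5, -0.3, 0.1])
target = np.convolve(ratio, true_taps, mode="same")

def energy(taps):
    """Residual energy between predicted and observed harmonic signal."""
    return float(np.sum((np.convolve(ratio, taps, mode="same") - target) ** 2))

# Simulated annealing with a fast (geometric) cooling schedule.
taps = np.zeros(3)
e = energy(taps)
temperature = 1.0
for step in range(4000):
    candidate = taps + rng.normal(scale=0.05, size=3)
    e_new = energy(candidate)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if e_new < e or rng.random() < np.exp((e - e_new) / temperature):
        taps, e = candidate, e_new
    temperature *= 0.999  # faster cooling trades accuracy for speed

predicted_harmonics = np.convolve(ratio, taps, mode="same")
residual = target - predicted_harmonics  # weak reflections would emerge here
```

    In the paper's setting the subtraction is applied to the cross-correlated trace, so the residual is where masked weak reflections reappear.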

  14. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit multithreading or an increased spacing of control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
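
    The simplification at the heart of the method is the closed-form convolution of Gaussians: convolving two zero-mean Gaussians yields a Gaussian whose variance is the sum of the two variances. A quick numerical check of that identity (in 1-D, not from the paper):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    """Unit-area zero-mean Gaussian."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

sigma_img, sigma_psf = 1.0, 0.8

# Discrete approximation of the continuous convolution (the dx factor
# turns the discrete sum into an integral estimate).
blurred = np.convolve(gaussian(x, sigma_img), gaussian(x, sigma_psf),
                      mode="same") * dx

# Closed form: a Gaussian with sigma = sqrt(sigma_img^2 + sigma_psf^2).
expected = gaussian(x, np.hypot(sigma_img, sigma_psf))
max_error = float(np.max(np.abs(blurred - expected)))
```

    Because both factors live in the same Gaussian family, deconvolution reduces to solving for control-point weights rather than dividing spectra, which is what gives the method its robustness over the Wiener filter.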

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Austin, Anthony P.; Trefethen, Lloyd N.

    The trigonometric interpolants to a periodic function f in equispaced points converge if f is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if f is continuous. What if the points are perturbed? With equispaced grid spacing h, let each point be perturbed by an arbitrary amount <= alpha*h, where alpha, an element of [0, 1/2), is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for alpha >= 1/4. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all alpha < 1/2 if f is twice continuously differentiable, with the convergence rate depending on the smoothness of f. More precisely, it is enough for f to have 4*alpha derivatives in a certain sense, and we conjecture that 2*alpha derivatives are enough. Connections with the Fejer-Kalmar theorem are discussed.
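
    A small numerical experiment (not from the paper) illustrates the quadrature side of this result: the trapezoidal rule on a perturbed equispaced grid, with each weight taken as half the distance between a node's two neighbours, still converges for a smooth periodic integrand even with alpha close to 1/4:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
h = 2 * np.pi / n
alpha = 0.25  # each node moved by at most alpha*h

# Perturbed equispaced grid on [0, 2*pi); alpha < 1/2 keeps nodes ordered.
x = h * np.arange(n) + rng.uniform(-alpha * h, alpha * h, size=n)

# Periodic trapezoidal weights: half the gap to each neighbour, wrapping
# around the circle.
gaps = np.diff(np.concatenate([x, [x[0] + 2 * np.pi]]))
weights = 0.5 * (gaps + np.roll(gaps, 1))

f = np.exp(np.sin(x))                # smooth periodic test integrand
estimate = float(np.sum(weights * f))
exact = 7.954926521012846            # 2*pi*I_0(1), the exact integral
```

    With n = 200 nodes the estimate agrees with the exact value to roughly trapezoidal (second-order) accuracy despite the perturbation.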

  16. A look-up-table digital predistortion technique for high-voltage power amplifiers in ultrasonic applications.

    PubMed

    Gao, Zheng; Gui, Ping

    2012-07-01

    In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
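
    The per-cycle LUT update loop can be sketched as follows. The tanh "power amplifier", table size and step size are stand-ins chosen for illustration; the paper's system runs the same idea on FPGA hardware against a real class-AB PA:

```python
import numpy as np

def pa(v):
    """Stand-in nonlinear PA: gain compression modeled with tanh."""
    return np.tanh(1.5 * v)

n = 256
t = np.arange(n)
ideal = 0.8 * np.sin(2 * np.pi * t / n)  # one cycle of the desired output

lut = np.zeros(n)   # per-sample correction table, one entry per phase index
mu = 0.4            # LMS step size

# Each "signal cycle": predistort with the current table, measure the
# (attenuated) output, and update the table entries with the LMS error.
for cycle in range(200):
    predistorted = ideal + lut
    measured = pa(predistorted)
    error = ideal - measured
    lut += mu * error

final_error = float(np.max(np.abs(pa(ideal + lut) - ideal)))
```

    After the table converges, the cascade of LUT and PA reproduces the ideal sinusoid, which is what suppresses the harmonic distortion at the output.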

  17. Raster Vs. Point Cloud LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are one of the fast and accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have been used for land cover classification applications. Range and intensity (the strength of the backscattered signals measured by the LiDAR system) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated to generate LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach has been followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil. The results show that an improvement of around 10% in the classification results can be achieved by using the proposed approach.

  18. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.

  19. Quantum dynamics of a plane pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leibscher, Monika; Schmidt, Burkhard

    A semianalytical approach to the quantum dynamics of a plane pendulum is developed, based on Mathieu functions which appear as stationary wave functions. The time-dependent Schroedinger equation is solved for pendular analogs of coherent and squeezed states of a harmonic oscillator, induced by instantaneous changes of the periodic potential energy function. Coherent pendular states are discussed between the harmonic limit for small displacements and the inverted pendulum limit, while squeezed pendular states are shown to interpolate between vibrational and free rotational motion. In the latter case, full and fractional revivals as well as spatiotemporal structures in the time evolution of the probability densities (quantum carpets) are quantitatively analyzed. Corresponding expressions for the mean orientation are derived in terms of Mathieu functions in time. For periodic double well potentials, different revival schemes, and different quantum carpets are found for the even and odd initial states forming the ground tunneling doublet. Time evolution of the mean alignment allows the separation of states with different parity. Implications for external (rotational) and internal (torsional) motion of molecules induced by intense laser fields are discussed.

  20. Normal modes of the shallow water system on the cubed sphere

    NASA Astrophysics Data System (ADS)

    Kang, H. G.; Cheong, H. B.; Lee, C. H.

    2017-12-01

    Spherical harmonics, expressed as the Rossby-Haurwitz waves, are the normal modes of the non-divergent barotropic model. Among the normal modes in a numerical model, the most unstable mode will contaminate the numerical results, and therefore the investigation of the normal modes for a given grid system and discretization method is important. The cubed-sphere grid, which consists of six identical faces, has been widely adopted in many atmospheric models. This grid system is non-orthogonal, so calculation of the normal modes is quite a challenging problem. In the present study, the normal modes of the shallow water system on the cubed sphere, discretized by the spectral element method employing the Gauss-Lobatto Lagrange interpolating polynomials as orthogonal basis functions, are investigated. The algebraic equations for the shallow water equations on the cubed sphere are derived, and the huge global matrix is constructed. The linear system representing the eigenvalue-eigenvector relations is solved by numerical libraries. The normal modes calculated for several horizontal resolutions and Lamb parameters will be discussed and compared to the normal modes from the spherical harmonics spectral method.

  1. Statistical comparison of various interpolation algorithms for reconstructing regional grid ionospheric maps over China

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao

    2018-07-01

    This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering the days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than low latitudes. For the China region, the performance of OrK and UnK is relatively better than the planar fit and IDW model for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
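
    Of the four algorithms compared, IDW is the simplest to state. A minimal sketch (the station coordinates and TEC values below are made up for illustration):

```python
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0):
    """Inverse Distance Weighting: each known sample contributes with
    weight 1/d^power; an exact hit returns that sample's value directly."""
    d = np.linalg.norm(known_xy - query_xy, axis=1)
    if np.any(d == 0.0):
        return float(known_vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * known_vals) / np.sum(w))

# Illustrative (made-up) station VTEC values on a lon/lat grid fragment.
stations = np.array([[110.0, 30.0], [115.0, 30.0],
                     [110.0, 35.0], [115.0, 35.0]])
vtec = np.array([12.0, 14.0, 11.0, 13.0])     # TECU
estimate = idw(stations, vtec, np.array([112.5, 32.5]))
```

    At a query point equidistant from all four stations the weights are equal, so the estimate is simply the arithmetic mean (12.5 TECU here); nearer stations dominate as the query point moves toward them, which is also why IDW is cheap relative to kriging: no covariance model has to be fitted or inverted.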

  2. Comparison of Fine Structures of Electron Cyclotron Harmonic Emissions in Aurora

    NASA Astrophysics Data System (ADS)

    Labelle, J. W.; Dundek, M.

    2015-12-01

    Recent discoveries of emissions at four and five times the electron cyclotron frequency in aurora occurring under daylit conditions motivated the modification of radio receivers at South Pole Station, Antarctica, to measure the fine structure of such emissions during two consecutive austral summers, 2013-14 and 2014-15. The experiment recorded 347 emission events over 376 days of observation. The seasonal distribution of these events revealed that successively higher harmonics require higher solar zenith angles for occurrence, as expected if they are generated at locations where the upper hybrid frequency matches the cyclotron harmonic, since higher harmonics require higher electron densities, which are associated with higher solar zenith angles. Detailed examination of 21 cases in which two harmonics occur simultaneously showed that only rarely, about ten percent of the time, are the frequencies of the fine structures of the emissions in exact integer ratio (e.g., 3:2, 4:3, or 5:4, depending on which combination of harmonics is observed). In the remaining approximately ninety percent of the cases, the higher harmonic occurred at a lower ratio than the appropriate integer ratio, as expected if the harmonics are generated independently at their separate matching conditions in the bottomside ionosphere, where the upper hybrid frequency increases with altitude while the gyroharmonics decrease with altitude. (The bottomside is the most likely source of the emissions, since from there the mode-converted Z-modes have access to ground level.) Taken together, these results suggest that the dominant mechanism for the higher harmonics is independent generation at locations where the upper hybrid frequency matches each harmonic, i.e., at a separate source altitude for each harmonic. Generation of higher harmonics through coalescence of lower harmonic waves explains at most a small minority of events.

  3. A Global Interpolation Function (GIF) boundary element code for viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, D. R.; Lafe, O.; Cheng, A. H-D.

    1995-01-01

    Using global interpolation functions (GIF's), boundary element solutions are obtained for two- and three-dimensional viscous flows. The solution is obtained in the form of a boundary integral plus a series of global basis functions. The unknown coefficients of the GIF's are determined to ensure the satisfaction of the governing equations at selected collocation points. The values of the coefficients involved in the boundary integral equations are determined by enforcing the boundary conditions. Both primitive variable and vorticity-velocity formulations are examined.

  4. Second-harmonic generation in shear wave beams with different polarizations

    NASA Astrophysics Data System (ADS)

    Spratt, Kyle S.; Ilinskii, Yurii A.; Zabolotskaya, Evgenia A.; Hamilton, Mark F.

    2015-10-01

    A coupled pair of nonlinear parabolic equations was derived by Zabolotskaya [1] that model the transverse components of the particle motion in a collimated shear wave beam propagating in an isotropic elastic solid. Like the KZK equation, the parabolic equation for shear wave beams accounts consistently for the leading order effects of diffraction, viscosity and nonlinearity. The nonlinearity includes a cubic nonlinear term that is equivalent to that present in plane shear waves, as well as a quadratic nonlinear term that is unique to diffracting beams. The work by Wochner et al. [2] considered shear wave beams with translational polarizations (linear, circular and elliptical), wherein second-order nonlinear effects vanish and the leading order nonlinear effect is third-harmonic generation by the cubic nonlinearity. The purpose of the current work is to investigate the quadratic nonlinear term present in the parabolic equation for shear wave beams by considering second-harmonic generation in Gaussian beams as a second-order nonlinear effect using standard perturbation theory. In order for second-order nonlinear effects to be present, a broader class of source polarizations must be considered that includes not only the familiar translational polarizations, but also polarizations accounting for stretching, shearing and rotation of the source plane. It is found that the polarization of the second harmonic generated by the quadratic nonlinearity is not necessarily the same as the polarization of the source-frequency beam, and we are able to derive a general analytic solution for second-harmonic generation from a Gaussian source condition that gives explicitly the relationship between the polarization of the source-frequency beam and the polarization of the second harmonic.

  5. Body vibrational spectra of metal flute models

    NASA Astrophysics Data System (ADS)

    Hurtgen, Clare M.; Lawson, Dewey T.

    2002-11-01

    For years, flutists have argued over the tonal advantages of using different precious metals for their instruments. Occasionally, scientists have entered the fray and attempted to offer an objective point of view based on experimental measurements. However, their research often involved actual instruments and performers, ignoring variations in wall thickness, craftsmanship, and human consistency. These experiments have been conducted using a variety of methods; all have concluded that wall material has no effect on tone. This paper approaches the question using simple tubular models, excited by a wind source through a fipple mouthpiece. The amplitude and phase of the harmonic components of the body vibrational signal were measured with a stereo cartridge. Results demonstrated the existence of complex patterns of wall vibrations in the vicinity of a tone hole lattice, at frequencies that match significant harmonics of the air column. Additionally, the tube wall was found to expand in a nonuniform or "elliptical" manner due to the asymmetry of the tone holes. While this method is somewhat removed from direct musical applications, it can provide an objective, quantitative basis for assessing the source of differences among flutes. [Work financed by two Undergraduate Research Support grants from Duke University.]

  6. Elastic Network Model of a Nuclear Transport Complex

    NASA Astrophysics Data System (ADS)

    Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.

    2010-05-01

    The structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org) and analyzed. RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced. In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as prevents any new cargo from attaching to the Kap95p:RanGTP complex.
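
    The NMA step of an elastic network model reduces to an eigendecomposition of the spring-network stiffness matrix. The toy 1-D chain below (not the Kap95p:RanGTP network, which couples 3-D coordinates within a cutoff radius) shows the essential computation:

```python
import numpy as np

# Minimal elastic-network sketch: point masses on a line connected to
# their nearest neighbours by unit springs; normal modes are eigenvectors
# of the stiffness (Hessian) matrix.
n = 8
K = np.zeros((n, n))
for i in range(n - 1):          # spring between beads i and i+1
    K[i, i] += 1.0
    K[i + 1, i + 1] += 1.0
    K[i, i + 1] -= 1.0
    K[i + 1, i] -= 1.0

eigenvalues, modes = np.linalg.eigh(K)   # ascending eigenvalues

# The zero eigenvalue is the rigid-body translation; the next-lowest
# eigenvalues are the soft collective motions NMA is after.
softest_internal_mode = modes[:, 1]
```

    In a real ENM the same eigendecomposition is applied to a 3N x 3N Hessian built from all residue pairs within a cutoff, and the low-frequency modes approximate functional conformational changes.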

  7. Nanoscale imaging with table-top coherent extreme ultraviolet source based on high harmonic generation

    NASA Astrophysics Data System (ADS)

    Ba Dinh, Khuong; Le, Hoang Vu; Hannaford, Peter; Van Dao, Lap

    2017-08-01

    A table-top coherent diffractive imaging experiment on a sample with biological-like characteristics, using a focused narrow-bandwidth high harmonic source around 30 nm, is performed. An approach involving a beam stop and a new reconstruction algorithm to enhance the quality of the reconstructed image is described.

  8. Harmonic Optimization in Voltage Source Inverter for PV Application using Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kandil, Shaimaa A.; Ali, A. A.; El Samahy, Adel; Wasfi, Sherif M.; Malik, O. P.

    2016-12-01

    Selective Harmonic Elimination (SHE) is a fundamental-switching-frequency scheme used to eliminate specific-order harmonics. Its application to minimize low-order harmonics in a three-level inverter is proposed in this paper. The modulation strategy used here is SHEPWM, and the nonlinear equations that characterize the low-order harmonics are solved using the Harmony Search Algorithm (HSA) to obtain the optimal switching angles that minimize the required harmonics and maintain the fundamental at the desired value. The Total Harmonic Distortion (THD) of the output voltage is minimized while selected harmonics are kept within allowable limits. A comparison is drawn between HSA, a Genetic Algorithm (GA) and the Newton-Raphson (NR) technique using MATLAB software to determine their effectiveness in obtaining optimized switching angles.
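
    For intuition, the NR branch of the comparison can be sketched on a two-angle pattern. The waveform family below, in which harmonic n is proportional to cos(n·θ1) + cos(n·θ2), is a common staircase form used purely for illustration; the paper's three-level SHEPWM equations and angle count differ:

```python
import numpy as np

# Two-angle SHE sketch: set the fundamental to a target modulation depth M
# and eliminate the 5th harmonic, with 0 < t1 < t2 < pi/2.
M = 0.8   # desired normalized fundamental: cos(t1) + cos(t2) = 2*M

def residual(t):
    t1, t2 = t
    return np.array([np.cos(t1) + np.cos(t2) - 2 * M,
                     np.cos(5 * t1) + np.cos(5 * t2)])

def jacobian(t):
    t1, t2 = t
    return np.array([[-np.sin(t1), -np.sin(t2)],
                     [-5 * np.sin(5 * t1), -5 * np.sin(5 * t2)]])

t = np.array([0.3, 1.0])          # initial guess (radians)
for _ in range(50):               # Newton-Raphson iterations
    t = t - np.linalg.solve(jacobian(t), residual(t))

t1, t2 = float(t[0]), float(t[1])
```

    From this starting guess the iteration settles near θ1 ≈ 0.257 and θ2 ≈ 0.886 rad, which sets the fundamental to the target while driving the 5th-harmonic term to zero; HSA and GA attack the same equations without needing the Jacobian or a good initial guess.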

  9. Improved heating efficiency with High-Intensity Focused Ultrasound using a new ultrasound source excitation.

    PubMed

    Bigelow, Timothy A

    2009-01-01

    High-Intensity Focused Ultrasound (HIFU) is quickly becoming one of the best methods to thermally ablate tissue noninvasively. Unlike RF or laser ablation, the tissue can be destroyed without inserting any probes into the body, minimizing the risk of secondary complications such as infections. In this study, the heating efficiency of HIFU sources is improved by altering the excitation of the ultrasound source to take advantage of nonlinear propagation. For ultrasound, the phase velocity of the wave depends on its amplitude, resulting in the generation of higher harmonics. These higher harmonics are more efficiently converted into heat in the body due to the frequency dependence of ultrasound absorption in tissue. In our study, the generation of higher harmonics by nonlinear propagation is enhanced by transmitting an ultrasound wave that includes both the fundamental and a higher harmonic component. Computer simulations demonstrated up to a 300% larger temperature rise compared to transmitting only the fundamental at the same acoustic power transmitted by the source.

  10. Dynamic Shape Reconstruction of Three-Dimensional Frame Structures Using the Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander

    2011-01-01

    A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing , this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM; where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.

  11. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, Leon H.; Hance, Richard D.; Kristalinski, Alexandr L.; Visser, Age T.

    1996-01-01

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer.

  12. Method and apparatus for reducing the harmonic currents in alternating-current distribution networks

    DOEpatents

    Beverly, L.H.; Hance, R.D.; Kristalinski, A.L.; Visser, A.T.

    1996-11-19

    An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employ a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer. 23 figs.

  13. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  14. Structural-Thermal-Optical Program (STOP)

    NASA Technical Reports Server (NTRS)

    Lee, H. P.

    1972-01-01

    A structural-thermal-optical computer program is developed that uses a finite element approach and applies the Ritz method to solve heat transfer problems. Temperatures are represented at the vertices of each element, and the displacements, which yield deformations at any point of the heated surface, are interpolated through grid points.

  15. Power transformations improve interpolation of grids for molecular mechanics interaction energies.

    PubMed

    Minh, David D L

    2018-02-18

    A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and an inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retention of the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
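The smoothing idea can be sketched as follows (the exponent and the handling of the transform here are assumptions for illustration, not the paper's exact procedure): transform positive grid energies by E**(1/p) before trilinear interpolation, then raise the interpolated value back to the p-th power, which damps the steep repulsive part of the potential.

```python
def trilinear(corner, fx, fy, fz):
    """Trilinear interpolation on a unit cell; corner[i][j][k] holds the
    8 corner values, (fx, fy, fz) are fractional coordinates in [0, 1]."""
    c00 = corner[0][0][0] * (1 - fx) + corner[1][0][0] * fx
    c01 = corner[0][0][1] * (1 - fx) + corner[1][0][1] * fx
    c10 = corner[0][1][0] * (1 - fx) + corner[1][1][0] * fx
    c11 = corner[0][1][1] * (1 - fx) + corner[1][1][1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def smoothed_interp(corner, fx, fy, fz, p=4.0):
    """Power-transform smoothing: interpolate E**(1/p) at the corners,
    then invert the transform on the interpolated value."""
    t = [[[v ** (1.0 / p) for v in row] for row in plane] for plane in corner]
    return trilinear(t, fx, fy, fz) ** p

# Steeply varying positive (repulsive) energies at the cell corners, illustrative
cell = [[[1.0, 2.0], [4.0, 8.0]], [[16.0, 32.0], [64.0, 256.0]]]
print(smoothed_interp(cell, 0.5, 0.5, 0.5), trilinear(cell, 0.5, 0.5, 0.5))
```

The smoothed estimate at the cell center is substantially lower than the plain trilinear value, which is the intended effect near steep repulsive walls.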

  16. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) are examined for their effectiveness as gain matrices, using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
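The distinction between the two gain choices can be shown in a minimal scalar sketch (all numbers illustrative): both schemes share the analysis step x_a = x_f + K * (y - x_f), but the KB filter derives the optimal gain K = P / (P + R) from the evolved forecast error variance P, whereas OI uses a prescribed, generally suboptimal, gain.

```python
# Scalar analysis step shared by the KB filter and OI; illustrative numbers.
def kalman_gain(P, R):
    """Optimal gain from forecast error variance P and observation error R."""
    return P / (P + R)

def analysis(x_f, y, K):
    """Update the forecast x_f toward the observation y with gain K."""
    return x_f + K * (y - x_f)

P, R = 4.0, 1.0        # assumed forecast / observation error variances
x_f, y = 10.0, 12.0    # forecast and observation
x_kb = analysis(x_f, y, kalman_gain(P, R))  # optimal gain K = 0.8
x_oi = analysis(x_f, y, 0.5)                # OI-style fixed gain
print(x_kb, x_oi)
```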

  17. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M is comparable to that of smoothing TPS. At the same simulation accuracy, the computational advantage of TPS-M grows with the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK), and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
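For context, the simplest of the classical baselines mentioned above, inverse distance weighting, fits in a few lines. This is a generic 2-D IDW sketch with an assumed power parameter p = 2, not the exact configuration used in the paper's comparison.

```python
def idw(x, y, samples, p=2.0):
    """Inverse distance weighting: weighted average of sample values with
    weights d**(-p); returns the sample value exactly at a sample point."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi            # exact interpolation at the data points
        w = d2 ** (-p / 2.0)
        num += w * zi
        den += w
    return num / den

pts = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0), (1, 1, 40.0)]
print(idw(0.5, 0.5, pts))  # equidistant from all samples -> plain mean 25.0
```

Unlike TPS, IDW produces flat spots at the data points and cannot extrapolate trends, which is one reason spline- and kriging-based methods tend to win in the DEM comparisons reported here.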

  18. On the Quality of Velocity Interpolation Schemes for Marker-In-Cell Methods on 3-D Staggered Grids

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Pusok, A. E.; Popov, A.

    2015-12-01

    The marker-in-cell method is generally considered a flexible and robust method for modeling the advection of heterogeneous, non-diffusive properties (i.e., rock type or composition) in geodynamic or incompressible Stokes problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an immobile, Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without preserving the zero divergence of the velocity field at the interpolated locations (i.e., non-conservatively). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Jenny et al., 2001), and this may eventually result in empty grid cells, a serious numerical violation of the marker-in-cell method. Solutions to this problem include using larger mesh resolutions and/or marker densities, or repeatedly controlling the marker distribution (i.e., injection/deletion), but the latter has no established physical basis. To remedy this at low computational cost, Jenny et al. (2001) and Meyer and Jenny (2004) proposed a simple, conservative velocity interpolation (CVI) scheme for 2-D staggered grids, while Wang et al. (2015) extended the formulation to 3-D finite element methods. Here, we follow up on these studies and report on the quality of velocity interpolation methods for 2-D and 3-D staggered grids. We adapt the formulations of both Jenny et al. (2001) and Wang et al. (2015) for use on 3-D staggered grids, where the velocity components have different node locations, in contrast to finite elements, where they share the same node location. We test the different interpolation schemes (CVI and non-CVI) in combination with different advection schemes (Euler, RK2, and RK4), with and without marker control, on Stokes problems with strong velocity gradients, which are discretized using a finite difference method.
    We show that a conservative formulation reduces the dispersion and clustering of markers and that the density of markers remains steady over time without the need for additional marker control. References: Jenny et al. (2001), J. Comp. Phys., 166, 218-252; Meyer and Jenny (2004), Proc. Appl. Math. Mech., 4, 466-467; Wang et al. (2015), G3, Vol. 16. Funding was provided by the ERC Starting Grant #258830.
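The conservation property at stake can be illustrated in a single staggered cell (a sketch of the underlying idea, not the full Jenny et al. CVI scheme): if vx is interpolated linearly in x only between its two face values, and vy linearly in y only, the analytical divergence of the interpolant is constant across the cell and equal to the discrete cell divergence, so a discretely divergence-free cell drives no spurious marker clustering.

```python
def cell_velocity(fx, fy, vx_left, vx_right, vy_bottom, vy_top):
    """Velocity inside one staggered cell from its face-normal components;
    (fx, fy) are fractional coordinates in [0, 1]."""
    vx = (1 - fx) * vx_left + fx * vx_right
    vy = (1 - fy) * vy_bottom + fy * vy_top
    return vx, vy

def cell_divergence(vx_left, vx_right, vy_bottom, vy_top, dx=1.0, dy=1.0):
    """d(vx)/dx + d(vy)/dy of the linear interpolant: constant in the cell
    and identical to the finite-difference divergence."""
    return (vx_right - vx_left) / dx + (vy_top - vy_bottom) / dy

# A discretely divergence-free cell: x-inflow balanced by y-outflow
vxl, vxr, vyb, vyt = 1.0, 2.0, 0.5, -0.5
print(cell_divergence(vxl, vxr, vyb, vyt))  # 0.0
```

Higher-order or bilinear interpolation of both components, by contrast, generally breaks this property away from the faces, which is the clustering mechanism the abstract describes.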

  19. An integral conservative gridding algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
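The role of a single user parameter controlling overshoot can be sketched with a cubic Hermite interpolant whose tangents are scaled by a tension factor (this is a generic illustration of the idea; the paper's exact parametrization and its integral-conserving construction differ).

```python
def hermite(p0, p1, m0, m1, s):
    """Cubic Hermite basis on s in [0, 1] with endpoint values and tangents."""
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def tangents(xs, ys, tension):
    """Central-difference (Catmull-Rom style) tangents scaled by tension."""
    n = len(xs)
    return [tension * (ys[min(i + 1, n - 1)] - ys[max(i - 1, 0)])
            / (xs[min(i + 1, n - 1)] - xs[max(i - 1, 0)]) for i in range(n)]

def interp(xs, ys, x, tension=1.0):
    """Piecewise Hermite; tension=0 degenerates to overshoot-free linear."""
    m = tangents(xs, ys, tension)
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            h = xs[i + 1] - xs[i]
            s = (x - xs[i]) / h
            return hermite(ys[i], ys[i + 1], m[i] * h, m[i + 1] * h, s)
    raise ValueError("x outside data range")

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]                # step-like data prone to undershoot
print(interp(xs, ys, 0.5, tension=1.0))  # undershoots below the data: -0.0625
print(interp(xs, ys, 0.5, tension=0.0))  # linear, stays within the data: 0.0
```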

  20. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    PubMed

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.

  1. Adaptive Selective Harmonic Minimization Based on ANNs for Cascade Multilevel Inverters With Varying DC Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filho, Faete; Maia, Helder Z; Mateus, Tiago Henrique D

    2013-01-01

    A new approach for modulation of an 11-level cascade multilevel inverter using selective harmonic elimination is presented in this paper. The dc sources feeding the multilevel inverter are considered to be varying in time, and the switching angles are adapted to the dc source variation. This method uses genetic algorithms to obtain switching angles offline for different dc source values. Then, artificial neural networks are used to determine the switching angles that correspond to the real-time values of the dc sources for each phase. This implies that each one of the dc sources of this topology can have different values at any time, but the output fundamental voltage will stay constant and the harmonic content will still meet the specifications. The modulating switching angles are updated at each cycle of the output fundamental voltage. This paper gives details on the method in addition to simulation and experimental results.
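The quantity the genetic algorithm and neural network manipulate is the Fourier content of the staircase output. For a cascaded H-bridge with quarter-wave symmetry, the amplitude of odd harmonic n is H_n = (4/(n*pi)) * sum_k Vdc_k * cos(n*theta_k). The sketch below evaluates this spectrum; the angles are illustrative placeholders, not the paper's GA solutions.

```python
import math

def harmonic(n, thetas, vdcs):
    """Amplitude of odd harmonic n of a quarter-wave-symmetric staircase
    waveform with switching angles thetas (rad) and dc levels vdcs."""
    return (4.0 / (n * math.pi)) * sum(
        v * math.cos(n * t) for t, v in zip(thetas, vdcs))

# Five switching angles for an 11-level inverter (illustrative values only)
thetas = [math.radians(a) for a in (6.57, 18.94, 27.18, 45.14, 62.24)]
vdcs = [1.0] * 5  # equal per-unit dc sources; the paper lets these vary
for n in (1, 5, 7, 11, 13):
    print(n, round(harmonic(n, thetas, vdcs), 4))
```

Selective harmonic elimination amounts to solving harmonic(n, ...) = 0 for the low-order odd n while holding harmonic(1, ...) at the commanded fundamental, which is the nonlinear system the offline GA tackles for each set of dc source values.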

  2. Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case

    NASA Astrophysics Data System (ADS)

    Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.

    2016-11-01

    This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components into harmonics of the Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time mean, the BPFs, and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, just as for a steady flow simulation. Notable benefits in computing costs and engineering time can therefore be obtained compared to the classical time-marching approach using sliding grid techniques. The method has been applied to three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed at all operating points for steady simulations and at the Best Efficiency Point (BEP) for NLH simulations. Then, NLH results for a selected grid size are compared for the three different operating points, reproducing the tendencies observed in the experiment.

  3. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that are not modeled with these data, especially faults that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults intersect. Increasing fault points in poorly sampled areas not only enables efficient construction of fault models, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  4. Sources of image degradation in fundamental and harmonic ultrasound imaging using nonlinear, full-wave simulations.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E; Dahl, Jeremy J

    2011-04-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSF) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is reverberation from near-field structures. Reverberation clutter in the harmonic PSF is 26 dB higher than in the fundamental PSF. An artificial medium with uniform velocity but unchanged impedance characteristics indicates that for the fundamental PSF, the primary source of degradation is phase aberration. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue, and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. These beamformed images are compared with images obtained from convolution of the PSF with a scatterer field to demonstrate that a very large portion of the PSF must be used to accurately represent the clutter observed in conventional imaging. © 2011 IEEE

  5. Traveling-Wave Tube Amplifier Second Harmonic as Millimeter-Wave Beacon Source for Atmospheric Propagation Studies

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Wintucky, Edwin G.

    2014-01-01

    This paper presents the design and test results of a CW millimeter-wave satellite beacon source based on the second harmonic from a traveling-wave tube amplifier and utilizing a novel waveguide multimode directional coupler. A potential application of the beacon source is investigating the atmospheric effects on Q-band (37 to 42 GHz) and V/W-band (71 to 76 GHz) satellite-to-ground signals.

  6. Traveling-Wave Tube Amplifier Second Harmonic as Millimeter-Wave Beacon Source for Atmospheric Propagation Studies

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Wintucky, Edwin G.

    2014-01-01

    This paper presents the design and test results of a CW millimeter-wave satellite beacon source based on the second harmonic from a traveling-wave tube amplifier and utilizing a novel waveguide multimode directional coupler. A potential application of the beacon source is investigating the atmospheric effects on Q-band (37 to 42 GHz) and V/W-band (71 to 76 GHz) satellite-to-ground signals.

  7. Terrain shape estimation from optical flow, using Kalman filtering

    NASA Astrophysics Data System (ADS)

    Hoff, William A.; Sklair, Cheryl W.

    1990-01-01

    As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration: the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points and to provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented and show that the resulting range accuracy is on the order of 1-2% of the range.

  8. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window area is calculated, with the regional signal intensity ratio of the source images serving as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module. The fusion quality is closely related to the threshold setting of the decision-making module. In contrast to the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, and the minimum of the quadratic interpolation function is computed. The best threshold is obtained by comparing the minima of the quadratic interpolation functions. A series of image quality evaluations shows that this method improves the fusion result; moreover, it is effective not only for individual images but also for large numbers of images.
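One step of the quadratic interpolation search described above, the vertex of the parabola through three nodes, can be sketched directly. The toy objective standing in for the fusion-quality-versus-threshold curve is an assumption for illustration.

```python
def parabola_min(x0, x1, x2, f):
    """Vertex of the parabola through (xi, f(xi)): one step of quadratic
    interpolation search for the minimizer."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

# Toy quality curve with its optimum threshold at t = 0.37 (illustrative)
f = lambda t: (t - 0.37) ** 2 + 1.0
# Endpoints and midpoint of the search interval as initial nodes, as in the paper
print(parabola_min(0.0, 0.5, 1.0, f))  # exact for a quadratic objective: 0.37
```

For non-quadratic objectives the step is iterated, replacing the worst node with the new estimate until the interval shrinks below a tolerance.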

  9. THE FIRST LASING OF 193 NM SASE, 4TH HARMONIC HGHG AND ESASE AT THE NSLS SDL.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANG, X.J.; SHEN Y.; WATANABE, T.

    2006-08-28

    The first lasing of three types of single-pass high-gain FELs, SASE at 193 nm, 4th harmonic HGHG at 199 nm and ESASE at the Source Development Lab (SDL) of Brookhaven National Laboratory (BNL) is reported. The saturation of 4th harmonic HGHG and ESASE FELs was observed. We also observed the spectral broadening and instability of the 4th harmonic HGHG.

  10. Laser waveform control of extreme ultraviolet high harmonics from solids.

    PubMed

    You, Yong Sing; Wu, Mengxi; Yin, Yanchun; Chew, Andrew; Ren, Xiaoming; Gholam-Mirzaei, Shima; Browne, Dana A; Chini, Michael; Chang, Zenghu; Schafer, Kenneth J; Gaarde, Mette B; Ghimire, Shambhu

    2017-05-01

    Solid-state high-harmonic sources offer the possibility of compact, high-repetition-rate attosecond light emitters. However, the time structure of high harmonics must be characterized at the sub-cycle level. We use strong two-cycle laser pulses to directly control the time-dependent nonlinear current in single-crystal MgO, leading to the generation of extreme ultraviolet harmonics. We find that harmonics are delayed with respect to each other, yielding an atto-chirp, the value of which depends on the laser field strength. Our results provide the foundation for attosecond pulse metrology based on solid-state harmonics and a new approach to studying sub-cycle dynamics in solids.

  11. Application of cokriging techniques for the estimation of hail size

    NASA Astrophysics Data System (ADS)

    Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier

    2018-01-01

    There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.

  12. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there are very few modeling efforts that derive spatiotemporal population models allowing accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and finally gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derives population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
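The disaggregation step can be sketched minimally: a source zone's census count is split across its target cells in proportion to an ancillary weight, preserving the zone total. The names and numbers below (tweet counts as weights, a four-cell tract) are illustrative, not the study's data.

```python
def disaggregate(source_total, cell_weights):
    """Density-weighted areal interpolation: split source_total across cells
    proportionally to cell_weights, preserving the total (pycnophylactic
    property). Falls back to a uniform split when no ancillary signal exists."""
    total_w = sum(cell_weights)
    if total_w == 0:
        return [source_total / len(cell_weights)] * len(cell_weights)
    return [source_total * w / total_w for w in cell_weights]

tract_population = 1200
tweets_per_cell = [5, 0, 15, 30]  # illustrative ancillary data for 4 grid cells
cells = disaggregate(tract_population, tweets_per_cell)
print(cells, sum(cells))  # [120.0, 0.0, 360.0, 720.0] 1200.0
```

Cells with no ancillary activity receive no population under this weighting, which is exactly how the ambient-population signal (tweets, school locations) reshapes the flat census surface.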

  13. Pitch perception prior to cortical maturation

    NASA Astrophysics Data System (ADS)

    Lau, Bonnie K.

    Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.

  14. Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Post, John Theodore

    1989-01-01

    When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered, harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses. 
Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two-pulse controller gives better performance than a single-pulse controller, but the effort of finding the optimal delay times for the additional controllers increases as the square of the number of control pulses.

  15. Precise and efficient evaluation of gravimetric quantities at arbitrarily scattered points in space

    NASA Astrophysics Data System (ADS)

    Ivanov, Kamen G.; Pavlis, Nikolaos K.; Petrushev, Pencho

    2017-12-01

    Gravimetric quantities are commonly represented in terms of high degree surface or solid spherical harmonics. After EGM2008, such expansions routinely extend to spherical harmonic degree 2190, which makes the computation of gravimetric quantities at a large number of arbitrarily scattered points in space using harmonic synthesis a very computationally demanding process. We present here the development of an algorithm and its associated software for the efficient and precise evaluation of gravimetric quantities, represented in high degree solid spherical harmonics, at arbitrarily scattered points in the space exterior to the surface of the Earth. The new algorithm is based on the representation of the quantities of interest in solid ellipsoidal harmonics and the application of tensor product trigonometric needlets. A FORTRAN implementation of this algorithm has been developed and extensively tested. The capabilities of the code are demonstrated using as examples the disturbing potential T, height anomaly ζ, gravity anomaly Δg, gravity disturbance δg, north-south deflection of the vertical ξ, east-west deflection of the vertical η, and the second radial derivative T_{rr} of the disturbing potential. After a pre-computational step that takes between 1 and 2 h per quantity, the current version of the software is capable of computing on a standard PC each of these quantities in the range from the surface of the Earth up to 544 km above that surface at speeds between 20,000 and 40,000 point evaluations per second, depending on the gravimetric quantity being evaluated, while the relative error does not exceed 10^{-6} and the memory (RAM) use is 9.3 GB.
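    The cost the abstract alludes to can be felt with a toy synthesis: a degree-2190 expansion has 2191² ≈ 4.8 million terms per evaluation point. The sketch below evaluates only a degree-2 sum at scattered points using SciPy's sph_harm; the coefficients and point set are invented and bear no relation to EGM2008 or the paper's needlet scheme.

```python
import numpy as np
from scipy.special import sph_harm

# Toy spherical-harmonic synthesis at scattered points (degree 2 only;
# coefficients are random stand-ins, nothing like a real gravity model).
rng = np.random.default_rng(2)
npts = 1000
theta = rng.uniform(0, 2 * np.pi, npts)   # longitude (azimuth)
phi = rng.uniform(0, np.pi, npts)         # colatitude

nmax = 2
coef = {(n, m): rng.normal()
        for n in range(nmax + 1) for m in range(-n, n + 1)}

# brute-force synthesis: (nmax + 1)^2 kernel evaluations per point,
# which is why degree-2190 synthesis at many points is so expensive
T = np.zeros(npts, dtype=complex)
for (n, m), c in coef.items():
    T += c * sph_harm(m, n, theta, phi)

print(T.shape)  # one synthesized value per scattered point
```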

  16. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing-resolution behavior of a given timing pick-off method regardless of changes in source location. Lastly, a performance comparison of several digital timing methods is also shown.
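    The role of the Nyquist rate in signal interpolation can be illustrated with a minimal sketch (ours, not the authors' code): a band-limited signal sampled above twice its highest frequency is reconstructed between samples by Whittaker-Shannon sinc interpolation; the frequencies below are hypothetical, not PET detector values.

```python
import numpy as np

# A signal band-limited to 5 MHz, sampled at 20 MS/s (above the Nyquist
# rate of 10 MS/s), can be evaluated between samples by sinc interpolation.
fs = 20e6                       # sampling rate, samples/s (assumed)
t_s = np.arange(-64, 64) / fs   # sample instants

def signal(t):
    # band-limited test signal: two tones, both below fs/2
    return np.sin(2 * np.pi * 2e6 * t) + 0.5 * np.sin(2 * np.pi * 5e6 * t)

x = signal(t_s)

def sinc_interp(t):
    # ideal reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
    return np.sum(x * np.sinc(fs * (t - t_s)))

# evaluate at an off-grid instant; truncating the sinc sum to a finite
# sample window leaves only a small residual error
t_q = 3.3 / fs
err = abs(sinc_interp(t_q) - signal(t_q))
```

Below the Nyquist rate the replicas overlap and no interpolation filter can undo the aliasing, which is the distortion mechanism the abstract describes.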

  17. Sensorless speed detection of squirrel-cage induction machines using stator neutral point voltage harmonics

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Kilic, Tomislav; Terzic, Bozo

    2009-04-01

    In this paper a sensorless speed detection method for squirrel-cage induction machines is presented. The method is based on determining the frequency of the primary slot harmonic of the stator neutral-point voltage, which depends on rotor speed. To validate the method under steady-state and dynamic conditions, simulation and experimental studies were carried out. For the theoretical investigation, a mathematical model of squirrel-cage induction machines that takes into account the actual geometry and winding layout is used. Speed-related harmonics arising from rotor slotting are analyzed using digital signal processing, namely a DFT algorithm with a Hanning window. The performance of the method is demonstrated over a wide range of load conditions.
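    The windowed-DFT step can be sketched in a few lines (our illustration, not the authors' code): a synthetic neutral-point voltage containing a weak slot harmonic is Hann-windowed and the harmonic's frequency is read off the spectrum peak. The sampling rate, slot-harmonic frequency and amplitudes are all invented.

```python
import numpy as np

# Locate a speed-dependent slot harmonic with a Hann-windowed DFT.
fs = 10_000.0                    # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)    # 1 s record -> 1 Hz frequency bins

f_slot = 1237.0                  # hypothetical rotor-slot harmonic, Hz
v = (np.sin(2 * np.pi * 50 * t)                  # 50 Hz fundamental
     + 0.02 * np.sin(2 * np.pi * f_slot * t)     # weak slot harmonic
     + 0.01 * np.random.default_rng(0).normal(size=t.size))  # noise

# the Hann window suppresses spectral leakage from the strong fundamental
win = np.hanning(t.size)
spec = np.abs(np.fft.rfft(v * win))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# search only above the low-order harmonics of the fundamental
band = freqs > 1000
f_est = freqs[band][np.argmax(spec[band])]
print(f_est)  # close to 1237 Hz; rotor speed follows via the slot count
```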

  18. Multilevel perspective on high-order harmonic generation in solids

    NASA Astrophysics Data System (ADS)

    Wu, Mengxi; Browne, Dana A.; Schafer, Kenneth J.; Gaarde, Mette B.

    2016-12-01

    We investigate high-order harmonic generation in a solid, modeled as a multilevel system dressed by a strong infrared laser field. We show that the cutoff energies and the relative strengths of the multiple plateaus that emerge in the harmonic spectrum can be understood both qualitatively and quantitatively by considering a combination of adiabatic and diabatic processes driven by the strong field. Such a model was recently used to interpret the multiple plateaus exhibited in harmonic spectra generated by solid argon and krypton [G. Ndabashimiye et al., Nature 534, 520 (2016), 10.1038/nature17660]. We also show that when the multilevel system originates from the Bloch state at the Γ point of the band structure, the laser-dressed states are equivalent to the Houston states [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986), 10.1103/PhysRevB.33.5494] and will therefore map out the band structure away from the Γ point as the laser field increases. This leads to a semiclassical three-step picture in momentum space that describes the high-order harmonic generation process in a solid.

  19. The Use of Geostatistics in the Study of Floral Phenology of Vulpia geniculata (L.) Link

    PubMed Central

    León Ruiz, Eduardo J.; García Mozo, Herminia; Domínguez Vilches, Eugenio; Galán, Carmen

    2012-01-01

    Traditionally phenology studies have been focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and Geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics are a family of statistics that describe correlations through space/time and they can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon Geostatistics and GIS mapping have enabled the construction of spatial models that reflect phenological evolution of Vulpia geniculata (L.) Link throughout the study area during the sampling season. Ten sampling points, scattered throughout the city and low mountains in the “Sierra de Córdoba”, were chosen to carry out the weekly phenological monitoring during the flowering season. The phenological data were interpolated by applying the traditional geostatistical method of Kriging, which was used to elaborate weekly estimations of V. geniculata phenology in unsampled areas. Finally, the application of Geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps. PMID:22629169

  20. The use of geostatistics in the study of floral phenology of Vulpia geniculata (L.) link.

    PubMed

    León Ruiz, Eduardo J; García Mozo, Herminia; Domínguez Vilches, Eugenio; Galán, Carmen

    2012-01-01

    Traditionally phenology studies have been focused on changes through time, but there exist many instances in ecological research where it is necessary to interpolate among spatially stratified samples. The combined use of Geographical Information Systems (GIS) and Geostatistics can be an essential tool for spatial analysis in phenological studies. Geostatistics are a family of statistics that describe correlations through space/time and they can be used for both quantifying spatial correlation and interpolating unsampled points. In the present work, estimations based upon Geostatistics and GIS mapping have enabled the construction of spatial models that reflect phenological evolution of Vulpia geniculata (L.) Link throughout the study area during the sampling season. Ten sampling points, scattered throughout the city and low mountains in the "Sierra de Córdoba", were chosen to carry out the weekly phenological monitoring during the flowering season. The phenological data were interpolated by applying the traditional geostatistical method of Kriging, which was used to elaborate weekly estimations of V. geniculata phenology in unsampled areas. Finally, the application of Geostatistics and GIS to create phenological maps could be an essential complement in pollen aerobiological studies, given the increased interest in obtaining automatic aerobiological forecasting maps.
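    The Kriging step the record describes can be sketched with a minimal ordinary-kriging implementation (our illustration, not the paper's GIS workflow): the coordinates, values and the exponential variogram parameters below are all invented.

```python
import numpy as np

# Minimal ordinary kriging with an assumed exponential variogram.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.2]])
vals = np.array([1.0, 2.0, 1.5, 2.5, 1.2])   # e.g. phenological phase values

def gamma(h, sill=1.0, rng=1.5):
    # exponential variogram model (parameters assumed, not fitted)
    return sill * (1.0 - np.exp(-h / rng))

def krige(x0):
    d = np.linalg.norm(pts - x0, axis=1)
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    n = len(pts)
    # ordinary-kriging system with a Lagrange multiplier enforcing
    # that the weights sum to one (unbiasedness)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(D)
    A[n, n] = 0.0
    b = np.append(gamma(d), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ vals

# kriging is an exact interpolator: at a sampled point it returns the datum
print(krige(np.array([0.5, 0.2])))
```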

  1. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguertie

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with the airborne laser bathymetry data in order to populate data points in all areas, particularly those of the data holes. An elevation surface is then generated by spatial interpolation based on the merged data points obtained in the first step. We conducted a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the seafloor with this method. Four interpolation techniques, Kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, this enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
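    This kind of interpolator comparison can be sketched with scipy.interpolate.griddata. SciPy ships no natural-neighbour routine, so 'nearest', 'linear' and 'cubic' stand in here; the soundings are synthetic, not Dry Tortugas data.

```python
import numpy as np
from scipy.interpolate import griddata

# Compare gridding methods on synthetic scattered depth soundings.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(200, 2))           # sounding positions
depth = -5 - np.sin(xy[:, 0]) - 0.3 * xy[:, 1]   # smooth synthetic seafloor

gx, gy = np.meshgrid(np.linspace(1, 9, 50), np.linspace(1, 9, 50))
truth = -5 - np.sin(gx) - 0.3 * gy

rmse = {}
for method in ("nearest", "linear", "cubic"):
    z = griddata(xy, depth, (gx, gy), method=method)
    rmse[method] = float(np.sqrt(np.nanmean((z - truth) ** 2)))
print(rmse)  # smoother methods track the smooth surface more closely
```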

  2. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the estimated atmospheric corrections obtained from the network data are random and, furthermore, the interpolated corrections diverge from the true corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of the estimated corrections at the reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of the interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of the interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  3. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the estimated atmospheric corrections obtained from the network data are random and, furthermore, the interpolated corrections diverge from the true corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of the estimated corrections at the reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of the interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of the interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.

  4. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
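    The geodesic interpolation of 2-frames can be sketched via principal angles and vectors, a simplified reading of the scheme above rather than the authors' implementation: the SVD of the inner-product matrix of the two frames pairs up principal directions, and each pair is rotated through its principal angle.

```python
import numpy as np

# Geodesic interpolation between two orthonormal 2-frames in n-space.
def geodesic_frames(A, B, t):
    """A, B: (n, 2) orthonormal frames. Returns an orthonormal 2-frame at
    fraction t along the geodesic between the planes spanned by A and B."""
    U, s, Vt = np.linalg.svd(A.T @ B)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
    Ap, Bp = A @ U, B @ Vt.T                   # paired principal vectors
    F = np.empty_like(A)
    for i in range(2):
        if theta[i] < 1e-12:                   # planes share this direction
            F[:, i] = Ap[:, i]
        else:
            # unit vector orthogonal to Ap[:, i] in the i-th rotation plane
            G = (Bp[:, i] - np.cos(theta[i]) * Ap[:, i]) / np.sin(theta[i])
            F[:, i] = np.cos(t * theta[i]) * Ap[:, i] + np.sin(t * theta[i]) * G
    return F

# demo: interpolate halfway between two random planes in R^5
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.normal(size=(5, 2)))
B, _ = np.linalg.qr(rng.normal(size=(5, 2)))
F = geodesic_frames(A, B, 0.5)   # intermediate frame, still orthonormal
```

Because the rotation planes of distinct principal-angle pairs are mutually orthogonal, every intermediate frame stays orthonormal, which is what makes the projection path smooth for animation.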

  5. Traveling-Wave Tube Amplifier Second Harmonic as Millimeter-Wave Beacon Source for Atmospheric Propagation Studies

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Wintucky, Edwin G.

    2014-01-01

    The design and test results of a novel waveguide multimode directional coupler for a CW millimeter-wave satellite beacon source are presented. The coupler separates the second harmonic power from the fundamental output power of a traveling-wave tube amplifier. A potential application of the beacon source is for investigating the atmospheric effects on Q-band (37 to 42 GHz) and VW-band (71 to 76 GHz) satellite-to-ground signals.

  6. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image, before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most often the case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
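    The idea of even-length cubic filters realizing a fractional shift can be sketched with the standard Keys cubic-convolution kernel (our illustration, not the paper's filters): evaluating the 4-tap kernel at quarter-sample phases gives the two polyphase filters of a 2x upsampler whose output grid sits half an output pixel off the input grid.

```python
import numpy as np

def keys(x, a=-0.5):
    # Keys (Catmull-Rom) cubic convolution kernel
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * (x**3 - 5 * x**2 + 8 * x - 4)
    return 0.0

def taps(phase):
    # weights on input samples n-1, n, n+1, n+2 for an output at n + phase
    return np.array([keys(phase + 1), keys(phase),
                     keys(1 - phase), keys(2 - phase)])

x = np.arange(10, dtype=float)   # a ramp: cubic interpolation is exact on it
h = taps(0.25)                   # one of the two quarter-phase 2x filters
y = h @ x[3:7]                   # interpolated value at position 4.25
print(h.sum(), y)                # 1.0 4.25
```

The two phases 0.25 and 0.75 interleave into an even-length linear-phase filter, so no separate half-pixel registration pass is needed after interpolation.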

  7. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    PubMed

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

    One important method to obtain continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
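    The trend-plus-residual structure described above can be sketched in miniature (our simplification: a linear trend and inverse-distance residual interpolation stand in for the paper's ensemble learner, and all numbers are synthetic).

```python
import numpy as np

# Trend + residual interpolation of a synthetic soil-potassium field.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))                       # sample sites
k = 20 + 0.8 * xy[:, 0] - 0.5 * xy[:, 1] + rng.normal(0, 0.3, 50)

# 1) fit the spatial trend by least squares
A = np.column_stack([np.ones(50), xy])
coef, *_ = np.linalg.lstsq(A, k, rcond=None)
resid = k - A @ coef

# 2) interpolate the residual at a new point (IDW as a stand-in learner)
def predict(p):
    d = np.linalg.norm(xy - p, axis=1)
    w = 1.0 / (d**2 + 1e-9)
    return np.array([1.0, *p]) @ coef + (w @ resid) / w.sum()

print(predict(np.array([5.0, 5.0])))  # trend value plus local residual
```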

  8. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN.

    PubMed

    Sen, Alper; Gümüsay, M Umit; Kavas, Aktül; Bulucu, Umut

    2008-09-25

    Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracies of Artificial Neural Networks (ANN) and geostatistical Kriging were compared using adjustment procedures. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage; the number, position and transmitter power of the access points; and the electromagnetic radiation level. A pollution analysis in a given propagation environment was performed, and it was demonstrated that the WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution, owing to the low power levels used. Example interpolated electromagnetic field values for a WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.

  9. Programming an Artificial Neural Network Tool for Spatial Interpolation in GIS - A Case Study for Indoor Radio Wave Propagation of WLAN

    PubMed Central

    Şen, Alper; Gümüşay, M. Ümit; Kavas, Aktül; Bulucu, Umut

    2008-01-01

    Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracies of Artificial Neural Networks (ANN) and geostatistical Kriging were compared using adjustment procedures. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage; the number, position and transmitter power of the access points; and the electromagnetic radiation level. A pollution analysis in a given propagation environment was performed, and it was demonstrated that the WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution, owing to the low power levels used. Example interpolated electromagnetic field values for a WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN. PMID:27873854
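    A feed-forward back-propagation network for 2-D field interpolation can be written from scratch in a few dozen lines (our sketch, not the authors' GIS tool; the measurement field is synthetic).

```python
import numpy as np

# Tiny 1-hidden-layer MLP trained by full-batch gradient descent to
# interpolate a scalar field sampled at scattered 2-D positions.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))        # measurement positions
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]      # synthetic field strength

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err**2)))
    # back-propagation of the mean-squared-error gradient
    g2 = err[:, None] / len(X)
    dW2 = h.T @ g2; db2 = g2.sum(0)
    gh = g2 @ W2.T * (1 - h**2)              # tanh derivative
    dW1 = X.T @ gh; db1 = gh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0] > losses[-1])  # training reduces the interpolation error
```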

  10. Harmonic analysis and suppression in hybrid wind & PV solar system

    NASA Astrophysics Data System (ADS)

    Gupta, Tripti; Namekar, Swapnil

    2018-04-01

    The growing demand for electricity has led to power production from non-conventional sources of energy such as solar, wind, hydro power, biogas and biomass. A hybrid system is used to complement the shortcomings of either source of energy. The proposed system is a grid-connected hybrid wind and solar system. A 2.1 MW doubly fed induction generator (DFIG) is taken for analysis of the wind farm, with its rotor connected to two back-to-back converters. A 250 kW photovoltaic (PV) array is taken to analyze the solar farm, where an inverter is required to convert power from DC to AC, since the electricity generated by solar PV is DC. Stability and reliability of the system are very important when the system is grid connected. Harmonics are the major power quality issue degrading the quality of power at the load side. Harmonics in the hybrid system arise through the use of power conversion units; further causes are fluctuations in wind speed and solar irradiance. The power delivered to the grid must be free from harmonics and within the limits specified by Indian grid codes. In the proposed work, harmonic analysis of the hybrid system is performed in the Electrical Transient Analyzer Program (ETAP), and a single-tuned harmonic filter is designed to keep the utility grid harmonics within limits.
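    Sizing a single-tuned filter follows textbook relations rather than anything specific to this paper; the sketch below (with assumed voltage, var rating and target harmonic) picks the capacitance from the fundamental reactive-power support and then tunes the series inductor to the harmonic.

```python
import math

# Back-of-envelope single-tuned harmonic filter sizing (generic relations;
# f1, h, Q_var and V are assumed values, not the paper's design data).
f1 = 50.0        # fundamental frequency, Hz (Indian grid)
h = 5            # target harmonic order (assumed)
Q_var = 100e3    # reactive power supplied at the fundamental, var (assumed)
V = 415.0        # line voltage, V (assumed)

w1 = 2 * math.pi * f1
# capacitance chosen so the branch delivers Q_var at the fundamental,
# accounting for the series inductor: Q = V^2 * w1 * C * h^2 / (h^2 - 1)
C = Q_var / (V**2 * w1) * (h**2 - 1) / h**2
L = 1.0 / ((h * w1)**2 * C)          # series resonance at h * f1

f_tuned = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(round(f_tuned, 1))  # 250.0 Hz, the 5th harmonic of 50 Hz
```

At the tuned frequency the branch impedance collapses to its small resistance, shunting the 5th-harmonic current away from the grid.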

  11. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  12. Tutor System as a Source of Harmonizing the Educational System with the Needs of Economics

    ERIC Educational Resources Information Center

    Korsakova, Tatiana; Korsakov, Mikhail

    2017-01-01

    The purpose of this study is to identify the sources of harmonizing employers' orders in business and graduates of higher education. According to challenges posed by the economic environment the development of tutor-support system has the great potential to solve the problem. In the paper trends of modern specialists' educational preparation are…

  13. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.

  14. Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation

    NASA Astrophysics Data System (ADS)

    Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel

    2017-09-01

    Optical freeform surfaces are very popular today in such fields as lighting systems, sensors, photovoltaic concentrators, and others. The application of such surfaces allows one to obtain systems of a new quality with a reduced number of optical components, ensuring high consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using cubic spline interpolation in computer simulation.
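    The cubic-spline step can be sketched with SciPy (our illustration, not the authors' synthesis code): a surface profile defined at a few design points is interpolated into a C2-continuous curve whose derivative supplies the local slope needed for ray aiming. The radial points and sag values below are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A refractive-surface profile defined at a handful of design points,
# smoothed by cubic-spline interpolation (all numbers are hypothetical).
r = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])      # radial coordinate, mm
z = np.array([0.0, 0.15, 0.55, 1.10, 1.70, 2.20])  # surface sag, mm

sag = CubicSpline(r, z)     # C2-continuous profile through the points
slope = sag.derivative()    # surface slope, used to refract rays

print(float(sag(5.0)))      # sag evaluated between design points
```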

  15. Long-term operation of surface high-harmonic generation from relativistic oscillating mirrors using a spooling tape

    DOE PAGES

    Bierbach, Jana; Yeung, Mark; Eckner, Erich; ...

    2015-05-01

    Surface high-harmonic generation in the relativistic regime is demonstrated as a source of extreme ultra-violet (XUV) pulses with extended operation time. Relativistic high-harmonic generation is driven by a frequency-doubled high-power Ti:Sapphire laser focused to a peak intensity of 3·10^19 W/cm^2 onto spooling tapes. We demonstrate continuous operation over up to one hour of runtime at a repetition rate of 1 Hz. Harmonic spectra ranging from 20 eV to 70 eV (62 nm to 18 nm) were consecutively recorded by an XUV spectrometer. An average XUV pulse energy in the µJ range is measured. With the presented setup, relativistic surface high-harmonic generation becomes a powerful source of coherent XUV pulses that might enable applications in, e.g., attosecond laser physics and the seeding of free-electron lasers, once the laser issues causing 80% pulse-energy fluctuations are overcome.

  16. GMI-IPS: Python Processing Software for Aircraft Campaigns

    NASA Technical Reports Server (NTRS)

    Damon, M. R.; Strode, S. A.; Steenrod, S. D.; Prather, M. J.

    2018-01-01

    NASA's Atmospheric Tomography Mission (ATom) seeks to understand the impact of anthropogenic air pollution on gases in the Earth's atmosphere. Four flight campaigns are being deployed on a seasonal basis to establish a continuous global-scale data set intended to improve the representation of chemically reactive gases in global atmospheric chemistry models. The Global Modeling Initiative (GMI), is creating chemical transport simulations on a global scale for each of the ATom flight campaigns. To meet the computational demands required to translate the GMI simulation data to grids associated with the flights from the ATom campaigns, the GMI ICARTT Processing Software (GMI-IPS) has been developed and is providing key functionality for data processing and analysis in this ongoing effort. The GMI-IPS is written in Python and provides computational kernels for data interpolation and visualization tasks on GMI simulation data. A key feature of the GMI-IPS, is its ability to read ICARTT files, a text-based file format for airborne instrument data, and extract the required flight information that defines regional and temporal grid parameters associated with an ATom flight. Perhaps most importantly, the GMI-IPS creates ICARTT files containing GMI simulated data, which are used in collaboration with ATom instrument teams and other modeling groups. The initial main task of the GMI-IPS is to interpolate GMI model data to the finer temporal resolution (1-10 seconds) of a given flight. The model data includes basic fields such as temperature and pressure, but the main focus of this effort is to provide species concentrations of chemical gases for ATom flights. The software, which uses parallel computation techniques for data intensive tasks, linearly interpolates each of the model fields to the time resolution of the flight. The temporally interpolated data is then saved to disk, and is used to create additional derived quantities. 
In order to translate the GMI model data to the spatial grid of the flight path as defined by the pressure, latitude, and longitude points at each flight time record, a weighted average is then calculated from the nearest neighbors in two dimensions (latitude, longitude). Using SciPy's RegularGridInterpolator, interpolation functions are generated for the GMI model grid and the calculated weighted averages. The flight path points are then extracted from the ATom ICARTT instrument file and are sent to the multi-dimensional interpolating functions to generate GMI field quantities along the spatial path of the flight. The interpolated field quantities are then written to an ICARTT data file, which is stored for further manipulation. The GMI-IPS is aware of a generic ATom ICARTT header format containing basic information for all flight campaigns. The GMI-IPS includes logic to edit metadata for the derived field quantities, as well as to modify the generic header data such as processing dates and associated instrument files. The ICARTT interpolated data is then appended to the modified header data, and the ICARTT processing is complete for the given flight and ready for collaboration. The output ICARTT data adheres to the ICARTT file format standards V1.1. The visualization component of the GMI-IPS uses Matplotlib extensively and has several functions ranging in complexity. First, it creates a model background curtain for the flight (time versus model eta levels) with the interpolated flight data superimposed on the curtain. Secondly, it creates a time-series plot of the interpolated flight data. Lastly, the visualization component creates averaged 2D model slices (longitude versus latitude) with overlaid flight track circles at key pressure levels. The GMI-IPS consists of a handful of classes and supporting functionality that have been generalized to be compatible with any ICARTT file that adheres to the base class definition.
The base class represents a generic ICARTT entry, defining only a single time entry and 3D spatial positioning parameters. Other classes inherit from this base class: several classes for input ICARTT instrument files, which contain the necessary flight positioning information as a basis for data processing, as well as other classes for output ICARTT files, which contain the interpolated model data. Utility classes provide functionality for routine procedures such as comparing field names among ICARTT files, reading ICARTT entries from a data file and storing them in data structures, and returning a reduced spatial grid based on a collection of ICARTT entries. Although the GMI-IPS is designed for GMI model data, it can be adapted with reasonable effort for any simulation that creates Hierarchical Data Format (HDF) files. The same can be said of its adaptability to ICARTT files outside of the context of the ATom mission. The GMI-IPS contains just under 30,000 lines of code, eight classes, and a dozen drivers and utility programs. It is maintained with Git source code management and has been used to deliver processed GMI model data for the ATom campaigns that have taken place to date.
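The two-stage pipeline described above, linear interpolation in time followed by spatial interpolation to the flight path with SciPy's `RegularGridInterpolator`, can be sketched as follows. The grid resolution, field values, model times, and flight path below are hypothetical stand-ins for real GMI output and ATom ICARTT records:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse model grid: a temperature field on (lat, lon).
lats = np.linspace(-90.0, 90.0, 19)                 # 10-degree grid
lons = np.linspace(0.0, 350.0, 36)
temp = 288.0 - 0.5 * np.abs(lats)[:, None] + 0.0 * lons[None, :]

# Stage 1: linear interpolation in time (hourly model output -> flight cadence).
model_times = np.array([0.0, 3600.0])               # seconds
field_at_times = np.stack([temp, temp - 1.0])       # field at the two model times
flight_times = np.arange(0.0, 3600.0, 600.0)
w = (flight_times - model_times[0]) / (model_times[1] - model_times[0])
field_flight = ((1.0 - w)[:, None, None] * field_at_times[0]
                + w[:, None, None] * field_at_times[1])

# Stage 2: interpolate each time slice to the (lat, lon) of the flight path.
path = np.column_stack([np.linspace(10.0, 20.0, len(flight_times)),
                        np.linspace(100.0, 110.0, len(flight_times))])
samples = np.empty(len(flight_times))
for i in range(len(flight_times)):
    interp = RegularGridInterpolator((lats, lons), field_flight[i])
    samples[i] = interp(path[i].reshape(1, 2))[0]
```

The result is one model value per flight time record, which is the shape of data an output ICARTT file would carry.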

  17. Interpolated memory tests reduce mind wandering and improve learning of online lectures.

    PubMed

    Szpunar, Karl K; Khan, Novall Y; Schacter, Daniel L

    2013-04-16

    The recent emergence and popularity of online educational resources brings with it challenges for educators to optimize the dissemination of online content. Here we provide evidence that points toward a solution for the difficulty that students frequently report in sustaining attention to online lectures over extended periods. In two experiments, we demonstrate that the simple act of interpolating online lectures with memory tests can help students sustain attention to lecture content in a manner that discourages task-irrelevant mind wandering activities, encourages task-relevant note-taking activities, and improves learning. Importantly, frequent testing was associated with reduced anxiety toward a final cumulative test and also with reductions in subjective estimates of cognitive demand. Our findings suggest a potentially key role for interpolated testing in the development and dissemination of online educational content.

  18. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

This paper discusses the use of a piecewise interpolation method, based on cubic Ball and Bézier curve representations, to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset collected at the Alor Setar Meteorology Station is obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the starts and ends of the intervals. We compare the performance of our proposed method with existing methods using the Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD), based on simulated missing-value datasets. The results show that our method outperforms the previous methods.
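Prescribing first-order derivatives at the ends of each interval makes a cubic Ball or Bézier piece equivalent to a cubic Hermite segment, so the missing-value procedure and its RMSE/CoD scoring can be sketched as below. The synthetic half-sine "radiation" curve and the one-sided slope estimates are illustrative assumptions, not the paper's data or exact construction:

```python
import numpy as np

def hermite(t, p0, p1, m0, m1):
    """Cubic Hermite segment on t in [0, 1] with end values p0, p1 and
    end derivatives m0, m1 (already scaled by the interval length)."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

# Synthetic hourly "solar radiation": half sine over the daylight hours.
hours = np.arange(6, 19)
rad = 900.0 * np.sin(np.pi * (hours - 6) / 12.0)

# Simulate a missing value at each interior hour, estimate it from its
# neighbours with a Hermite segment spanning the 2 h gap, then score.
truth, est = [], []
for m in range(2, len(hours) - 2):
    m0 = rad[m-1] - rad[m-2]          # one-sided slope at the left end
    m1 = rad[m+2] - rad[m+1]          # one-sided slope at the right end
    # the interval [m-1, m+1] spans 2 h, so the slopes are scaled by 2
    truth.append(rad[m])
    est.append(hermite(0.5, rad[m-1], rad[m+1], 2*m0, 2*m1))
truth, est = np.array(truth), np.array(est)

rmse = np.sqrt(np.mean((truth - est)**2))
cod = 1.0 - np.sum((truth - est)**2) / np.sum((truth - truth.mean())**2)
```

On this smooth curve the RMSE is a small fraction of the 900 W/m² amplitude and the CoD is close to 1, which is the kind of score the paper's comparison reports.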

  19. Interpolated memory tests reduce mind wandering and improve learning of online lectures

    PubMed Central

    Szpunar, Karl K.; Khan, Novall Y.; Schacter, Daniel L.

    2013-01-01

    The recent emergence and popularity of online educational resources brings with it challenges for educators to optimize the dissemination of online content. Here we provide evidence that points toward a solution for the difficulty that students frequently report in sustaining attention to online lectures over extended periods. In two experiments, we demonstrate that the simple act of interpolating online lectures with memory tests can help students sustain attention to lecture content in a manner that discourages task-irrelevant mind wandering activities, encourages task-relevant note-taking activities, and improves learning. Importantly, frequent testing was associated with reduced anxiety toward a final cumulative test and also with reductions in subjective estimates of cognitive demand. Our findings suggest a potentially key role for interpolated testing in the development and dissemination of online educational content. PMID:23576743

  20. The Atmospheric Data Acquisition And Interpolation Process For Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Jardin, M. R.; Erzberger, H.; Denery, Dallas G. (Technical Monitor)

    1995-01-01

The Center-TRACON Automation System (CTAS), an advanced new air traffic automation program, requires knowledge of spatial and temporal atmospheric conditions, such as wind speed and direction, temperature, and pressure, in order to accurately predict aircraft trajectories. Real-time atmospheric data is available in a grid format, so CTAS must interpolate between the grid points to estimate the atmospheric parameter values. The atmospheric data grid is generally not in the same coordinate system as that used by CTAS, so coordinate conversions are required. Both the interpolation and coordinate conversion processes can introduce errors into the atmospheric data and reduce interpolation accuracy. More accurate algorithms may be computationally expensive or may require a prohibitively large amount of data storage capacity, so trade-offs must be made between accuracy and the available computational and data storage resources. The atmospheric data acquisition and processing employed by CTAS will be outlined in this report. The effects of atmospheric data processing on CTAS trajectory prediction will also be analyzed, and several examples of the trajectory prediction process will be given.
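A minimal version of the between-grid-point interpolation step is bilinear interpolation in the horizontal plane. The field and coordinates below are illustrative; a real CTAS implementation would also convert between the weather grid's coordinate system and its own before interpolating:

```python
import numpy as np

def bilinear(x, y, xs, ys, grid):
    """Bilinear interpolation of grid[i, j] = f(xs[i], ys[j]) at (x, y)."""
    i = np.searchsorted(xs, x) - 1          # cell containing x
    j = np.searchsorted(ys, y) - 1          # cell containing y
    tx = (x - xs[i]) / (xs[i+1] - xs[i])    # fractional position in the cell
    ty = (y - ys[j]) / (ys[j+1] - ys[j])
    return ((1-tx)*(1-ty)*grid[i, j] + tx*(1-ty)*grid[i+1, j]
            + (1-tx)*ty*grid[i, j+1] + tx*ty*grid[i+1, j+1])

# A linear "wind speed" field: bilinear interpolation reproduces it exactly,
# so any error on such a field would come from the coordinate conversion.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 2.0])
wind = 2.0 * xs[:, None] + 3.0 * ys[None, :]
```

For example, `bilinear(0.25, 0.75, xs, ys, wind)` returns 2.75, matching 2(0.25) + 3(0.75) exactly; curvature in the true field is what introduces interpolation error.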

  1. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
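A piecewise-linear disparity map means that, inside any one triangle of the tessellation, the disparity at a pixel is the barycentric blend of the three vertex disparities. A minimal sketch with hypothetical vertex positions and disparities:

```python
import numpy as np

def barycentric_disparity(p, tri, d):
    """Piecewise-linear disparity inside one triangle: interpolate the
    vertex disparities d with the barycentric coordinates of point p."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])   # 2x2 matrix of edge vectors
    u, v = np.linalg.solve(m, p - a)      # local coordinates of p
    w = np.array([1.0 - u - v, u, v])     # barycentric weights (sum to 1)
    return w @ d

# Hypothetical triangle with a disparity value at each vertex.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
d = np.array([0.0, 2.0, 4.0])
```

This is the same interpolation a GPU rasterizer performs per fragment, which is why rendering the triangulated mesh yields the synthesized views so cheaply.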

  2. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
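The core of GIE as described can be sketched as a sum of kernels, one per species, centred on the range centroid and scaled by the centroid-to-farthest-occurrence distance. The Gaussian kernel, the coordinates, and the bandwidth rule below are illustrative assumptions; the authors' actual kernel choice may differ:

```python
import numpy as np

def gie_surface(grid_pts, centroids, bandwidths):
    """Endemism surface: each species adds a kernel at its range centroid,
    with bandwidth h = distance from the centroid to the species' farthest
    occurrence record (a Gaussian kernel is assumed here)."""
    dens = np.zeros(len(grid_pts))
    for c, h in zip(centroids, bandwidths):
        d2 = np.sum((grid_pts - c)**2, axis=1)
        dens += np.exp(-d2 / (2.0 * h**2))
    return dens

# Two species with overlapping ranges, evaluated at an overlap point
# and at a distant point.
centroids = np.array([[0.0, 0.0], [1.0, 0.0]])
bandwidths = np.array([1.0, 1.0])
pts = np.array([[0.5, 0.0], [3.0, 0.0]])
dens = gie_surface(pts, centroids, bandwidths)
```

The surface peaks where several species' kernels overlap, which is exactly the fuzzy-edged, grid-free signal of an area of endemism.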

  3. Mathematical modeling and statistical analysis of SPE-OCDMA systems utilizing second harmonic generation effect in thick crystal receivers

    NASA Astrophysics Data System (ADS)

    Matinfar, Mehdi D.; Salehi, Jawad A.

    2009-11-01

In this paper we analytically study and evaluate the performance of a Spectral-Phase-Encoded Optical CDMA (SPE-OCDMA) system for different parameters such as the user's code length and the number of users in the network. In this system, an advanced receiver structure is employed in which the Second Harmonic Generation effect in a thick crystal acts as a nonlinear pre-processor ahead of a conventional low-speed photodetector. We consider the ASE noise of the optical amplifiers, which is significant in low-power conditions, in addition to the multiple access interference (MAI) noise that is the dominant noise source in any OCDMA communications system. We use the results of previous work, in which we analyzed the statistical behavior of thick crystals in an optically amplified digital lightwave communication system, to evaluate the performance of the SPE-OCDMA system with a thick-crystal receiver structure. The error probability is evaluated using the saddle-point approximation, and the approximation is verified by Monte Carlo simulation.

  4. Single-Phase Single-Stage Grid Tied Solar PV System with Active Power Filtering Using Power Balance Theory

    NASA Astrophysics Data System (ADS)

    Singh, Yashi; Hussain, Ikhlaq; Singh, Bhim; Mishra, Sukumar

    2018-06-01

    In this paper, power quality features such as harmonics mitigation, power factor correction with active power filtering are addressed in a single-stage, single-phase solar photovoltaic (PV) grid tied system. The Power Balance Theory (PBT) with perturb and observe based maximum power point tracking algorithm is proposed for the mitigation of power quality problems in a solar PV grid tied system. The solar PV array is interfaced to a single phase AC grid through a Voltage Source Converter (VSC), which provides active power flow from a solar PV array to the grid as well as to the load and it performs harmonics mitigation using PBT based control. The solar PV array power varies with sunlight and due to this, the solar PV grid tied VSC works only 8-10 h per day. At night, when PV power is zero, the VSC works as an active power filter for power quality improvement, and the load active power is delivered by the grid to the load connected at the point of common coupling. This increases the effective utilization of a VSC. The system is modelled and simulated using MATLAB and simulated responses of the system at nonlinear loads and varying environmental conditions are also validated experimentally on a prototype developed in the laboratory.

  5. Single-Phase Single-Stage Grid Tied Solar PV System with Active Power Filtering Using Power Balance Theory

    NASA Astrophysics Data System (ADS)

    Singh, Yashi; Hussain, Ikhlaq; Singh, Bhim; Mishra, Sukumar

    2018-03-01

    In this paper, power quality features such as harmonics mitigation, power factor correction with active power filtering are addressed in a single-stage, single-phase solar photovoltaic (PV) grid tied system. The Power Balance Theory (PBT) with perturb and observe based maximum power point tracking algorithm is proposed for the mitigation of power quality problems in a solar PV grid tied system. The solar PV array is interfaced to a single phase AC grid through a Voltage Source Converter (VSC), which provides active power flow from a solar PV array to the grid as well as to the load and it performs harmonics mitigation using PBT based control. The solar PV array power varies with sunlight and due to this, the solar PV grid tied VSC works only 8-10 h per day. At night, when PV power is zero, the VSC works as an active power filter for power quality improvement, and the load active power is delivered by the grid to the load connected at the point of common coupling. This increases the effective utilization of a VSC. The system is modelled and simulated using MATLAB and simulated responses of the system at nonlinear loads and varying environmental conditions are also validated experimentally on a prototype developed in the laboratory.

  6. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
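The payoff of storing derivatives at each grid point can be illustrated at low order with a cubic Hermite interpolant, which uses the value and first derivative at each node; the paper's Hermite divided-difference construction extends the same idea to order greater than 15. This sketch uses SciPy's `CubicHermiteSpline` on a sine wave, not the authors' solver:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Coarse grid over one wavelength: 8 intervals of a sine wave.
xs = np.linspace(0.0, 2.0 * np.pi, 9)
spline = CubicHermiteSpline(xs, np.sin(xs), np.cos(xs))  # values + derivatives

# Evaluate on a fine grid and measure the worst-case error.
xf = np.linspace(0.0, 2.0 * np.pi, 1001)
err = np.max(np.abs(spline(xf) - np.sin(xf)))
```

With only 9 nodes the maximum error is already on the order of 10⁻³ (the fourth-order Hermite bound h⁴/384 · max|f⁗|); packing more derivatives into each node is what lets the paper's methods resolve a wave with less than one grid point per wavelength.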

  7. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows to take into account landmark localization errors. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
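The difference between interpolating and approximating thin-plate splines can be sketched with SciPy's `RBFInterpolator`, which exposes a thin-plate-spline kernel and a smoothing parameter. The synthetic landmarks, the isotropic noise level, and the smoothing value below are illustrative; the paper's anisotropic-error formulation is not reproduced here:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
landmarks = rng.uniform(0.0, 10.0, size=(30, 2))
true_vals = landmarks[:, 0] + 0.5 * landmarks[:, 1]   # one displacement component
noisy = true_vals + rng.normal(0.0, 0.3, size=30)     # landmark localization error

# smoothing=0 interpolates the noisy landmarks exactly;
# smoothing>0 approximates them, absorbing the localization error.
tps_interp = RBFInterpolator(landmarks, noisy, kernel='thin_plate_spline',
                             smoothing=0.0)
tps_approx = RBFInterpolator(landmarks, noisy, kernel='thin_plate_spline',
                             smoothing=1000.0)

mse_interp = np.mean((tps_interp(landmarks) - true_vals)**2)
mse_approx = np.mean((tps_approx(landmarks) - true_vals)**2)
```

The interpolating spline reproduces the noisy landmarks, and hence their errors; the approximating spline recovers the underlying smooth deformation more accurately, which is the clinical motivation for the approximating formulation.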

  8. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  9. An investigation of in-flight near-field propeller noise generation and transmission

    NASA Astrophysics Data System (ADS)

    Bonneau, H.; Wilford, D. F.; Wood, L. K.

    1985-02-01

In-flight near-field propeller noise measurements, made on a General Aviation turboprop aircraft, are reported for a range of propeller operating conditions, and are shown to be well defined and reproducible. Measurements have been made at 8 exterior microphones, 2 located on a wing-mounted boom, and 6 embedded in, and flush with, the aircraft fuselage. Interior noise levels are also presented. Measured propeller harmonic levels are compared to first-principle calculations of near-field noise, using a modified version of the Farassat computer program, in which the blade surface pressure is described using the known aerodynamic properties of the blade (NACA 16) airfoil sections. The first few (and dominant) harmonic levels of propeller noise are shown to be well predicted, while higher harmonic levels are underpredicted. The transmission loss between exterior and interior noise levels is shown to be relatively constant for varying propeller operating conditions and at two different locations along the length of the fuselage. Interior noise levels are also shown for the aircraft in gliding flight at various forward velocities, with both engines at idle and propellers feathered. A method of interpolating these measurements is discussed, which allows the interior noise due only to the forward velocity of the aircraft to be determined. The transmission loss for this component is also discussed. Finally, interior noise levels are presented for a series of ground static tests with engine mounts of various stiffnesses.

  10. Source of the Kerr-Newman solution as a gravitating bag model: 50 years of the problem of the source of the Kerr solution

    NASA Astrophysics Data System (ADS)

    Burinskii, Alexander

    2016-01-01

It is known that the gravitational and electromagnetic fields of an electron are described by the ultra-extreme Kerr-Newman (KN) black hole solution with an extremely high spin/mass ratio. This solution is singular and has a topological defect, the Kerr singular ring, which may be regularized by introducing a solitonic source based on the Higgs mechanism of symmetry breaking. The source represents a domain wall bubble interpolating between the flat region inside the bubble and the external KN solution. It was shown recently that the source represents a supersymmetric bag model, and that its structure is unambiguously determined by the Bogomolnyi equations. The Dirac equation is embedded inside the bag consistently with the twistor structure of the Kerr geometry, and acquires its mass from the Yukawa coupling with the Higgs field. The KN bag turns out to be flexible, and for the parameters of an electron it takes the form of a very thin disk with a circular string placed along the sharp boundary of the disk. Excitation of this string by a traveling wave creates a circulating singular pole, indicating that the bag-like source of the KN solution unifies the dressed and point-like electron in a single bag-string-quark system.

  11. LCS-1: a high-resolution global model of the lithospheric magnetic field derived from CHAMP and Swarm satellite observations

    NASA Astrophysics Data System (ADS)

    Olsen, Nils; Ravat, Dhananjay; Finlay, Christopher C.; Kother, Livia K.

    2017-12-01

We derive a new model, named LCS-1, of Earth's lithospheric field based on four years (2006 September-2010 September) of magnetic observations taken by the CHAMP satellite at altitudes lower than 350 km, as well as almost three years (2014 April-2016 December) of measurements taken by the two lower Swarm satellites Alpha and Charlie. The model is determined entirely from magnetic 'gradient' data (approximated by finite differences): the north-south gradient is approximated by first differences of 15 s along-track data (for CHAMP and each of the two Swarm satellites), while the east-west gradient is approximated by the difference between observations taken by Swarm Alpha and Charlie. In total, we used 6.2 million data points. The model is parametrized by 35 000 equivalent point sources located on an almost equal-area grid at a depth of 100 km below the surface (WGS84 ellipsoid). The amplitudes of these point sources are determined by minimizing the misfit to the magnetic satellite 'gradient' data together with the global average of |Br| at the ellipsoid surface (i.e. applying an L1 model regularization of Br). In a final step, we transform the point-source representation to a spherical harmonic expansion. The model shows very good agreement with previous satellite-derived lithospheric field models at low degree (degree correlation above 0.8 for degrees n ≤ 133). Comparison with independent near-surface aeromagnetic data from Australia yields good agreement (coherence >0.55) at horizontal wavelengths down to at least 250 km, corresponding to spherical harmonic degree n ≈ 160. The LCS-1 vertical component and field intensity anomaly maps at Earth's surface show similar features to those exhibited by the WDMAM2 and EMM2015 lithospheric field models truncated at degree 185 in regions where they include near-surface data and provide unprecedented detail where they do not.
Example regions of improvement include the Bangui anomaly region in central Africa, the west African cratons, the East African Rift region, the Bay of Bengal, the southern 90°E ridge, the Cretaceous quiet zone south of the Walvis Ridge and the younger parts of the South Atlantic.

  12. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, and humidity, vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary in characterizing and predicting hydrologic response to these inputs. This is always true, but it is most critical during large storms, when the input to the stream system due to rain and snowmelt creates the potential for flooding. Besides such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and to explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement on the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005.
This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing so to support an accurate determination of precipitation phase for assessing catchment hydrologic response to the storm.
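Elevationally detrended interpolation separates the vertical trend from the horizontal structure: fit a precipitation-elevation lapse rate, interpolate the residuals horizontally, and restore the trend at the target elevation. A minimal sketch, in which inverse-distance weighting stands in for the kriging step and the station data are invented:

```python
import numpy as np

def detrended_interp(st_xy, st_z, st_p, q_xy, q_z):
    """Estimate precipitation at location q_xy and elevation q_z from
    stations at st_xy with elevations st_z and observations st_p."""
    # 1. Fit the elevation trend p ~ a + b*z (b plays the role of a lapse rate).
    A = np.column_stack([np.ones_like(st_z), st_z])
    coef, *_ = np.linalg.lstsq(A, st_p, rcond=None)
    resid = st_p - A @ coef
    # 2. Interpolate the residuals horizontally (IDW stand-in for kriging).
    d = np.linalg.norm(st_xy - q_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-9)**2
    r_hat = np.sum(w * resid) / np.sum(w)
    # 3. Restore the trend at the target elevation.
    return coef[0] + coef[1]*q_z + r_hat

# Invented stations: precipitation exactly linear in elevation.
st_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
st_z = np.array([100.0, 500.0, 900.0, 1300.0])
st_p = 2.0 + 0.003 * st_z
```

For these stations, `detrended_interp(st_xy, st_z, st_p, np.array([5.0, 5.0]), 700.0)` recovers the trend value at 700 m; outlier stations with strong local effects would show up as large residuals in step 2, which is the diagnostic the study uses.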

  13. The design of a multi-harmonic step-tunable gyrotron

    NASA Astrophysics Data System (ADS)

    Qi, Xiang-Bo; Du, Chao-Hai; Zhu, Juan-Feng; Pan, Shi; Liu, Pu-Kun

    2017-03-01

The theoretical study of a step-tunable gyrotron controlled by successive excitation of multi-harmonic modes is presented in this paper. An axis-encircling electron beam is employed to eliminate harmonic mode competition. Physical pictures are presented to elaborate the multi-harmonic interaction mechanism and to determine the operating parameters at which arbitrary harmonic tuning can be realized by magnetic field sweeping, achieving controlled radiation in multiple frequency bands. An important principle is revealed: a weak coupling coefficient under high-harmonic interaction can be compensated by a high Q-factor. To some extent, this complementarity between a high Q-factor and a weak coupling coefficient gives the high-harmonic modes the potential to achieve high efficiency. Based on a previously optimized magnetic cusp gun, the multi-harmonic step-tunable gyrotron is feasible using harmonic tuning of the first-to-fourth harmonic modes. Multimode simulation shows that the multi-harmonic gyrotron can operate on the 34 GHz first-harmonic TE11 mode, 54 GHz second-harmonic TE21 mode, 74 GHz third-harmonic TE31 mode, and 94 GHz fourth-harmonic TE41 mode, corresponding to peak efficiencies of 28.6%, 35.7%, 17.1%, and 11.4%, respectively. The multi-harmonic step-tunable gyrotron opens new possibilities in millimeter-wave and terahertz source development, especially for advanced terahertz applications.

  14. Rigid-Cluster Models of Conformational Transitions in Macromolecular Machines and Assemblies

    PubMed Central

    Kim, Moon K.; Jernigan, Robert L.; Chirikjian, Gregory S.

    2005-01-01

We present a rigid-body-based technique (called rigid-cluster elastic network interpolation) to generate feasible transition pathways between two distinct conformations of a macromolecular assembly. Many biological molecules and assemblies consist of domains which act more or less as rigid bodies during large conformational changes. These collective motions are thought to be strongly related to the functions of a system. This fact encourages us to simply model a macromolecule or assembly as a set of rigid bodies which are interconnected with distance constraints. In previous articles, we developed coarse-grained elastic network interpolation (ENI) in which, for example, only Cα atoms are selected as representatives in each residue of a protein. We interpolate distance differences of two conformations in ENI by using a simple quadratic cost function, and the feasible conformations are generated without steric conflicts. Rigid-cluster interpolation is an extension of the ENI method with rigid clusters replacing point masses. Now the intermediate conformations in an anharmonic pathway can be determined by the translational and rotational displacements of large clusters in such a way that distance constraints are observed. We present the derivation of the rigid-cluster model and apply it to a variety of macromolecular assemblies. Rigid-cluster ENI is then modified for a hybrid model represented by a mixture of rigid clusters and point masses. Simulation results show that both rigid-cluster and hybrid ENI methods generate sterically feasible pathways of large systems in a very short time. For example, the HK97 virus capsid is an icosahedral symmetric assembly composed of 60 identical asymmetric units. Its original Hessian matrix size for a Cα coarse-grained model is greater than 300,000 × 300,000. However, it reduces to 84 × 84 when we apply the rigid-cluster model with icosahedral symmetry constraints.
The computational cost of the interpolation no longer scales heavily with the size of structures; instead, it depends strongly on the minimal number of rigid clusters into which the system can be decomposed. PMID:15833998
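The rigid-cluster idea replaces per-atom displacements with one rigid-body transform per cluster; interpolating a single cluster's motion then amounts to blending its translation linearly and its rotation on the sphere (slerp). A sketch with SciPy's rotation utilities and a made-up three-atom cluster; the actual ENI cost function and distance constraints are not reproduced:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

cluster = np.array([[1.0, 0.0, 0.0],      # local coordinates of a
                    [0.0, 1.0, 0.0],      # hypothetical 3-atom cluster
                    [0.0, 0.0, 1.0]])

# End conformations: identity pose -> rotated 90 deg about z and translated.
rots = Rotation.from_euler('z', [0.0, 90.0], degrees=True)
t0, t1 = np.zeros(3), np.array([10.0, 0.0, 0.0])
slerp = Slerp([0.0, 1.0], rots)

def conformation(s):
    """Cluster coordinates at interpolation parameter s in [0, 1]."""
    return slerp(s).apply(cluster) + (1.0 - s)*t0 + s*t1

mid = conformation(0.5)   # 45-degree rotation, half the translation
```

Because each intermediate frame is a rigid transform, all intra-cluster distances are preserved along the pathway, which is what keeps the intermediate conformations sterically plausible at negligible cost.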

  15. Can even-order laser harmonics exhibited by Bohmian trajectories in symmetric potentials be observed?

    PubMed

    Peatross, J; Johansen, J

    2014-01-13

    Strong-field laser-atom interactions provide extreme conditions that may be useful for investigating the de Broglie-Bohm quantum interpretation. Bohmian trajectories representing bound electrons in individual atoms exhibit both even and odd harmonic motion when subjected to a strong external laser field. The phases of the even harmonics depend on the random initial positions of the trajectories within the wave function, making the even harmonics incoherent. In contrast, the phases of odd harmonics remain for the most part coherent regardless of initial position. Under the conjecture that a Bohmian point particle plays the role of emitter, this suggests an experiment to determine whether both even and odd harmonics are produced at the atomic level. Estimates suggest that incoherent emission of even harmonics may be detectable out the side of an intense laser focus interacting with a large number of atoms.

  16. Coherent soft X-ray high-order harmonics using tight-focusing laser pulses in the gas mixture.

    PubMed

    Lu, Faming; Xia, Yuanqin; Zhang, Sheng; Chen, Deying; Zhao, Yang; Liu, Bin

    2014-01-01

    We experimentally study harmonic generation from a Xe-He gas mixture using tight-focusing femtosecond laser pulses. The spectrum in the mixed gases exhibits an extended cutoff region from harmonic H21 to H27. A possible explanation is that harmonic photons generated from Xe promote electrons of He atoms into excited states, from which He atoms emit harmonics more readily. Furthermore, we show that harmonics H15 and H17 are suppressed in the mixed gases. The underlying mechanism is destructive interference between the harmonics generated from the different atomic species. Our results indicate that HHG from a Xe-He gas mixture is an efficient method of obtaining a coherent soft X-ray source.

  17. Solution of effective Hamiltonian of impurity hopping between two sites in a metal

    NASA Astrophysics Data System (ADS)

    Ye, Jinwu

    1998-03-01

    We analyze in detail all the possible fixed points of the effective Hamiltonian, obtained by Moustakas and Fisher (MF), of a non-magnetic impurity hopping between two sites in a metal. We find a line of non-Fermi-liquid fixed points which continuously interpolates between the two-channel Kondo (2CK) fixed point and the one-channel, two-impurity Kondo (2IK) fixed point. There is one relevant direction with scaling dimension 1/2 and one leading irrelevant operator with dimension 3/2. There is also one marginal operator in the spin sector moving along this line. The additional non-Fermi-liquid fixed point found by MF has the same symmetry as the 2IK; it has two relevant directions with scaling dimension 1/2 and is therefore also unstable. The system is shown to flow to a line of Fermi-liquid fixed points which continuously interpolates between the non-interacting fixed point and the two-channel spin-flavor Kondo (2CSFK) fixed point discussed by the author previously. The effect of particle-hole symmetry breaking is discussed. The effective Hamiltonian in an external magnetic field is analyzed. The scaling functions for the physically measurable quantities are derived in the different regimes, and their predictions for experiments are given. Finally, the implications are given for a non-magnetic impurity hopping among three sites with triangular symmetry, as discussed by MF.

  18. On a focal point instability in (B3Πg - C3Πu)N2 optogalvanic circuit with hollow cathode

    NASA Astrophysics Data System (ADS)

    Gencheva, V.

    2016-03-01

    The (B3Πg, v = 0 - C3Πu, v = 0) N2 dynamic optogalvanic signals have been registered by illuminating an Al hollow cathode lamp with a pulsed N2 laser generating at a wavelength of 337.1 nm. At a discharge current of 8 mA, the dynamic optogalvanic signal (DOGS) behaves as a damped harmonic oscillator, owing to a focal point instability produced by our optogalvanic circuit. This damped harmonic oscillator can be described as a solution of a linear, second-order homogeneous differential equation. The oscillation frequency is estimated from the registered DOGS using Fourier synthesis. The analytical description of the damped harmonic DOGS is obtained.
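    The frequency-estimation step can be sketched as follows, with a synthetic damped sinusoid standing in for the registered DOGS; the sample rate, frequency, and damping time are illustrative, not the paper's measured values:

    ```python
    import numpy as np

    # Synthetic damped harmonic signal standing in for the registered DOGS
    fs = 10_000.0            # sample rate, Hz (illustrative)
    t = np.arange(0.0, 0.1, 1.0 / fs)
    f0, tau = 850.0, 0.02    # oscillation frequency (Hz) and damping time (s)
    sig = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

    # Crude stand-in for the "Fourier synthesis" step: locate the FFT peak
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    f_est = freqs[np.argmax(spec)]
    ```

    Damping broadens the spectral line into a Lorentzian, but for a lightly damped oscillator the peak location still gives the oscillation frequency to within the FFT bin spacing.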

  19. Life test failure of harmonic gears in a Two-axis Gimbal for the Mars Reconnaissance Orbiter Spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, Michael R.; Gehling, Russ; Head, Ray

    2006-01-01

    This paper will present a process for increasing the stiffness of harmonic gear assemblies and recommend a maximum stiffness point that, if exceeded, compromises the reliability of the gear components for long life applications.

  20. Comparison of air-kerma strength determinations for HDR (192)Ir sources.

    PubMed

    Rasmussen, Brian E; Davis, Stephen D; Schmidt, Cal R; Micka, John A; Dewerd, Larry A

    2011-12-01

    To perform a comparison of the interim air-kerma strength standard for high dose rate (HDR) (192)Ir brachytherapy sources maintained by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) with measurements of the various source models using modified techniques from the literature. The current interim standard was established by Goetsch et al. in 1991 and has remained unchanged to date. The improved, laser-aligned seven-distance apparatus of the University of Wisconsin Medical Radiation Research Center (UWMRRC) was used to perform air-kerma strength measurements of five different HDR (192)Ir source models. The results of these measurements were compared with those from well chambers traceable to the original standard. Alternative methodologies for interpolating the (192)Ir air-kerma calibration coefficient from the NIST air-kerma standards at (137)Cs and 250 kVp x rays (M250) were investigated and intercompared. As part of the interpolation method comparison, the Monte Carlo code EGSnrc was used to calculate updated values of A(wall) for the Exradin A3 chamber used for air-kerma strength measurements. The effects of air attenuation and scatter, room scatter, as well as the solution method were investigated in detail. The average measurements when using the inverse N(K) interpolation method for the Classic Nucletron, Nucletron microSelectron, VariSource VS2000, GammaMed Plus, and Flexisource were found to be 0.47%, -0.10%, -1.13%, -0.20%, and 0.89% different than the existing standard, respectively. A further investigation of the differences observed between the sources was performed using MCNP5 Monte Carlo simulations of each source model inside a full model of an HDR 1000 Plus well chamber. 
Although the differences between the source models were found to be statistically significant, the equally weighted average difference between the seven-distance measurements and the well chambers was 0.01%, confirming that it is not necessary to update the current standard maintained at the UWADCL.

  1. Improved definition of crustal magnetic anomalies for MAGSAT data

    NASA Technical Reports Server (NTRS)

    Brown, R. D.; Frawley, J. F.; Davis, W. M.; Ray, R. D.; Didwall, E.; Regan, R. D. (Principal Investigator)

    1982-01-01

    The routine correction of MAGSAT vector magnetometer data for external field effects, such as the ring current and the daily variation, by filtering long-wavelength harmonics from the data is described. Separation of fields due to low-altitude sources from those caused by high-altitude sources is effected by means of dual harmonic expansions in the solution of Dirichlet's problem. This regression/harmonic filter procedure is applied on an orbit-by-orbit basis, and initial tests on MAGSAT data from orbit 1176 show reduction of external field residuals by 24.33 nT RMS in the horizontal component and 10.95 nT RMS in the radial component.

  2. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    NASA Astrophysics Data System (ADS)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

    We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves, as well as its recent reformulation to facilitate implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate the model under large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. 
In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.

  3. Estimating the frequency interval of a regularly spaced multicomponent harmonic line signal in colored noise

    NASA Astrophysics Data System (ADS)

    Frazer, Gordon J.; Anderson, Stuart J.

    1997-10-01

    The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model: x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P−1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t, t ∈ {0, …, N−1}, with f_i = k f_I + f_o, where the received signal x_t corresponds to the radar return from the target of interest in one azimuth-range cell. The signal has an unknown number of components P, unknown complex amplitudes A_i, and frequencies f_i. The frequency parameters f_o and f_I are unknown, although constrained such that f_o < f_I/2, and the parameter k ∈ {−u, …, −2, −1, 0, 1, 2, …, v} is constrained such that the component frequencies f_i lie within (−f_s/2, f_s/2). The noise term v_t is typically colored, and represents clutter, interference, and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
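    A much simpler substitute for Thomson's F-test conveys the estimation task: synthesize a comb of lines at f_i = k f_I + f_o, then score candidate intervals by summing spectral magnitude on the corresponding comb. This is a hypothetical grid-search sketch with white noise standing in for the colored term, not the paper's estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs, n = 1000.0, 4096
    t = np.arange(n) / fs
    f_int, f_off = 37.0, 5.0                 # true line interval f_I and offset f_o
    x = sum(np.cos(2 * np.pi * (k * f_int + f_off) * t) for k in range(-3, 4))
    x = x + 0.3 * rng.standard_normal(n)     # white-noise stand-in for the colored term

    spec = np.abs(np.fft.rfft(x))

    def comb_score(fi):
        """Total spectral magnitude on the comb k*fi + f_off (nearest rFFT bins)."""
        lines = np.abs(np.array([k * fi + f_off for k in range(-3, 4)]))
        idx = np.round(lines * n / fs).astype(int)
        return spec[idx].sum()

    cands = np.arange(20.0, 60.0, 0.05)
    scores = np.array([comb_score(c) for c in cands])
    f_int_est = cands[np.argmax(scores)]
    ```

    Only the correct interval aligns all comb teeth with spectral peaks, so the score has a sharp maximum near f_I; the F-test replaces this ad-hoc magnitude score with a statistically principled line test.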

  4. Enhanced attosecond pulse generation in the vacuum ultraviolet using a two-colour driving field for high harmonic generation

    NASA Astrophysics Data System (ADS)

    Matía-Hernando, P.; Witting, T.; Walke, D. J.; Marangos, J. P.; Tisch, J. W. G.

    2018-03-01

    High-harmonic radiation in the extreme ultraviolet and soft X-ray spectral regions can be used to generate attosecond pulses and to obtain structural and dynamic information in atoms and molecules. However, these sources typically suffer from a limited photon flux. An additional issue at lower photon energies is the appearance of satellites in the time domain, stemming from insufficient temporal gating and the spectral filtering required for the isolation of attosecond pulses. Such satellites limit the temporal resolution. The use of multi-colour driving fields has been proven to enhance the harmonic yield and provide additional control, using the relative delays between the different spectral components for waveform shaping. We describe here a two-colour high-harmonic source that combines a few-cycle near-infrared pulse with a multi-cycle second harmonic pulse, with both relative phase and carrier-envelope phase stabilization. We observe strong modulations in the harmonic flux, and present simulations and experimental results supporting the suppression of satellites in sub-femtosecond pulses at 20 eV compared to the single colour field case, an important requirement for attosecond pump-probe measurements.

  5. The determination of the most applicable PWV model for Turkey

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Gurbuz, Gokhan; Mekik, Cetin

    2016-07-01

    Water vapor is a key component for modelling the atmosphere and for climate studies. Moreover, long-term water vapor changes can be an independent source for detecting climate change. Since Global Navigation Satellite Systems (GNSS) use microwaves passing through the atmosphere, atmospheric effects can be modeled with high accuracy. Tropospheric effects on GNSS signals are estimated with the zenith total delay (ZTD) parameter, which is the sum of the zenith hydrostatic delay (ZHD) and the zenith wet delay (ZWD). The first component can be obtained from meteorological observations with high accuracy; the second component can be computed by subtracting ZHD from ZTD (ZWD = ZTD − ZHD). Afterwards, the weighted mean temperature (Tm) or the conversion factor (Q) is used for the conversion between the precipitable water vapor (PWV) and ZWD. The parameters Tm and Q are derived from the analysis of radiosonde stations' profile observations. Numerous Q and Tm models have been developed for individual radiosonde stations, radiosonde station groups, countries, and global fields, such as the Bevis Tm model and Emardson and Derks' Q models. Accordingly, PWV models (Tm and Q models) for Turkey have been developed using a year of radiosonde data (2011) from 8 radiosonde stations. In this study, the models developed are tested by comparing PWV_GNSS, computed by applying the Tm and Q models to the ZTD estimates derived with the Bernese and GAMIT/GLOBK software at GNSS stations established at Istanbul and Ankara, with PWV_RS from the collocated radiosonde stations, from October 2013 to December 2014, using data obtained from a project (no. 112Y350) supported by the Scientific and Technological Research Council of Turkey (TUBITAK). The comparison results show that PWV_GNSS and PWV_RS are highly correlated (86% for Ankara and 90% for Istanbul). Thus, the most applicable model for Turkey and the accuracy of GNSS meteorology are investigated. 
In addition, the Tm model was applied to the ZTD estimates of 20 TUSAGA-Active (CORS-TR) stations between 38.0°-42.0° northern latitude and 28.0°-34.0° eastern longitude in Turkey, and PWV was computed. The ZTD estimates of these stations were computed using Bernese GNSS Software v5.0 for the period from June 2013 to June 2014. Prior to the PWV estimation, meteorological parameters for these stations (temperature, pressure and humidity) were derived by applying spherical harmonics modelling and interpolation to the corresponding parameters measured by meteorological stations surrounding the TUSAGA-Active stations. The spherical harmonics modelling and interpolation yield precisions of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. The PWV of the selected TUSAGA-Active stations was then estimated.
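    The ZTD-to-PWV chain described above can be sketched with textbook constants (the Saastamoinen hydrostatic delay and the Bevis Tm model); the Turkey-specific Tm/Q models developed in the study would replace the Bevis line, and all numeric inputs below are invented:

    ```python
    import math

    def zhd_saastamoinen(p_hpa, lat_rad, h_m):
        """Zenith hydrostatic delay in metres (Saastamoinen model)."""
        return 0.0022768 * p_hpa / (1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 2.8e-7 * h_m)

    def pwv_from_ztd(ztd_m, p_hpa, ts_k, lat_rad, h_m):
        zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_m)   # ZWD = ZTD - ZHD
        tm = 70.2 + 0.72 * ts_k                                # Bevis weighted mean temperature (K)
        k2p, k3 = 0.221, 3739.0                                # refractivity constants, K/Pa and K^2/Pa
        q = 1.0e6 / (1000.0 * 461.5 * (k3 / tm + k2p))         # dimensionless conversion factor (~0.15)
        return q * zwd                                         # PWV in metres

    # Invented station values: ZTD 2.40 m, 1013 hPa, 288 K, latitude 40 deg, height 100 m
    pwv_mm = 1000.0 * pwv_from_ztd(2.40, 1013.0, 288.0, math.radians(40.0), 100.0)
    ```

    For typical mid-latitude conditions the conversion factor Q comes out near 0.15, i.e. roughly 1 mm of PWV per 6.5 mm of wet delay.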

  6. The Development of a Degree 360 Expansion of the Dynamic Ocean Topography of the POCM_4B Global Circulation Model

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1998-01-01

    This paper documents the development of a degree 360 expansion of the dynamic ocean topography (DOT) of the POCM_4B ocean circulation model. The principles and software that led to the final model are described. A key principle was the development of interpolated DOT values into land areas to avoid discontinuities at or near the land/ocean interface. The power spectrum of the POCM_4B model is also presented, with comparisons made between orthonormal (ON) and spherical harmonic (SH) magnitudes to degree 24. A merged file of ON- and SH-computed degree variances is proposed for applications where the DOT power spectrum from low to high (360) degrees is needed.

  7. Uncertainty relation for the discrete Fourier transform.

    PubMed

    Massar, Serge; Spindel, Philippe

    2008-05-16

    We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^{iφ}VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.

  8. FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging

    NASA Astrophysics Data System (ADS)

    Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus

    2013-03-01

    We have developed a Swept-Source Optical Coherence Tomography (Ss-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The Ss-OCT system simultaneously acquires OCT and k-clock reference signals at 500MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time, live-streaming of B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and 3rd-party tools.
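    The remap-then-FFT stage can be sketched in a few lines: resample the fringe onto a uniform wavenumber grid (here with np.interp, a stand-in for the FPGA's linear interpolation) and take the FFT. The sweep nonlinearity and reflector depth below are invented:

    ```python
    import numpy as np

    n = 2048
    t = np.linspace(0.0, 1.0, n)
    # Hypothetical nonlinear sweep: wavenumber vs. time, as a k-clock would report it
    k = 2.0 * np.pi * 200.0 * (t + 0.3 * t ** 2) / 1.3
    z = 0.1                                     # single reflector depth (arb. units)
    fringe = np.cos(k * z)                      # raw OCT fringe, chirped in time

    # k-space linearization: resample onto a uniform wavenumber grid, then FFT
    k_lin = np.linspace(k[0], k[-1], n)
    fringe_lin = np.interp(k_lin, k, fringe)
    spec_lin = np.abs(np.fft.rfft(fringe_lin))
    spec_raw = np.abs(np.fft.rfft(fringe))

    depth_bin = int(np.argmax(spec_lin[1:])) + 1   # A-scan peak after remapping
    ```

    After linearization the reflector collapses to essentially a single FFT bin (near bin 20 for these parameters), whereas the chirped raw fringe smears its energy across several bins and yields a much weaker peak.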

  9. A method for measuring the nonlinear response in dielectric spectroscopy through third harmonics detection.

    PubMed

    Thibierge, C; L'Hôte, D; Ladieu, F; Tourbot, R

    2008-10-01

    We present a high-sensitivity method for measuring the nonlinear dielectric susceptibility of an insulating material at finite frequency. It has been developed for the study of dynamic heterogeneities in supercooled liquids using dielectric spectroscopy at frequencies 0.05 Hz ≤ f ≤ 3×10⁴ Hz. It relies on the measurement of the third-harmonic component of the current flowing out of a capacitor. We first show that the nonlinearities of standard laboratory electronics (amplifiers and voltage sources) place limits on third-harmonic measurements that preclude reaching the level needed for our physical goal, a ratio of the third harmonic to the fundamental signal of about 10⁻⁷. Reaching such a sensitivity requires a method that removes the nonlinear contributions both of the measuring device (lock-in amplifier) and of the excitation voltage source. A bridge using two sources fulfills only the first of these two requirements, but it allows the nonlinearities of the sources to be measured. Our final method is based on a bridge with two plane capacitors characterized by different dielectric layer thicknesses. It eliminates the source and amplifier nonlinearities because, despite the strong frequency dependence of the capacitor impedance, the bridge is balanced at every frequency. We present the first measurements of the physical nonlinear response using our method. Two extensions of the method are suggested.
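    The core measurement, extracting a small third-harmonic component riding on a large fundamental, can be illustrated by a lock-in-style projection on a synthetic waveform; the 10⁻³ ratio used here is far larger than the 10⁻⁷ sensitivity targeted by the authors:

    ```python
    import numpy as np

    fs = 100_000.0
    t = np.arange(0.0, 0.1, 1.0 / fs)        # an integer number of periods
    f = 1000.0
    a1, a3 = 1.0, 1e-3                        # fundamental and third-harmonic amplitudes
    sig = a1 * np.sin(2 * np.pi * f * t) + a3 * np.sin(2 * np.pi * 3 * f * t)

    # Lock-in style projection onto the third harmonic over an integer period count;
    # orthogonality of the harmonics cancels the (much larger) fundamental exactly
    ref = np.sin(2 * np.pi * 3 * f * t)
    a3_est = 2.0 * np.mean(sig * ref)
    ```

    Over an integer number of periods the discrete sinusoids are exactly orthogonal, so the projection recovers a3 without contamination from the fundamental; in the real experiment the limiting contamination comes instead from the electronics' own nonlinearity, which is what the two-capacitor bridge cancels.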

  10. Two Different Squeeze Transformations

    NASA Technical Reports Server (NTRS)

    Han, D. (Editor); Kim, Y. S.

    1996-01-01

    Lorentz boosts are squeeze transformations. While these transformations are similar to those in squeezed states of light, they are fundamentally different from both physical and mathematical points of view. The difference is illustrated in terms of two coupled harmonic oscillators, and in terms of the covariant harmonic oscillator formalism.

  11. Generation of real-time mode high-resolution water vapor fields from GPS observations

    NASA Astrophysics Data System (ADS)

    Yu, Chen; Penna, Nigel T.; Li, Zhenhong

    2017-02-01

    Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.
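    The decoupling idea, separating an elevation-dependent trend from a turbulent residual before interpolating, can be sketched as follows. The linear height trend and inverse-distance weighting are simplifying assumptions, not the paper's iterative tropospheric decomposition model, and the station data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic stations: positions (km), heights (m), and a toy ZTD field (m)
    # containing an elevation trend plus a smooth "turbulent" component
    xy = rng.uniform(0.0, 150.0, (41, 2))
    h = rng.uniform(0.0, 2000.0, 41)
    ztd = 2.4 - 2.5e-4 * h + 0.01 * np.sin(xy[:, 0] / 30.0)

    # Step 1: remove the elevation-dependent component by a least-squares fit
    A = np.c_[np.ones_like(h), h]
    coef, *_ = np.linalg.lstsq(A, ztd, rcond=None)
    resid = ztd - A @ coef

    # Step 2: interpolate the residual by inverse-distance weighting (IDW)
    def idw(p, pts, vals, eps=1e-9):
        w = 1.0 / (np.linalg.norm(pts - p, axis=1) + eps) ** 2
        return (w * vals).sum() / w.sum()

    # Step 3: recombine trend and residual at the target point and height
    def ztd_at(p, h_target):
        return coef[0] + coef[1] * h_target + idw(p, xy, resid)
    ```

    Because the elevation trend is restored at the target height rather than interpolated horizontally, a grid point in a valley and one on a ridge get appropriately different delays even when their horizontal neighbours are the same stations.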

  12. Acoustic manipulation of active spherical carriers: Generation of negative radiation force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajabi, Majid, E-mail: majid_rajabi@iust.ac.ir; Mojahed, Alireza

    2016-09-15

    This paper examines theoretically a novel mechanism for generating a negative (pulling) radiation force for acoustic manipulation of spherical carriers equipped with piezoelectric actuators on their inner surface. In this mechanism, the spherical particle is handled by common plane progressive monochromatic acoustic waves instead of zero-/higher-order Bessel beams or a standing-wave field. The handling strategy is based on applying a spatially uniform harmonic electrical voltage at the piezoelectric actuator with the same frequency as the handling acoustic waves, in order to change the radiation force effect from repulsive (away from the source) to attractive (toward the source). This study may be considered a starting point for the development of contact-free precise handling and entrapment technology for active carriers, which are essential in many engineering and medicine applications.

  13. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  14. The analysis of composite laminated beams using a 2D interpolating meshless technique

    NASA Astrophysics Data System (ADS)

    Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.

    2018-02-01

    Laminated composite materials are widely used in engineering constructions. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolation meshless techniques available in the literature: the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations governing the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with the results obtained from a commercial finite element software package. The results show the efficiency and accuracy of the proposed numerical technique.

  15. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique for quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighborhood, first-order interpolation, and the original PCHIP, are used to compare against the performance of the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method for estimating SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
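    The zero-offset idea behind this family of SNR estimators can be sketched as follows: white noise inflates the autocorrelation only at lag 0, so extrapolating nearby lags back to lag 0 recovers the noise-free value. A quadratic fit stands in for PCHIP here, and the 1D signal and noise level are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000
    x = np.arange(n)
    signal = np.sin(2 * np.pi * 0.01 * x)          # smooth "image row" stand-in
    noisy = signal + 0.3 * rng.standard_normal(n)  # true SNR = 0.5 / 0.09 ~ 5.6

    def acf(y, lags):
        """Sample autocorrelation of y at the given lags."""
        y = y - y.mean()
        return np.array([np.mean(y[:n - k] * y[k:]) for k in lags])

    # r(0) = signal power + noise power; the noise spike is absent at lags >= 1,
    # so extrapolating lags 1..5 back to 0 estimates the noise-free value
    lags = np.arange(1, 6)
    r = acf(noisy, lags)
    r0 = acf(noisy, [0])[0]
    r0_hat = np.polyval(np.polyfit(lags, r, 2), 0.0)

    snr = r0_hat / (r0 - r0_hat)                   # linear SNR estimate
    ```

    The quality of the extrapolation to lag 0 is exactly what PCHIP (and the adaptive tuning of ATPCHIP) is meant to improve over nearest-neighbour or first-order interpolation.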

  16. Direct Validation of the Wall Interference Correction System of the Ames 11-Foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Boone, Alan R.

    2003-01-01

    Data from the test of a large semispan model were used to perform a direct validation of a wall interference correction system for a transonic slotted-wall wind tunnel. First, different sets of uncorrected aerodynamic coefficients were generated by physically changing the boundary condition of the test section walls. Then, wall interference corrections were computed and applied to all data points. Finally, an interpolation of the corrected aerodynamic coefficients was performed. This interpolation ensured that the corrected Mach number of a given run would be constant. Overall, the agreement between corresponding interpolated lift, drag, and pitching moment coefficient sets was very good. Buoyancy corrections were also investigated. These studies showed that the accuracy goal of one drag count can only be achieved if reliable estimates of the wall-interference-induced buoyancy correction are available during a test.
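    The final interpolation step, reporting every run at one constant corrected Mach number, amounts to a simple 1D interpolation of each coefficient over the slightly varying corrected Mach numbers; all values below are invented for illustration:

    ```python
    import numpy as np

    # Corrected drag coefficients for one run, tabulated at the slightly varying
    # corrected Mach numbers of the individual data points (invented values)
    mach_corr = np.array([0.797, 0.801, 0.803, 0.806])
    cd_corr = np.array([0.0210, 0.0214, 0.0217, 0.0221])

    # Re-interpolate so the run is reported at one constant corrected Mach number
    mach_ref = 0.800
    cd_ref = float(np.interp(mach_ref, mach_corr, cd_corr))
    ```

    Repeating this for each wall configuration puts all coefficient sets on a common Mach number, so any remaining disagreement isolates the residual wall-interference error rather than a Mach mismatch.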

  17. Electromagnetic Compatibility of Devices on Hybrid Electromagnetic Components

    NASA Astrophysics Data System (ADS)

    Konesev, S. G.; Khazieva, R. T.; Kirillov, R. V.; Gainutdinov, I. Z.; Kondratyev, E. Y.

    2018-01-01

    There is a general tendency to reduce the weight, dimensions, and consumption of conductive and electrical insulating materials, and to increase the reliability and energy efficiency, of electrical devices. In recent years, designers have been actively developing devices based on hybrid electromagnetic components (HEMC), such as inductive-capacitive converters (ICC), voltage pulse generators (VPG), secondary power supplies (SPS), capacitive storage devices (CSD), and induction heating systems (IHS). The power supplies of such electrical devices contain, as a rule, higher-frequency links and operate in switched (pulsed) modes, which leads to increased electromagnetic interference (EMI). Nonlinear and periodic (impulse) loads, a non-sinusoidal (pulsating) electromotive force, and nonlinearity of the internal parameters of the source and of the input circuits of consumers distort the shape of the input voltage and lead to increased thermal losses from higher-harmonic currents, aging of the insulation, greater weight of the power supply filter units, and resonance at higher harmonics. The most important task is to analyze the operation of electrotechnical devices based on HEMC from the standpoint of the EMI they create and to assess their electromagnetic compatibility (EMC) with power supply systems (PSS). The article presents the results of research on the operation of an IHS whose secondary power supply is based on a half-bridge autonomous inverter, the switching circuit of which is implemented as a HEMC called the «multifunctional integrated electromagnetic component» (MIEC).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbach, Jana; Yeung, Mark; Eckner, Erich

    Surface high-harmonic generation in the relativistic regime is demonstrated as a source of extreme ultraviolet (XUV) pulses with extended operation time. Relativistic high-harmonic generation is driven by a frequency-doubled high-power Ti:Sapphire laser focused to a peak intensity of 3·10¹⁹ W/cm² onto spooling tapes. We demonstrate continuous operation over up to one hour of runtime at a repetition rate of 1 Hz. Harmonic spectra ranging from 20 eV to 70 eV (62 nm to 18 nm) were consecutively recorded by an XUV spectrometer. An average XUV pulse energy in the µJ range is measured. With the presented setup, relativistic surface high-harmonic generation becomes a powerful source of coherent XUV pulses that might enable applications in, e.g., attosecond laser physics and the seeding of free-electron lasers, once the laser issues causing 80% pulse-energy fluctuations are overcome.

  19. Multi-MW K-Band Harmonic Multiplier: RF Source For High-Gradient Accelerator R & D

    NASA Astrophysics Data System (ADS)

    Solyak, N. A.; Yakovlev, V. P.; Kazakov, S. Yu.; Hirshfield, J. L.

    2009-01-01

    A preliminary design is presented for a two-cavity harmonic multiplier, intended as a high-power RF source for use in experiments aimed at developing high-gradient structures for a future collider. The harmonic multiplier is to produce power at selected frequencies in K-band (18-26.5 GHz) using as an RF driver an XK-5 S-band klystron (2.856 GHz). The device is to be built with a TE111 rotating mode input cavity and interchangeable output cavities running in the TEn11 rotating mode, with n = 7,8,9 at 19.992, 22.848, and 25.704 GHz. An example for a 7th harmonic multiplier is described, using a 250 kV, 20 A injected laminar electron beam; with 10 MW of S-band drive power, 4.7 MW of 20-GHz output power is predicted. Details are described of the magnetic circuit, cavities, and output coupler.

  20. Near-threshold harmonics from a femtosecond enhancement cavity-based EUV source: effects of multiple quantum pathways on spatial profile and yield.

    PubMed

    Hammond, T J; Mills, Arthur K; Jones, David J

    2011-12-05

    We investigate the photon flux and far-field spatial profiles for near-threshold harmonics produced with a 66 MHz femtosecond enhancement cavity-based EUV source operating in the tight-focus regime. The effects of multiple quantum pathways on the far-field spatial profile and harmonic yield show a strong dependence on gas jet dynamics, particularly nozzle diameter and position. This simple system, consisting of only a 700 mW Ti:Sapphire oscillator and an enhancement cavity, produces harmonics up to 20 eV with an estimated 30-100 μW of power (intracavity) and >1 μW (measured) of power spectrally resolved and out-coupled from the cavity. While this power is already suitable for applications, a quantum mechanical model of the system indicates substantial improvements should be possible with technical upgrades.

  1. FPGA Techniques Based New Hybrid Modulation Strategies for Voltage Source Inverters

    PubMed Central

    Sudha, L. U.; Baskaran, J.; Elankurisil, S. A.

    2015-01-01

    This paper corroborates three different hybrid modulation strategies suitable for a single-phase voltage source inverter. The proposed method is formulated using fundamental switching and carrier-based pulse width modulation methods. Its main goal is to optimize a specific performance criterion, such as minimization of the total harmonic distortion (THD), lower-order harmonics, switching losses, and heat losses. Thus, the harmonic pollution in the power system is reduced and the power quality is augmented, with a better harmonic profile for a target fundamental output voltage. The proposed modulation strategies are simulated in MATLAB r2010a and implemented in a Xilinx Spartan 3E-500 FG 320 FPGA processor. The feasibility of these modulation strategies is authenticated through simulation and experimental results. PMID:25821852
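    The THD criterion minimized above can be computed directly from the harmonic amplitudes of an inverter output waveform. The sketch below is illustrative only, not the paper's FPGA implementation: it estimates the harmonic amplitudes of an ideal square wave with a naive DFT and reports the THD relative to the fundamental.

```python
import math

def harmonic_amplitudes(samples, max_harmonic):
    """Amplitude of each harmonic via a naive DFT over one sampled period."""
    n = len(samples)
    amps = []
    for k in range(1, max_harmonic + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        amps.append(2.0 * math.hypot(re, im) / n)
    return amps

def thd(amps):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental."""
    return math.sqrt(sum(a * a for a in amps[1:])) / amps[0]

# One period of an ideal +/-1 square wave (50% duty cycle), a stand-in for an
# unfiltered inverter output.
N = 2048
square = [1.0 if i < N // 2 else -1.0 for i in range(N)]
amps = harmonic_amplitudes(square, 49)
print(round(thd(amps), 3))  # fundamental-normalized THD, harmonics up to the 49th
```

    A real modulation study would apply the same measure to the simulated PWM waveform instead of the square wave used here.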

  2. Blind separation of overlapping partials in harmonic musical notes using amplitude and phase reconstruction

    NASA Astrophysics Data System (ADS)

    de León, Jesús Ponce; Beltrán, José Ramón

    2012-12-01

    In this study, a new method of blind audio source separation (BASS) of monaural musical harmonic notes is presented. The input (mixed notes) signal is processed using a flexible analysis and synthesis algorithm (complex wavelet additive synthesis, CWAS), which is based on the complex continuous wavelet transform. When the harmonics from two or more sources overlap in a certain frequency band (or group of bands), a new technique based on amplitude similarity criteria is used to obtain an approximation to the original partial information. The aim is to show that the CWAS algorithm can be a powerful tool in BASS. Compared with other existing techniques, the main advantages of the proposed algorithm are its accuracy in instantaneous phase estimation, its synthesis capability, and that the only input information needed is the mixed signal itself. A set of synthetically mixed monaural isolated notes has been analyzed using this method in eight different experiments: the same instrument playing two notes within the same octave and two harmonically related notes (5th and 12th intervals), two different musical instruments playing 5th and 12th intervals, two different instruments playing non-harmonic notes, major and minor chords played by the same musical instrument, three different instruments playing non-harmonically related notes, and finally the mixture of an inharmonic instrument (piano) and a harmonic instrument. The results obtained show the strength of the technique.

  3. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when comparable numerical accuracy is required. - Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effect of the quadrature rule on the mass and stiffness matrices is examined. • The cubature points always have positive integration weights. • The method is free from the inversion of a wide-bandwidth mass matrix. • The accuracy of the TSEM is improved by about one order of magnitude.

  4. Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment

    NASA Astrophysics Data System (ADS)

    Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.

    2016-07-01

    The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
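    The grid-plus-interpolation strategy described above can be illustrated with a much simpler scheme than the paper's tricubic splines. The following sketch (invented data; trilinear rather than tricubic interpolation, so derivatives are not continuous) shows the core idea: precompute field values on a regular 3D grid, then blend the eight grid points surrounding a query position.

```python
import math

def trilinear(grid, origin, spacing, p):
    """Trilinear interpolation of scalar values stored on a regular 3D grid.

    grid[i][j][k] holds the value at origin + (i, j, k) * spacing; the query
    point p must lie strictly inside the gridded volume.
    """
    idx, frac = [], []
    for d in range(3):
        t = (p[d] - origin[d]) / spacing[d]
        i = int(math.floor(t))
        idx.append(i)
        frac.append(t - i)  # fractional position inside the cell, in [0, 1)
    i, j, k = idx
    fx, fy, fz = frac
    val = 0.0
    for di in (0, 1):           # blend the 8 corners of the enclosing cell
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                val += w * grid[i + di][j + dj][k + dk]
    return val

# Sample a field that is affine in x, y, z; trilinear interpolation is exact there.
f = lambda x, y, z: 2 * x - y + 3 * z
grid = [[[f(i, j, k) for k in range(4)] for j in range(4)] for i in range(4)]
print(trilinear(grid, (0, 0, 0), (1, 1, 1), (1.25, 2.5, 0.75)))  # → 2.25
```

    A production scheme like the one in the paper would store spline coefficients (and field derivatives) per cell and lay them out for cache-friendly SIMD access, but the cell-lookup-and-blend structure is the same.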

  5. Optical system design of a speckle-free ultrafast Red-Green-Blue (RGB) source based on angularly multiplexed second harmonic generation from a TZDW source

    NASA Astrophysics Data System (ADS)

    Yao, Yuhong; Knox, Wayne H.

    2015-03-01

    We report the optical system design of a novel speckle-free ultrafast Red-Green-Blue (RGB) source based on angularly multiplexed simultaneous second harmonic generation from the efficiently generated Stokes and anti-Stokes pulses from a commercially available photonic crystal fiber (PCF) with two zero dispersion wavelengths (TZDW). We describe the optimized configuration of the TZDW fiber source which supports excitations of dual narrow-band pulses with peak wavelengths at 850 nm, 1260 nm and spectral bandwidths of 23 nm, 26 nm, respectively within 12 cm of commercially available TZDW PCF. The conversion efficiencies are as high as 44% and 33% from the pump source (a custom-built Yb:fiber master-oscillator-power-amplifier). As a result of the nonlinear dynamics of propagation, the dual pulses preserve their ultrashort pulse width (with measured autocorrelation traces of 200 fs and 227 fs,) which eliminates the need for dispersion compensation before harmonic generation. With proper optical design of the free-space harmonic generation system, we achieve milli-Watt power level red, green and blue pulses at 630 nm, 517 nm and 425 nm. Having much broader spectral bandwidths compared to picosecond RGB laser sources, the source is inherently speckle-free due to the ultra-short coherence length (<37 μm) while still maintaining an excellent color rendering capability with >99.4% excitation purities of the three primaries, leading to the coverage of 192% NTSC color gamut (CIE 1976). The reported RGB source features a very simple system geometry, its potential for power scaling is discussed with currently available technologies.

  6. From Points to Patterns - Functional Relations between Groundwater Connectivity and Catchment-scale Streamflow Response

    NASA Astrophysics Data System (ADS)

    Rinderer, M.; McGlynn, B. L.; van Meerveld, I. H. J.

    2016-12-01

    Groundwater measurements can help us to improve our understanding of runoff generation at the catchment-scale but typically only provide point-scale data. These measurements, therefore, need to be interpolated or upscaled in order to obtain information about catchment scale groundwater dynamics. Our approach used data from 51 spatially distributed groundwater monitoring sites in a Swiss pre-alpine catchment and time series clustering to define six groundwater response clusters. Each of the clusters was characterized by distinctly different site characteristics (i.e., Topographic Wetness Index and curvature), which allowed us to assign all unmonitored locations to one of these clusters. Time series modeling and the definition of response thresholds (i.e., the depth of more transmissive soil layers) allowed us to derive maps of the spatial distribution of active (i.e., responding) locations across the catchment at 15 min time intervals. Connectivity between all active locations and the stream network was determined using a graph theory approach. The extent of the active and connected areas differed during events and suggests that not all active locations directly contributed to streamflow. Gate keeper sites prevented connectivity of upslope locations to the channel network. Streamflow dynamics at the catchment outlet were correlated to catchment average connectivity dynamics. In a sensitivity analysis we tested six different groundwater levels for a site to be considered "active", which showed that the definition of the threshold did not significantly influence the conclusions drawn from our analysis. This study is the first one to derive patterns of groundwater dynamics based on empirical data (rather than interpolation) and provides insight into the spatio-temporal evolution of the active and connected runoff source areas at the catchment-scale that is critical to understanding the dynamics of water quantity and quality in streams.
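    The graph-theory connectivity step described above can be sketched with a breadth-first search: a site counts as connected only if a chain of active neighbors links it to the stream network. The node layout and "gate keeper" below are invented for illustration and are not the study's data.

```python
from collections import deque

def connected_to_stream(active, stream, neighbors):
    """Return the subset of active nodes linked to the stream network through
    chains of active neighbors (a graph-theory view of hydrologic connectivity)."""
    frontier = deque(n for n in stream if n in active)
    seen = set(frontier)
    while frontier:
        node = frontier.popleft()
        for nxt in neighbors[node]:
            if nxt in active and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy hillslope: nodes 0-5 in a chain, with node 0 on the stream; node 5 is
# active but isolated behind the inactive "gate keeper" node 4.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
active = {0, 1, 2, 3, 5}
print(sorted(connected_to_stream(active, {0}, neighbors)))  # → [0, 1, 2, 3]
```

    Node 5 is active but not connected, which is exactly the distinction the study draws between active and contributing source areas.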

  7. A Comparative Study of Three Spatial Interpolation Methodologies for the Analysis of Air Pollution Concentrations in Athens, Greece

    NASA Astrophysics Data System (ADS)

    Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios

    2010-01-01

    Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology depends on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens, Greece, are used. The Nearest Neighbor and Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant using the correlation coefficient, scatter diagrams and statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
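    Of the three methodologies compared, the Inverse Distance Weighted scheme is the simplest to state in code. The minimal sketch below (hypothetical station values, not the Athens data) estimates a concentration at an unmonitored point as a distance-weighted average of the observations.

```python
def idw(stations, p, power=2):
    """Inverse Distance Weighted estimate at point p from (x, y, value) stations."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - p[0]) ** 2 + (y - p[1]) ** 2
        if d2 == 0.0:
            return v  # exactly on a station: return its observation
        w = 1.0 / d2 ** (power / 2.0)  # weight falls off as 1 / distance^power
        num += w * v
        den += w
    return num / den

# Hypothetical monthly-mean concentrations at three monitoring stations.
stations = [(0.0, 0.0, 40.0), (10.0, 0.0, 60.0), (0.0, 10.0, 50.0)]
print(round(idw(stations, (2.0, 2.0)), 2))  # → 42.86
```

    The estimate is pulled toward the nearest station, which is why IDW behaves poorly when the monitoring network is sparse relative to the scale of the pollution gradients.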

  8. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using the forward ray tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for the range computation of sanitary zones based on a ray tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.

  9. An assessment of mean-field mixed semiclassical approaches: Equilibrium populations and algorithm stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellonzi, Nicole; Jain, Amber; Subotnik, Joseph E.

    2016-04-21

    We study several recent mean-field semiclassical dynamics methods, focusing on the ability to recover detailed balance for long time (equilibrium) populations. We focus especially on Miller and Cotton's [J. Phys. Chem. A 117, 7190 (2013)] suggestion to include both zero point electronic energy and windowing on top of Ehrenfest dynamics. We investigate three regimes: harmonic surfaces with weak electronic coupling, harmonic surfaces with strong electronic coupling, and anharmonic surfaces with weak electronic coupling. In most cases, recent additions to Ehrenfest dynamics are a strong improvement upon mean-field theory. However, for methods that include zero point electronic energy, we show that anharmonic potential energy surfaces often lead to numerical instabilities, as caused by negative populations and forces. We also show that, though the effect of negative forces can appear hidden in harmonic systems, the resulting equilibrium limits do remain dependent on any windowing and zero point energy parameters.

  10. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital-sign monitoring capabilities, but none of them remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
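    The Pisarenko step can be illustrated for the single-sinusoid case, where the harmonic frequency follows in closed form from the lag-1 and lag-2 sample autocorrelations. This is a generic textbook sketch, not the paper's tracker; it assumes one dominant harmonic and a positive lag-1 autocorrelation.

```python
import math

def pisarenko_frequency(x):
    """Single-sinusoid Pisarenko estimate: frequency (rad/sample) from the
    lag-1 and lag-2 sample autocorrelations r1 and r2."""
    n = len(x)
    r1 = sum(x[i] * x[i + 1] for i in range(n - 1)) / n
    r2 = sum(x[i] * x[i + 2] for i in range(n - 2)) / n
    # Closed-form root of the 3x3 Pisarenko eigenproblem (assumes r1 > 0).
    cos_w = (r2 + math.sqrt(r2 * r2 + 8.0 * r1 * r1)) / (4.0 * r1)
    return math.acos(max(-1.0, min(1.0, cos_w)))

# Noiseless test tone at 0.3 rad/sample; the estimate should land close to 0.3.
x = [math.cos(0.3 * i) for i in range(2000)]
print(round(pisarenko_frequency(x), 3))
```

    In a breathing application the samples would be the tracked vertical displacement of a chest point, and the estimate would be refreshed on a sliding window to follow rate changes.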

  11. Next generation data harmonization

    NASA Astrophysics Data System (ADS)

    Armstrong, Chandler; Brown, Ryan M.; Chaves, Jillian; Czerniejewski, Adam; Del Vecchio, Justin; Perkins, Timothy K.; Rudnicki, Ron; Tauer, Greg

    2015-05-01

    Analysts are presented with a never-ending stream of data sources. Often, the subsets of data sources needed to solve a problem are easily identified, but the process of aligning the data sets is time consuming. Many semantic technologies, however, allow for fast harmonization of data to overcome these problems. These include ontologies that serve as alignment targets, visual tools and natural language processing that generate semantic graphs in terms of the ontologies, and analytics that leverage these graphs. This research reviews a developed prototype that employs all these approaches to perform analysis across disparate data sources documenting violent, extremist events.

  12. Statistical studies of Pc 3-5 pulsations and their relevance for possible source mechanisms of ULF waves

    NASA Technical Reports Server (NTRS)

    Anderson, Brian J.

    1993-01-01

    A number of statistical studies using spacecraft data have been made of ULF waves in the magnetosphere. These studies provide an overview of ULF pulsation activity for r = 5-15 R(E) and allow an assessment of likely source mechanisms. In this review pulsations are categorized into five general types: compressional Pc 5, poloidal Pc 4, toroidal harmonics, toroidal Pc 5 (fundamental mode), and incoherent noise. The occurrence distributions and/or distributions of wave power of the different types suggest that compressional Pc 5 and poloidal Pc 4 derive their energy locally, most likely from energetic protons. The toroidal pulsations, both harmonic and fundamental mode, appear to be driven by an energy source outside the magnetopause - directly upstream in the sheath and solar wind for harmonics and the flanks for fundamentals. Incoherent pulsations are a prominent pulsation type but from their occurrence distribution alone it is unclear what their dominant energy source may be.

  13. Multimode Directional Coupler for Utilization of Harmonic Frequencies from TWTAs

    NASA Technical Reports Server (NTRS)

    Simmons, Rainee N.; Wintucky, Edwin G.

    2013-01-01

    A novel waveguide multimode directional coupler (MDC) intended for the measurement and potential utilization of the second and higher order harmonic frequencies from high-power traveling wave tube amplifiers (TWTAs) has been successfully designed, fabricated, and tested. The design is based on the characteristic multiple propagation modes of the electric and magnetic field components of electromagnetic waves in a rectangular waveguide. The purpose was to create a rugged, easily constructed, more efficient waveguide-based MDC for extraction and exploitation of the second harmonic signal from the RF output of high-power TWTs used for space communications. The application would be a satellite-based beacon source needed for Q-band and V/W-band atmospheric propagation studies. The MDC could function as a CW narrow-band source or as a wideband source for the study of atmospheric group delay effects on high-data-rate links. The MDC is fabricated from two sections of waveguide, a primary one for the fundamental frequency and a secondary waveguide for the second harmonic, that are joined together such that the second harmonic higher order modes are selectively coupled via precision-machined slots for propagation in the secondary waveguide. In the TWTA output waveguide port, both the fundamental and the second harmonic signals are present. These signals propagate in the output waveguide as the dominant and higher order modes, respectively. By including an appropriate mode-selective waveguide directional coupler, such as the MDC presented here, at the output of the TWTA, the power at the second harmonic can be sampled and amplified to the power level needed for atmospheric propagation studies. 
The important conclusions from the preliminary test results for the multimode directional coupler are: (1) the second harmonic (Ka-band) can be measured and effectively separated from the fundamental (Ku-band) with no coupling of the latter, (2) power losses in the fundamental frequency are negligible, and (3) the power level of the extracted second harmonic is sufficient for further amplification to power levels needed for practical applications. It was also demonstrated that third order and potentially higher order harmonics are measurable with this device. The design is frequency agnostic, and with the appropriate choice of waveguides, is easily scaled to higher frequency TWTs. The MDC has the same function but with a number of important advantages over the conventional diplexer.

  14. An analysis of methods for gravity determination and their utilization for the calculation of geopotential numbers in the Slovak national levelling network

    NASA Astrophysics Data System (ADS)

    Majkráková, Miroslava; Papčo, Juraj; Zahorec, Pavol; Droščák, Branislav; Mikuška, Ján; Marušiak, Ivan

    2016-09-01

    The vertical reference system in the Slovak Republic is realized by the National Levelling Network (NLN). The normal heights according to Molodensky were introduced as reference heights in the NLN in 1957. Since then, the gravity correction necessary to determine the reference heights in the NLN has been obtained by interpolation from either the simple or the complete Bouguer anomalies. We refer to this method as the "original". Currently, the method based on geopotential numbers is the preferred way to unify the European levelling networks. The core of this article is an analysis of different ways of determining gravity and their application to the calculation of geopotential numbers at the points of the NLN. The first method is based on the calculation of gravity at levelling points from the interpolated values of the complete Bouguer anomaly using the CBA2G_SK software. The second method is based on the global geopotential model EGM2008 improved by the Residual Terrain Model (RTM) approach. The calculated gravity is used to determine the normal heights according to Molodensky along parts of the levelling lines around the EVRF2007 datum point EH-V. Pitelová (UELN-1905325) and the levelling line of the 2nd order NLN to Kráľova hoľa Mountain (the highest point measured by levelling). The results of our analysis illustrate that the method based on interpolated gravity values is the better choice for gravity determination when measured gravity is unavailable. It was shown that this method is suitable for the determination of geopotential numbers and reference heights in the Slovak national levelling network at points where gravity is not observed directly. We also demonstrated the necessity of using the precise RTM for the refinement of results derived solely from the EGM2008.

  15. Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collision from ALICE

    NASA Astrophysics Data System (ADS)

    Saleh, Mohammad; Alice Collaboration

    2017-11-01

    Azimuthally differential femtoscopic measurements, being sensitive to the spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the nature and dynamics of the system evolution. While the HBT radii modulations relative to the second harmonic event plane reflect mostly the spatial geometry of the source, the third harmonic results are mostly defined by the velocity fields [S. A. Voloshin, J. Phys. G 38 (2011) 124097, arXiv:1106.5830, doi:10.1088/0954-3899/38/12/124097]. Radii variations with respect to the third harmonic event plane unambiguously signal a collective expansion and anisotropy in the flow fields. Event shape engineering (ESE) is a technique proposed to select events corresponding to a particular shape. Azimuthally differential HBT combined with ESE allows for a detailed analysis of the relation between initial geometry, anisotropic flow and the deformation of the source shape. We present azimuthally differential pion femtoscopy with respect to the second and third harmonic event planes as a function of the pion transverse momentum for different collision centralities in Pb-Pb collisions at √s_NN = 2.76 TeV. All these results are compared to existing models. The effects of selecting events with high elliptic or triangular flow are also presented.

  16. Supporting spatial data harmonization process with the use of ontologies and Semantic Web technologies

    NASA Astrophysics Data System (ADS)

    Strzelecki, M.; Iwaniak, A.; Łukowicz, J.; Kaczmarek, I.

    2013-10-01

    Nowadays, spatial information is used not only by professionals but also by ordinary citizens in their daily activities. The Open Data initiative states that data should be freely and unreservedly available to all users; this also applies to spatial data. As spatial data becomes widely available, it is essential to publish it in a form that guarantees the possibility of integrating it with other, heterogeneous data sources. Interoperability is the ability to combine spatial data sets from different sources in a consistent way, as well as to provide access to them. Providing syntactic interoperability based on well-known data formats is relatively simple; providing semantic interoperability is not, owing to the multiple possible interpretations of the data. One of the issues connected with achieving interoperability is data harmonization. It is the process of providing access to spatial data in a representation that allows combining it with other harmonized data in a coherent way, using a common set of data product specifications. Spatial data harmonization is performed by creating reclassification and transformation rules (a mapping schema) for the source application schema. Creation of these rules is a very demanding task that requires wide domain knowledge and a detailed look into the application schemas. The paper focuses on proposing methods for supporting the data harmonization process through automated or supervised creation of mapping schemas with the use of ontologies, ontology matching methods and Semantic Web technologies.

  17. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good-quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions for bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
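    The boundary methods mentioned above are variants of transfinite (Coons) interpolation: interior grid points are blended from the bounding curves of the domain. The sketch below is a generic two-dimensional illustration, not GRID3D-v2 itself; the boundary functions are invented for the test, and stretching functions would simply replace the uniform parameters s and t.

```python
def coons_grid(bottom, top, left, right, ni, nj):
    """Transfinite (Coons) interpolation: fill a structured 2D grid from four
    boundary curves, each given as a function of a parameter in [0, 1]."""
    grid = []
    for i in range(ni):
        s = i / (ni - 1)
        row = []
        for j in range(nj):
            t = j / (nj - 1)
            pt = []
            for d in range(2):  # x and y components
                val = ((1 - t) * bottom(s)[d] + t * top(s)[d]
                       + (1 - s) * left(t)[d] + s * right(t)[d]
                       # subtract the doubly counted bilinear corner terms
                       - (1 - s) * (1 - t) * bottom(0)[d]
                       - (1 - s) * t * top(0)[d]
                       - s * (1 - t) * bottom(1)[d]
                       - s * t * top(1)[d])
                pt.append(val)
            row.append(tuple(pt))
        grid.append(row)
    return grid

# Unit-square boundaries; the interior then reduces to a uniform lattice.
bottom = lambda s: (s, 0.0)
top = lambda s: (s, 1.0)
left = lambda t: (0.0, t)
right = lambda t: (1.0, t)
g = coons_grid(bottom, top, left, right, 5, 5)
print(g[2][2])  # → (0.5, 0.5)
```

    With curved boundary functions the same formula bends the interior lattice to follow the domain shape, which is the essence of the algebraic grid-generation methods the abstract describes.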

  18. RF-Based Location Using Interpolation Functions to Reduce Fingerprint Mapping

    PubMed Central

    Ezpeleta, Santiago; Claver, José M.; Pérez-Solano, Juan J.; Martí, José V.

    2015-01-01

    Indoor RF-based localization using fingerprint mapping requires an initial training step, which represents a time-consuming process. This localization methodology needs a database composed of RSSI (Received Signal Strength Indicator) measures from the communication transceivers, taken at specific locations within the localization area. However, real-world localization environments are dynamic, and it is necessary to rebuild the fingerprint database whenever environmental changes are made. This paper explores the use of different interpolation functions to complete the fingerprint mapping needed to achieve the sought accuracy, thereby reducing the effort in the training step. Different distributions of test maps and reference points have also been evaluated, showing the validity of this proposal and the necessary trade-offs. The results reported show that the same or similar localization accuracy can be achieved even when only 50% of the initial fingerprint reference points are taken. PMID:26516862

  19. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  20. A climatically-derived global soil moisture data set for use in the GLAS atmospheric circulation model seasonal cycle experiment

    NASA Technical Reports Server (NTRS)

    Willmott, C. J.; Field, R. T.

    1984-01-01

    Algorithms for point interpolation and contouring on the surface of the sphere and in Cartesian two-space are developed from Shepard's (1968) well-known local search method. These mapping procedures are then used to investigate the errors which appear on small-scale climate maps as a result of the all-too-common practice of interpolating from irregularly spaced data points to the nodes of a regular lattice, and contouring, in Cartesian two-space. Using mean annual air temperatures, the temperature field over the western half of the northern hemisphere is estimated both on the sphere, assumed to be correct, and in Cartesian two-space. When the spherically- and Cartesian-approximated air temperature fields are mapped and compared, the magnitudes (as large as 5 C to 10 C) and distribution of the errors associated with the latter approach become apparent.
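
    A minimal sketch of Shepard-style inverse-distance interpolation evaluated with great-circle distances, i.e. the spherical variant the paper argues should replace plain Cartesian interpolation; the data points are hypothetical.

```python
import numpy as np

def gc_dist(lat1, lon1, lat2, lon2):
    """Great-circle (angular) distance; inputs in degrees, output in radians."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    c = np.sin(p1)*np.sin(p2) + np.cos(p1)*np.cos(p2)*np.cos(l1 - l2)
    return np.arccos(np.clip(c, -1.0, 1.0))

def shepard_sphere(lats, lons, vals, qlat, qlon, power=2.0, eps=1e-12):
    """Shepard (inverse-distance) estimate on the sphere."""
    d = gc_dist(lats, lons, qlat, qlon)
    if np.any(d < eps):                  # query coincides with a data point
        return float(vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(w @ vals / w.sum())

# Two hypothetical temperature observations on the equator.
lats = np.array([0.0, 0.0])
lons = np.array([0.0, 90.0])
vals = np.array([10.0, 20.0])
mid = shepard_sphere(lats, lons, vals, 0.0, 45.0)   # equidistant from both
```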

  1. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions on the shape functions and their derivatives can be satisfied, so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), since the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected as the time discretization scheme. Three selected numerical examples are presented to demonstrate the validity and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
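
    The thin plate spline interpolant underlying both the TPS and STPS approaches can be sketched as follows (a plain 2D TPS with an affine part, not the paper's penalty-function STPS; the points and values are illustrative).

```python
import numpy as np

def tps_kernel(r):
    """TPS radial kernel phi(r) = r^2 log r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz]**2 * np.log(r[nz])
    return out

def tps_fit(pts, vals):
    """Fit f(x,y) = sum_i c_i phi(|p - p_i|) + a0 + a1 x + a2 y."""
    n = len(pts)
    r = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.block([[tps_kernel(r), P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return coef[:n], coef[n:]

def tps_eval(pts, c, a, q):
    r = np.linalg.norm(q[:, None] - pts[None, :], axis=2)
    return tps_kernel(r) @ c + a[0] + q @ a[1:]

# Illustrative scattered nodes carrying a linear temperature field.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.3, 0.7]])
vals = pts[:, 0] + 2.0*pts[:, 1]
c, a = tps_fit(pts, vals)
mid = tps_eval(pts, c, a, np.array([[0.5, 0.5]]))
```

    Because of the affine part, the interpolant reproduces linear fields exactly, and the Kronecker delta property holds at the nodes.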

  2. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt_max and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
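
    The sin(x)/x (sinc) interpolation step can be sketched as band-limited reconstruction between sample instants; the 5 Hz test tone and sampling rate below are illustrative, not the electrogram data.

```python
import numpy as np

def sinc_interp(samples, t, fs):
    """Band-limited (sin x / x) reconstruction at arbitrary times t."""
    n = np.arange(len(samples))
    # x(t) = sum_n x[n] * sinc(fs*t - n); np.sinc includes the pi factor.
    return np.dot(np.sinc(fs * np.atleast_1d(t)[:, None] - n), samples)

fs = 100.0                           # sampling rate, Hz
ts = np.arange(100) / fs             # one second of samples
x = np.sin(2*np.pi*5.0*ts)           # 5 Hz tone, far below Nyquist
y = sinc_interp(x, [0.503], fs)      # value between two sample instants
```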

  3. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
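
    The analytic point location the authors exploit can be sketched with barycentric coordinates: a point lies inside a tetrahedron exactly when all four coordinates are non-negative, and the same coordinates give the linear interpolation weights. The tetrahedron and field below are illustrative.

```python
import numpy as np

def barycentric(tet, p):
    """Barycentric coordinates of point p in tetrahedron tet (4x3 array)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    lam = np.linalg.solve(T, np.asarray(p, float) - tet[0])
    return np.concatenate([[1.0 - lam.sum()], lam])

def contains(tet, p, tol=1e-12):
    """Analytic point-location test: all barycentric coordinates >= 0."""
    return bool(np.all(barycentric(tet, p) >= -tol))

def interp_linear(tet, vertex_vals, p):
    """Linear interpolation of vertex data at p (e.g. a velocity component)."""
    return float(barycentric(tet, p) @ vertex_vals)

tet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
b = barycentric(tet, [0.25, 0.25, 0.25])
```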

  4. Simulation of Pellet Ablation

    NASA Astrophysics Data System (ADS)

    Parks, P. B.; Ishizaki, Ryuichi

    2000-10-01

    In order to clarify the structure of the ablation flow, a 2D simulation is carried out with a fluid code solving the temporal evolution of the MHD equations. The code includes the electrostatic sheath effect at the cloud interface [P.B. Parks et al., Plasma Phys. Contr. Fusion 38, 571 (1996)]. An Eulerian cylindrical coordinate system (r, z) is used with a spherical pellet. The code uses the Cubic-Interpolated Pseudoparticle (CIP) method [H. Takewaki and T. Yabe, J. Comput. Phys. 70, 355 (1987)], which divides the fluid equations into non-advection and advection phases. The most essential element of the CIP method is the calculation of the advection phase. In this phase, a cubic interpolated spatial profile is shifted in space according to the total derivative equations, similarly to a particle scheme. Since the profile is interpolated using the value and the spatial derivative value at each grid point, there is no numerical oscillation in space, which often appears in conventional spline interpolation. A free boundary condition is used in the code. The possibility of a stationary shock will also be shown in the presentation, because the supersonic ablation flow across the magnetic field is impeded.
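
    A minimal 1D sketch of the CIP advection phase (assuming a positive advection speed and a periodic grid; not the 2D MHD code itself): the upwind cubic Hermite profile built from values and derivatives is shifted by -a*dt, and both u and its derivative are updated from it.

```python
import numpy as np

def cip_step(u, g, a, dt, dx):
    """One CIP step for u_t + a u_x = 0 (a > 0) on a periodic grid.

    u holds grid values, g the spatial derivatives; the upwind cubic
    Hermite profile is shifted by -a*dt, as in a particle scheme.
    """
    up, gp = np.roll(u, 1), np.roll(g, 1)     # upwind neighbours (i-1)
    D, X = -dx, -a*dt
    A = (g + gp)/D**2 + 2.0*(u - up)/D**3
    B = 3.0*(up - u)/D**2 - (2.0*g + gp)/D
    u_new = ((A*X + B)*X + g)*X + u            # cubic evaluated at X
    g_new = (3.0*A*X + 2.0*B)*X + g            # its derivative at X
    return u_new, g_new

# Advect a Gaussian by 5 cells and compare with the exact shift.
N, dx, a, dt = 100, 1.0, 1.0, 0.5
x = np.arange(N) * dx
u = np.exp(-0.5*((x - 30.0)/5.0)**2)
g = -(x - 30.0)/25.0 * u                       # analytic derivative
for _ in range(10):                            # 10 steps * a*dt = 5 cells
    u, g = cip_step(u, g, a, dt, dx)
exact = np.exp(-0.5*((x - 35.0)/5.0)**2)
```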

  5. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
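
    A simplified stand-in for the kernel regression idea (kernel ridge regression with an isotropic Gaussian kernel rather than the paper's SVR with Mahalanobis or autocorrelation kernels; the spatio-temporal samples are synthetic):

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Gaussian (RBF) Mercer kernel between row-wise sample matrices."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(axis=2)
    return np.exp(-0.5 * d2 / length**2)

def fit_predict(X, y, Xq, lam=1e-6, length=1.0):
    """Kernel ridge: alpha = (K + lam*I)^-1 y, then f(q) = k(q, X) @ alpha."""
    K = rbf_kernel(X, X, length)
    alpha = np.linalg.solve(K + lam*np.eye(len(X)), y)
    return rbf_kernel(Xq, X, length) @ alpha

# Toy samples: columns are (position along the river, time).
s, t = np.meshgrid(np.arange(0.0, 10.0, 2.0), np.arange(0.0, 10.0, 2.0))
X = np.column_stack([s.ravel(), t.ravel()])
y = np.sin(X[:, 0]) + np.cos(X[:, 1])      # stand-in water quality signal
recon = fit_predict(X, y, X)               # interpolated field at the samples
```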

  6. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  7. Accounting for irregular support in spatial interpolation - analysing the effect of using alternative distance measures

    NASA Astrophysics Data System (ADS)

    Skøien, J. O.; Gottschalk, L.; Leblois, E.

    2009-04-01

    Whereas geostatistical and objective methods have mostly been developed for observations with point support or a regular support, runoff-related data, for example, can be assumed to have an irregular support in space, and sometimes also a temporal support. The correlations between observations, and between observations and the prediction location, are found through an integration of a point variogram or point correlation function, a method known as regularisation. While this is a relatively simple method for observations with equal and regular support, it can be computationally demanding if the observations have irregular support. With improved computer speed, solving such integrations has become easier, but there can still be numerical problems that are not easily solved even with high-resolution computations. This can be a particular problem in the hydrological sciences, where catchments are overlapping, the correlations are high, and small numerical errors can give ill-posed covariance matrices. The problem increases with an increasing number of spatial and/or temporal dimensions. Gottschalk [1993a; 1993b] suggested replacing the integration by a Taylor expansion, reducing the computation time considerably and also expecting fewer numerical problems with the covariance matrices. In practice, the integrated correlation/semivariance between observations is replaced by correlations/semivariances computed using the so-called Ghosh distance. Although Gottschalk and collaborators have used the Ghosh distance in other papers as well [Sauquet et al., 2000a; Sauquet et al., 2000b], the properties of the simplification have not been examined in detail. Hence, we here analyse the replacement of the integration by the use of Ghosh distances, both in terms of the ability to reproduce regularised semivariogram and correlation values, and in terms of the influence on the final interpolated maps.
    Comparisons are performed both for real observations with a support (hydrological data) and for more hypothetical observations with regular supports, where analytical expressions for the regularised semivariances/correlations can in some cases be derived. The results indicate that the simplification is useful for spatial interpolation when the support of the observations has to be taken into account. The difference in semivariogram or correlation value between the simplified method and the full integration is limited at short distances, increasing for larger distances. However, this is to some degree taken into account when fitting a model for the point process, so that the results after interpolation are less affected by the simplification. The method is of particular use if computation time is of importance, e.g. in the case of real-time mapping procedures. References: Gottschalk, L. (1993a) Correlation and covariance of runoff, Stochastic Hydrology and Hydraulics, 7, 85-101. Gottschalk, L. (1993b) Interpolation of runoff applying objective methods, Stochastic Hydrology and Hydraulics, 7, 269-281. Sauquet, E., L. Gottschalk, and E. Leblois (2000a) Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme, Hydrological Sciences Journal, 45, 799-815. Sauquet, E., I. Krasovskaia, and E. Leblois (2000b) Mapping mean monthly runoff pattern using EOF analysis, Hydrology and Earth System Sciences, 4, 79-93.
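
    The regularisation step being approximated can be sketched numerically: the correlation between two observations with support is the average of the point correlation over all point pairs of the two supports. Here 1D interval supports and an exponential point model are assumed for illustration; the Ghosh-distance shortcut itself is not reproduced.

```python
import numpy as np

def point_corr(h, L=10.0):
    """Point-scale correlation function (exponential model)."""
    return np.exp(-np.abs(h) / L)

def regularised_corr(a, b, n=200, L=10.0):
    """Average point correlation between two 1D interval supports a and b."""
    xs = np.linspace(a[0], a[1], n)
    ys = np.linspace(b[0], b[1], n)
    return float(point_corr(xs[:, None] - ys[None, :], L).mean())

same = regularised_corr((0.0, 4.0), (0.0, 4.0))   # support vs itself: < 1
far = regularised_corr((0.0, 1.0), (50.0, 51.0))  # far apart: ~ point value
```

    Note the characteristic effect of support: the regularised correlation of a support with itself is strictly below one, while at large separations it approaches the point correlation of the centre distance.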

  8. Extrapolation of Functions of Many Variables by Means of Metric Analysis

    NASA Astrophysics Data System (ADS)

    Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David

    2018-02-01

    The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space. It is required to restore the value of the function at points outside the domain D. The paper proposes a fundamentally new method for extrapolating functions of several variables, based on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, using metric analysis, the function is interpolated at points of the domain D belonging to the segment of the straight line connecting the center of the domain D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along the above straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.

  9. Time domain simulation of nonlinear acoustic beams generated by rectangular pistons with application to harmonic imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xinmai; Cleveland, Robin O.

    2005-01-01

    A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency squared f2 dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging.

  10. Time domain simulation of nonlinear acoustic beams generated by rectangular pistons with application to harmonic imaging.

    PubMed

    Yang, Xinmai; Cleveland, Robin O

    2005-01-01

    A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency squared f2 dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging.

  11. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
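
    The convergence issue can be illustrated on a toy linear analogue of the integral equation, x = b + A x: the Neumann iteration converges only while the spectral radius of A stays below one, which is why strong contrast sources call for linearization combined with a Krylov solver such as Bi-CGSTAB. The matrix and data below are random stand-ins, not the INCS operators.

```python
import numpy as np

def neumann_solve(A, b, iters=60):
    """Neumann iteration for x = b + A x, i.e. (I - A)^-1 b as a series."""
    x = b.copy()
    for _ in range(iters):
        x = b + A @ x
    return x

rng = np.random.default_rng(0)
n = 20
b = rng.standard_normal(n)
A_weak = 0.05 * rng.standard_normal((n, n))   # weak "contrast": converges
x_direct = np.linalg.solve(np.eye(n) - A_weak, b)
x_neumann = neumann_solve(A_weak, b)
# For a strong contrast (spectral radius > 1) the same iteration diverges,
# motivating the linearized scheme with a Bi-CGSTAB-type solver.
```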

  12. The moving-least-squares-particle hydrodynamics method (MLSPH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
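
    A 1D sketch of the moving-least-squares interpolants at the heart of the method (a linear basis with a Gaussian weight; the node data are illustrative). With a linear basis, MLS reproduces linear fields exactly, which is the "adds up correctly everywhere" property referred to above.

```python
import numpy as np

def mls_eval(xi, ui, xq, h=2.0):
    """1D moving-least-squares fit: linear basis, Gaussian weight of width h."""
    out = np.empty(len(xq))
    for k, x in enumerate(xq):
        w = np.exp(-((xi - x) / h)**2)
        P = np.column_stack([np.ones_like(xi), xi - x])  # shifted linear basis
        A = P.T @ (w[:, None] * P)
        rhs = P.T @ (w * ui)
        out[k] = np.linalg.solve(A, rhs)[0]              # fit value at x
    return out

xi = np.linspace(0.0, 10.0, 11)      # particle positions
ui = 2.0*xi + 1.0                    # a linear field carried by the particles
xq = np.array([0.3, 5.7, 9.9])
vals = mls_eval(xi, ui, xq)
```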

  13. Spectral control of high harmonics from relativistic plasmas using bicircular fields

    NASA Astrophysics Data System (ADS)

    Chen, Zi-Yu

    2018-04-01

    We introduce two-color counterrotating circularly polarized laser fields as a way to spectrally control high harmonic generation (HHG) from relativistic plasma mirrors. Through particle-in-cell simulations, we show that only a selected group of harmonic orders can appear owing to the symmetry of the laser fields and the related conservation laws. By adjusting the intensity ratio of the two driving field components, we demonstrate the overall HHG efficiency, the relative intensity of allowed neighboring harmonic orders, and that the polarization state of the harmonic source can be tuned. The HHG efficiency of this scheme can be as high as that driven by a linearly polarized laser field.
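
    The selection rule can be checked on the field itself: for a counterrotating omega/2-omega pair, advancing time by a third of the fundamental period rotates the field by 2*pi/3, so any emission respecting this symmetry contains no harmonic orders that are multiples of three. Below, a cubic toy response stands in for the actual plasma dynamics, and the amplitudes are illustrative.

```python
import numpy as np

# Complex field E = Ex + i Ey of a two-color counterrotating (bicircular)
# pulse: a right-circular omega component plus a left-circular 2*omega one.
w, N = 1.0, 999
t = np.linspace(0.0, 2*np.pi/w, N, endpoint=False)
E = np.exp(1j*w*t) + 0.5*np.exp(-2j*w*t)

# Dynamical symmetry: advancing time by T/3 rotates the field by 2*pi/3.
shifted = np.roll(E, -N//3)
rotated = E * np.exp(2j*np.pi/3)

# Any response sharing this symmetry (here a toy cubic one, a ~ E |E|^2)
# emits only at orders 1, 2, 4, 5, ...; multiples of 3 are forbidden.
spec = np.fft.fft(E * np.abs(E)**2) / N
```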

  14. 14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...

  15. 14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...

  16. 14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...

  17. 14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...

  18. Determination of rotor harmonic blade loads from acoustic measurements

    NASA Technical Reports Server (NTRS)

    Kasper, P. K.

    1975-01-01

    The magnitude of discrete frequency sound radiated by a rotating blade is strongly influenced by the presence of a nonuniform distribution of aerodynamic forces over the rotor disk. An analytical development and experimental results are provided for a technique by which harmonic blade loads are derived from acoustic measurements. The technique relates, on a one-to-one basis, the discrete frequency sound harmonic amplitudes measured at a point on the axis of rotation to the blade-load harmonic amplitudes. This technique was applied to acoustic data from two helicopter types and from a series of test results using the NASA-Langley Research Center rotor test facility. The inferred blade-load harmonics for the cases considered tended to follow an inverse power law relationship with harmonic blade-load number. Empirical curve fits to the data showed the harmonic fall-off rate to be in the range of 6 to 9 dB per octave of harmonic order. These empirical relationships were subsequently used as input data in a compatible far field rotational noise prediction model. A comparison between predicted and measured off-axis sound harmonic levels is provided for the experimental cases considered.
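
    The quoted fall-off range connects directly to the inverse power law: an amplitude law k^(-p) falls off at 20*p*log10(2) ≈ 6.02*p dB per octave, so 6 to 9 dB per octave corresponds to p between roughly 1.0 and 1.5. The amplitudes below are synthetic, not the measured blade loads.

```python
import numpy as np

k = np.arange(1, 33)        # harmonic blade-load orders
amps = k**(-1.2)            # synthetic inverse-power-law amplitudes (p = 1.2)

# Recover the exponent from a straight-line fit in log-log coordinates,
# then convert it to a fall-off rate in dB per octave of harmonic order.
p_fit = -np.polyfit(np.log(k), np.log(amps), 1)[0]
db_per_octave = 20.0 * p_fit * np.log10(2.0)
```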

  19. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    NASA Astrophysics Data System (ADS)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including the soil sciences, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remainder for validation of the predicted results. The relationship between validation points and the corresponding estimated values at the same locations was examined by linear regression analysis.
    Eight prediction maps generated from the two interpolation methods for the soil organic matter, phosphorus, lime and boron parameters were examined based on R2 and RMSE values. The outcomes indicate that RBF performed better than LPI in predicting lime, organic matter and boron, whereas LPI showed better results for predicting phosphorus.
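
    The validation metrics used here can be sketched directly (toy observed/estimated values at held-out points, not the soil data):

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error between observed and estimated values."""
    return float(np.sqrt(np.mean((np.asarray(y, float) - np.asarray(yhat, float))**2)))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat)**2)
    ss_tot = np.sum((y - y.mean())**2)
    return float(1.0 - ss_res / ss_tot)

# Validation-style check: observed vs interpolated values at held-out points.
obs = [1.0, 2.0, 3.0, 4.0]
est = [1.0, 2.0, 3.0, 5.0]
```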

  20. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads affect the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.

  1. A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1994-01-01

    The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data is smoothed using a least-squares-error fit of the data to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariate polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
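
    The edge-interpolation step can be sketched for one triangle: a contour level crosses an edge where the values at its endpoints bracket the level, with the crossing located by linear interpolation (Python is used here for illustration; the original program is FORTRAN IV).

```python
def edge_crossings(tri_xy, tri_vals, level):
    """Points where a contour level crosses the edges of one triangle.

    tri_xy: three (x, y) vertices; tri_vals: data values at those vertices.
    """
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        va, vb = tri_vals[i], tri_vals[j]
        if (va - level) * (vb - level) < 0:       # level strictly between
            t = (level - va) / (vb - va)          # linear interpolation
            (xa, ya), (xb, yb) = tri_xy[i], tri_xy[j]
            pts.append((xa + t*(xb - xa), ya + t*(yb - ya)))
    return pts

# One triangle of the mesh with the 0.5 contour passing through it.
crossings = edge_crossings([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                           [0.0, 1.0, 1.0], 0.5)
```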

  2. Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.

    PubMed

    Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S

    1996-03-01

    An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of the voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGers. The same points of maximum spike and the voltage distributions at the same electrodes were mapped by computerized spline interpolation, and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with similar maxima and distributions as the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.

  3. Cosmological rotating black holes in five-dimensional fake supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nozawa, Masato; Maeda, Kei-ichi; Waseda Research Institute for Science and Engineering, Okubo 3-4-1, Shinjuku, Tokyo 169-8555

    2011-01-15

In a recent series of papers, we found an arbitrary-dimensional, time-evolving, and spatially inhomogeneous solution in Einstein-Maxwell-dilaton gravity with particular couplings. Similar to the supersymmetric case, the solution can be arbitrarily superposed in spite of its nontrivial time dependence, since the metric is specified by a set of harmonic functions. When each harmonic has a single point source at the center, the solution describes a spherically symmetric black hole with regular Killing horizons, and the spacetime approaches the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology asymptotically. We show in this paper that in 5 dimensions this equilibrium condition traces back to the first-order 'Killing spinor' equation in 'fake supergravity' coupled to arbitrary U(1) gauge fields and scalars. We present a five-dimensional, asymptotically FLRW, rotating black-hole solution admitting a nontrivial 'Killing spinor', which is a spinning generalization of our previous solution. We argue that the solution admits nondegenerate and rotating Killing horizons, in contrast with the supersymmetric solutions. It is shown that the present pseudo-supersymmetric solution admits closed timelike curves around the central singularities. When only one harmonic is time-dependent, the solution oxidizes to 11 dimensions and realizes dynamically intersecting M2/M2/M2-branes in a rotating Kasner universe. Kaluza-Klein-type black holes are also discussed.

  4. Tunable strength saddle-point contacts impact on quantum rings transmission

    NASA Astrophysics Data System (ADS)

    González, J. J.; Diago-Cisneros, L.

    2016-09-01

A particular subject of investigation is the role of several saddle-point quantum point contact (QPC) parameters in the scattering properties of an Aharonov-Bohm-Aharonov-Casher quantum ring (QR) under Rashba-type spin-orbit interaction. We discuss the interplay of the conductance with the confinement strengths and height of the QPC, which yields new and tunable harmonic and non-harmonic patterns as these constriction parameters are manipulated. This phenomenology may be useful for implementing a novel way to modulate spin interference effects in semiconducting QRs, providing an appealing test platform for spintronics applications.

  5. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    NASA Astrophysics Data System (ADS)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and interpolation procedures are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while NN gave the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method performed second best according to streamflow predictions at the five gauges in the calibration period (01/01/2007–31/12/2011) and four gauges during the validation period (01/01/2012–30/06/2014). However, NN produced the worst prediction at the outlet of the catchment in the validation period, indicating low robustness. While IDW exhibited the best performance in the study catchment in terms of accuracy, robustness and efficiency, more general recommendations on the selection of rainfall interpolation methods remain to be explored.
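A minimal sketch of the IDW estimator and the leave-one-out cross-validation used to compare interpolation methods; the power parameter and helper names are assumptions, not the study's exact configuration.

```python
import math

def idw(sites, values, query, power=2.0):
    """Inverse distance weighted estimate at a query point (power 2 is
    a common default; the study's setting is not given)."""
    num = den = 0.0
    for (x, y), v in zip(sites, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # exact hit on a gauge
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def loo_rmse(sites, values, power=2.0):
    """Leave-one-out cross-validation RMSE: drop each gauge in turn,
    re-estimate it from the rest, and accumulate the errors."""
    errs = []
    for i in range(len(sites)):
        rest = [s for j, s in enumerate(sites) if j != i]
        rvals = [v for j, v in enumerate(values) if j != i]
        errs.append(idw(rest, rvals, sites[i], power) - values[i])
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

The same cross-validation loop applies unchanged to any of the four interpolators, which is what makes it a fair ranking device.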

  7. Assessment and modeling of the groundwater hydrogeochemical quality parameters via geostatistical approaches

    NASA Astrophysics Data System (ADS)

    Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad

    2018-03-01

Geostatistical methods are among the advanced techniques used for interpolation of groundwater quality data, and their results help decision makers adopt suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used to evaluate groundwater quality parameters. Seven main quality parameters (i.e. total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na+), total hardness (TH), chloride (Cl-) and sulfate (SO4 2-)) were analyzed and interpreted by statistical and geostatistical methods. After data normalization by the Nscore method in WinGslib software, variography was carried out to characterize the spatial correlation structure, and experimental variograms were plotted with GS+ software. The best theoretical model was then fitted to each variogram based on the minimum residual sum of squares (RSS). Cross-validation was used to determine the accuracy of the estimated data. Eventually, estimation maps of groundwater quality were prepared in WinGslib, and an estimation variance map and an estimation error map were presented to evaluate the quality of the estimate at each point. Results showed that the kriging method is more accurate than traditional interpolation methods.
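The ordinary kriging step can be illustrated as follows: a fitted variogram model (here a spherical model, one of the standard choices offered by GS+) feeds the kriging system, whose solution gives the estimation weights. This is a simplified sketch, not the WinGslib implementation, and the parameter values are hypothetical.

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical variogram model gamma(h), one standard fitted model."""
    h = np.asarray(h, float)
    g = np.where(h >= rng, sill,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3))
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(sites, values, query, model):
    """Ordinary kriging estimate at one point: solve the system
    [Gamma 1; 1^T 0][w; mu] = [gamma0; 1] for the weights w."""
    sites = np.asarray(sites, float)
    values = np.asarray(values, float)
    n = len(sites)
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = model(d)       # variogram between data sites
    A[n, n] = 0.0              # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = model(np.linalg.norm(sites - np.asarray(query, float), axis=-1))
    sol = np.linalg.solve(A, b)
    return float(sol[:n] @ values)
```

With a zero nugget the estimator is exact at the wells, which is why cross-validation (re-estimating each well from the others) is needed to judge accuracy.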

  8. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. Two sources of uncertainty in geometric discretization are addressed in this paper, both of which need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. The first is ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
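The convergence testing described for the dose-vs-depth interpolation can be sketched like this: refine the grid of shield thicknesses and track the maximum interpolation error until it stops changing appreciably. The dose curve below is a hypothetical smooth stand-in, not output from the paper's transport codes.

```python
import math

def dose(depth):
    """Hypothetical smooth dose-vs-depth curve (illustrative only)."""
    return math.exp(-0.5 * depth) * (1.0 + 0.3 * math.sin(depth))

def max_interp_error(n_knots, lo=0.0, hi=10.0, n_test=1000):
    """Max abs error of piecewise-linear interpolation of dose() built
    from n_knots equally spaced shield thicknesses."""
    xs = [lo + (hi - lo) * i / (n_knots - 1) for i in range(n_knots)]
    ys = [dose(x) for x in xs]
    err = 0.0
    for j in range(n_test + 1):
        x = lo + (hi - lo) * j / n_test
        # locate the containing interval, then interpolate linearly
        k = min(int((x - lo) / (hi - lo) * (n_knots - 1)), n_knots - 2)
        t = (x - xs[k]) / (xs[k + 1] - xs[k])
        est = ys[k] + t * (ys[k + 1] - ys[k])
        err = max(err, abs(est - dose(x)))
    return err

# Convergence test: refine the thickness grid and watch the error shrink.
errors = {n: max_interp_error(n) for n in (6, 11, 21, 41)}
```

For a smooth curve the error of linear interpolation falls roughly quadratically with grid spacing, so a short refinement sequence reveals how many thicknesses suffice.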

  9. On the importance of preserving the harmonics and neighboring partials prior to vocoder processing: implications for cochlear implants.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2010-01-01

    Pre-processing based noise-reduction algorithms used for cochlear implants (CIs) can sometimes introduce distortions which are carried through the vocoder stages of CI processing. While the background noise may be notably suppressed, the harmonic structure and/or spectral envelope of the signal may be distorted. The present study investigates the potential of preserving the signal's harmonic structure in voiced segments (e.g., vowels) as a means of alleviating the negative effects of pre-processing. The hypothesis tested is that preserving the harmonic structure of the signal is crucial for subsequent vocoder processing. The implications of preserving either the main harmonic components occurring at multiples of F0 or the main harmonics along with adjacent partials are investigated. This is done by first pre-processing noisy speech with a conventional noise-reduction algorithm, regenerating the harmonics, and vocoder processing the stimuli with eight channels of stimulation in steady speech-shaped noise. Results indicated that preserving the main low-frequency harmonics (spanning 1 or 3 kHz) alone was not beneficial. Preserving, however, the harmonic structure of the stimulus, i.e., the main harmonics along with the adjacent partials, was found to be critically important and provided substantial improvements (41 percentage points) in intelligibility.
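The "main harmonic components occurring at multiples of F0" can be illustrated with a toy harmonic-complex synthesizer; the 1/k amplitude roll-off is an assumption for illustration, not the study's regeneration algorithm.

```python
import math

def harmonic_complex(f0, n_harmonics, fs, dur, amps=None):
    """Synthesize a voiced-segment stand-in: sinusoids at the harmonic
    frequencies k*f0, k = 1..n_harmonics. The default 1/k amplitude
    roll-off is an illustrative assumption."""
    n = int(fs * dur)
    if amps is None:
        amps = [1.0 / k for k in range(1, n_harmonics + 1)]
    return [sum(a * math.sin(2.0 * math.pi * (k + 1) * f0 * t / fs)
                for k, a in enumerate(amps))
            for t in range(n)]
```

Because every component frequency is an integer multiple of F0, the resulting waveform is exactly periodic at the fundamental period fs/f0 samples.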

  10. Data harmonization and federated analysis of population-based studies: the BioSHaRE project

    PubMed Central

    2013-01-01

Background Individual-level data pooling of large population-based studies across research centres in international research projects faces many hurdles. The BioSHaRE (Biobank Standardisation and Harmonisation for Research Excellence in the European Union) project aims to address these issues by building a collaborative group of investigators and developing tools for data harmonization, database integration and federated data analyses. Methods Eight population-based studies in six European countries were recruited to participate in the BioSHaRE project. Through workshops, teleconferences and electronic communications, participating investigators identified a set of 96 variables targeted for harmonization to answer research questions of interest. Using each study’s questionnaires, standard operating procedures, and data dictionaries, harmonization potential was assessed. Whenever harmonization was deemed possible, processing algorithms were developed and implemented in an open-source software infrastructure to transform study-specific data into the target (i.e. harmonized) format. Harmonized datasets located on servers at each research centre across Europe were interconnected through a federated database system to perform statistical analyses. Results Retrospective harmonization led to the generation of common-format variables for 73% of matches considered (96 targeted variables across 8 studies). Authenticated investigators can now perform complex statistical analyses of harmonized datasets stored on distributed servers without actually sharing individual-level data, using the DataSHIELD method. Conclusion New Internet-based networking technologies and database management systems are providing the means to support collaborative, multi-center research in an efficient and secure manner. 
The results from this pilot project show that, given a strong collaborative relationship between participating studies, it is possible to seamlessly co-analyse internationally harmonized research databases while allowing each study to retain full control over individual-level data. We encourage additional collaborative research networks in epidemiology, public health, and the social sciences to make use of the open source tools presented herein. PMID:24257327

  11. High harmonic terahertz confocal gyrotron with nonuniform electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Wenjie; Guan, Xiaotong; Yan, Yang

    2016-01-15

A harmonic confocal gyrotron with a nonuniform electron beam is proposed in this paper as a route to a compact, high-power terahertz radiation source. A 0.56 THz third-harmonic confocal gyrotron with a dual-arc-section nonuniform electron beam has been designed and investigated. The studies show that the confocal cavity has an extremely low mode density, which is a great advantage for operation at high harmonics. A nonuniform electron beam is an approach to improving the output power and interaction efficiency of a confocal gyrotron. A dual-arc-beam magnetron injection gun for the designed confocal gyrotron has been developed and is presented in this paper.

  12. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    NASA Technical Reports Server (NTRS)

    Allred, Joel

    2012-01-01

Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.

  13. Harmonization activities of Noklus - a quality improvement organization for point-of-care laboratory examinations.

    PubMed

    Stavelin, Anne; Sandberg, Sverre

    2018-05-16

Noklus is a non-profit quality improvement organization that works to improve all elements of the total testing process. The aim is to ensure that all medical laboratory examinations are ordered, performed and interpreted correctly and in accordance with the patients' needs for investigation, treatment and follow-up. For 25 years, Noklus has focused on point-of-care (POC) testing in primary healthcare laboratories and has more than 3100 voluntary participants. The Noklus quality system uses different tools to obtain harmonization and improvement: (1) external quality assessment for the pre-examination, examination and post-examination phases, to monitor the harmonization process and to identify areas that need improvement and harmonization; (2) manufacturer-independent evaluations of the analytical quality and user-friendliness of POC instruments; and (3) close interactions with and follow-up of the participants through site visits, courses, training and guidance. Noklus also recommends which tests should be performed in the different facilities, such as general practitioner offices, nursing homes and home care. About 400 courses with more than 6000 delegates are organized annually. In 2017, more than 21,000 e-learning programs were completed.

  14. New real-time algorithms for arbitrary, high precision function generation with applications to acoustic transducer excitation

    NASA Astrophysics Data System (ADS)

    Gaydecki, P.

    2009-07-01

A system is described for the design, downloading and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit, operating at 580 million multiply-accumulate operations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with that of waveforms generated using traditional analogue function generators; this analysis has confirmed that the new instrument has a consistently superior signal-to-noise ratio.
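The look-up table approach with interpolation can be sketched as follows: a sine table is read at a fractional phase index, and linear interpolation between adjacent entries reduces quantization distortion. The table size and the choice of a sine function are illustrative assumptions, not the instrument's firmware.

```python
import math

TABLE_SIZE = 256
SINE_LUT = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def lut_sine(phase):
    """One sample of a sine from a look-up table with linear
    interpolation between entries; phase is in cycles, [0, 1)."""
    pos = (phase % 1.0) * TABLE_SIZE
    i = int(pos)
    frac = pos - i
    a = SINE_LUT[i]
    b = SINE_LUT[(i + 1) % TABLE_SIZE]  # wrap around at the table end
    return a + frac * (b - a)
```

For a 256-entry table the interpolation error is bounded by roughly (2*pi/256)^2/8, i.e. well below 0.1%, which is why a modest table plus interpolation can beat a much larger table read without it.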

  15. Calibration of a high harmonic spectrometer by laser induced plasma emission.

    PubMed

    Farrell, J P; McFarland, B K; Bucksbaum, P H; Gühr, M

    2009-08-17

We present a method that allows for convenient switching between high harmonic generation (HHG) and accurate calibration of the vacuum ultraviolet (VUV) spectrometer used to analyze the harmonic spectrum. Accurate calibration of HHG spectra is becoming increasingly important for the determination of electronic structures. The wavelengths of the laser harmonics themselves depend on the details of the harmonic geometry and phase matching, making them unsuitable for calibration purposes. In our calibration mode, the target resides directly at the focus of the laser, thereby enhancing plasma emission and suppressing harmonic generation. In HHG mode, the source medium resides in front of or behind the focus, showing enhanced HHG and no plasma emission lines. We analyze the plasma emission and use it for a direct calibration of our HHG spectra. (c) 2009 Optical Society of America

  16. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method combines the finite difference weighted essentially non-oscillatory (FD-WENO) method in space with the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolation can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the nonlinear WENO weights is used in smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with those computed on the corresponding fine grid.
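The idea of using interpolating-wavelet detail coefficients as a singularity detector can be sketched with one lifting level: the detail at each odd-indexed sample is its deviation from the linear interpolation of its even-indexed neighbors, and large details flag regions needing refinement. This is a one-dimensional toy, not the paper's full multiresolution scheme.

```python
def interpolation_details(samples):
    """One lifting level of the interpolating wavelet transform: the
    detail at each odd-indexed sample is its deviation from the linear
    interpolation (average) of its even-indexed neighbors."""
    return [samples[2 * i + 1] - 0.5 * (samples[2 * i] + samples[2 * i + 2])
            for i in range((len(samples) - 1) // 2)]

def flag_singular(samples, tol):
    """Indices of odd samples whose detail magnitude exceeds tol --
    the regions an adaptive scheme would refine."""
    return [2 * i + 1 for i, d in enumerate(interpolation_details(samples))
            if abs(d) > tol]
```

On locally linear data the details vanish, so the detector concentrates degrees of freedom only near kinks and discontinuities.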

  17. Computer Analysis of 400 HZ Aircraft Electrical Generator Test Data.

    DTIC Science & Technology

    1980-06-01

Figures include: Data Acquisition System; Voltage Waveform with Data Points; Zero Crossover Interpolation. ...difference between successive positive-sloped zero crossovers of the waveform. However, the exact time of zero crossover is not known. This is because data sampling and the generator output are not synchronized, which means that the sampled data points do not generally correspond with an exact zero crossover.
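The zero-crossover interpolation referred to above can be sketched directly: since samples rarely land on a crossing, the crossing time is estimated by linear interpolation between the two bracketing samples, and the frequency follows from the spacing of successive positive-sloped crossings. This is a minimal sketch, not the report's FORTRAN code.

```python
def zero_crossings(samples, dt):
    """Times of positive-sloped zero crossovers, each located by linear
    interpolation between the two samples that bracket it."""
    times = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a < 0.0 <= b:
            # fraction of the interval at which the signal hits zero
            times.append((i + a / (a - b)) * dt)
    return times

def mean_frequency(samples, dt):
    """Frequency from the mean spacing of successive crossovers."""
    t = zero_crossings(samples, dt)
    return (len(t) - 1) / (t[-1] - t[0])
```

Averaging over many crossovers suppresses the residual error from the unsynchronized sampling.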

  18. Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights

    NASA Astrophysics Data System (ADS)

    Kwon, K. H.; Lee, D. W.

    2001-08-01

Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x^2/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold for k = 0, 1, 2, ..., r.
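Lagrange interpolation at the zeros of orthogonal polynomials can be illustrated with Gauss-Hermite nodes. Note that NumPy's `hermgauss` uses the physicists' weight e^(-x^2) rather than the paper's W(x) = e^(-x^2/2), and the barycentric evaluation below is a standard technique, not taken from the paper.

```python
import numpy as np

def lagrange_eval(nodes, fvals, x):
    """Evaluate the Lagrange interpolation polynomial through
    (nodes, fvals) at x via the barycentric formula."""
    nodes = np.asarray(nodes, float)
    fvals = np.asarray(fvals, float)
    # barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                  for j in range(len(nodes))])
    diff = x - nodes
    if np.any(diff == 0.0):            # x coincides with a node
        return float(fvals[int(np.argmin(np.abs(diff)))])
    q = w / diff
    return float(q @ fvals / q.sum())

# Nodal points: zeros of the degree-8 Hermite polynomial (weight e^{-x^2}).
nodes, _ = np.polynomial.hermite.hermgauss(8)
```

With 8 nodes the interpolant reproduces any polynomial of degree at most 7 exactly, which is the property the boundedness results build on.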

  19. Sandia Higher Order Elements (SHOE) v 0.5 alpha

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.

  20. Noise and Sonic Boom Impact Technology. BOOMAP2 Computer Program for Sonic Boom Research. Volume 3. Program Maintenance Manual

    DTIC Science & Technology

    1988-08-01

...the spline coefficients are calculated. 2.2.3.3 GETSEG: GETSEG divides the flight into segments where the points are above the critical Mach number. The first two and the last two points of a segment can be below critical, which is done in order to improve the spline interpolation. There can also be subcritical points in the track; however, there can be at most only 5.5 seconds between critical points. If there is a 4.5-second gap between data...

  1. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. 
Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
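The re-interpolation experiment behind the optimal-density estimates can be mimicked in miniature: sample a known activation field at a given density, re-interpolate it, and measure the error. The inverse-distance interpolant and the planar LAT field below are illustrative assumptions, not the study's method.

```python
import random

def idw_est(sites, vals, q, power=4.0):
    """Inverse-distance-weighted estimate at query q (power 4 keeps the
    estimate local; this interpolant is an illustrative assumption)."""
    num = den = 0.0
    for (x, y), v in zip(sites, vals):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def map_error(field, density, extent=4.0, n_test=200, seed=1):
    """Mean abs re-interpolation error for a map built from `density`
    random samples per cm^2 of an extent-by-extent cm sheet."""
    rnd = random.Random(seed)
    n = max(3, int(density * extent * extent))
    sites = [(rnd.uniform(0, extent), rnd.uniform(0, extent)) for _ in range(n)]
    vals = [field(x, y) for x, y in sites]
    err = 0.0
    for _ in range(n_test):
        q = (rnd.uniform(0, extent), rnd.uniform(0, extent))
        err += abs(idw_est(sites, vals, q) - field(*q))
    return err / n_test

# Hypothetical planar activation: LAT increases linearly across the sheet.
lat_field = lambda x, y: 10.0 * x + 4.0 * y
```

Sweeping `density` and looking for the knee of the error curve is the analogue of the "minimal further error reduction" criterion used in the study.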

  2. The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655

    NASA Technical Reports Server (NTRS)

    Bult, Peter

    2017-01-01

In this work, I report on the stochastic X-ray variability of the 340 Hz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations from the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 Hz range, with a total fractional variability of approximately 24% (rms) in the 0.4-3 keV energy band that increases to approximately 40% (rms) in the 3-10 keV band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) is discovered. The fundamental frequency of this harmonic pair is observed between 62 and 146 mHz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.

  3. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: To improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, the paper focuses on finding several fitting or interpolation methods to improve the data precision, so as to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and the neural network method are used in this paper to process Google Earth elevation data, and internal conformity, external conformity and the cross-correlation coefficient are used as evaluation indexes of the data processing effect. Results: There is no fitting residual at the fitting points when the V4 interpolation method is used; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but fits better where the elevation differences are larger. Because the neural network method yields a fitting model that is hard to control, the cubic polynomial surface fitting method should be the main method, with the neural network method as an auxiliary in cases of larger elevation differences. Conclusions: The cubic polynomial surface fitting method can obviously improve the data precision of Google Earth. After the precision improvement, the error of data in hilly terrain areas meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.
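    The cubic polynomial surface fitting step described above can be sketched as an ordinary least-squares fit of the ten cubic monomials to the elevation differences at control points. All coordinates and elevation differences below are synthetic illustrations, not data from the study.

```python
import numpy as np

def cubic_basis(x, y):
    """Design matrix of the 10 monomials of a cubic surface in (x, y)."""
    return np.column_stack([
        np.ones_like(x), x, y,
        x**2, x * y, y**2,
        x**3, x**2 * y, x * y**2, y**3,
    ])

def cubic_surface_fit(x, y, dz):
    """Least-squares coefficients of dz ~ f(x, y), f a cubic polynomial."""
    coeffs, *_ = np.linalg.lstsq(cubic_basis(x, y), dz, rcond=None)
    return coeffs

# Synthetic control points: elevation difference between surveyed
# elevations and Google Earth elevations (hypothetical values)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 60)
y = rng.uniform(0, 10, 60)
dz = 2.0 + 0.3 * x - 0.1 * y + 0.02 * x * y + 0.005 * x**3 \
     + rng.normal(0, 0.05, 60)

c = cubic_surface_fit(x, y, dz)
pred = cubic_basis(x, y) @ c
rmse = np.sqrt(np.mean((pred - dz) ** 2))  # internal-conformity analogue
```

    The corrected elevation at a new point would then be the Google Earth value minus the fitted difference surface evaluated there.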

  4. High-frequency harmonic imaging of the eye.

    PubMed

    Silverman, Ronald H; Coleman, D Jackson; Ketterling, Jeffrey A; Lizzi, Frederic L

    2005-01-01

    PURPOSE: Harmonic imaging has become a well-established technique for ultrasonic imaging at fundamental frequencies of 10 MHz or less. Ophthalmology has benefited from the use of fundamentals of 20 MHz to 50 MHz. Our aim was to explore the ability to generate harmonics for this frequency range, and to generate harmonic images of the eye. METHODS: The presence of harmonics was determined in both water and bovine vitreous propagation media by pulse/echo and hydrophone at a series of increasing excitation pulse intensities and frequencies. Hydrophone measurements were made at the focal point and in the near- and far-fields of 20 MHz and 40 MHz transducers. Harmonic images of the anterior segment of the rabbit eye were obtained by a combination of analog filtering and digital post-processing. RESULTS: Harmonics were generated nearly identically in both water and vitreous. Hydrophone measurements showed the maximum second harmonic to be -5 dB relative to the 35 MHz fundamental at the focus, while in pulse/echo the maximum harmonic amplitude was -15 dB relative to the fundamental. Harmonics were absent in the near-field, but present in the far-field. Harmonic images of the eye showed improved resolution. CONCLUSION: Harmonics can be readily generated at very high frequencies, and at power levels compliant with FDA guidelines for ophthalmology. This technique may yield further improvements to the already impressive resolutions obtainable in this frequency range. Improved imaging of the macular region, in particular, may provide significant improvements in diagnosis of retinal disease.

  5. High-frequency harmonic imaging of the eye

    NASA Astrophysics Data System (ADS)

    Silverman, Ronald H.; Coleman, D. Jackson; Ketterling, Jeffrey A.; Lizzi, Frederic L.

    2005-04-01

    Purpose: Harmonic imaging has become a well-established technique for ultrasonic imaging at fundamental frequencies of 10 MHz or less. Ophthalmology has benefited from the use of fundamentals of 20 MHz to 50 MHz. Our aim was to explore the ability to generate harmonics for this frequency range, and to generate harmonic images of the eye. Methods: The presence of harmonics was determined in both water and bovine vitreous propagation media by pulse/echo and hydrophone at a series of increasing excitation pulse intensities and frequencies. Hydrophone measurements were made at the focal point and in the near- and far-fields of 20 MHz and 40 MHz transducers. Harmonic images of the anterior segment of the rabbit eye were obtained by a combination of analog filtering and digital post-processing. Results: Harmonics were generated nearly identically in both water and vitreous. Hydrophone measurements showed the maximum second harmonic to be -5 dB relative to the 35 MHz fundamental at the focus, while in pulse/echo the maximum harmonic amplitude was -15 dB relative to the fundamental. Harmonics were absent in the near-field, but present in the far-field. Harmonic images of the eye showed improved resolution. Conclusion: Harmonics can be readily generated at very high frequencies, and at power levels compliant with FDA guidelines for ophthalmology. This technique may yield further improvements to the already impressive resolutions obtainable in this frequency range. Improved imaging of the macular region, in particular, may provide significant improvements in diagnosis of retinal disease.

  6. Auroral kilometric radiation: Wave modes, harmonic and source region electron density structures

    NASA Technical Reports Server (NTRS)

    Benson, R. F.

    1984-01-01

    A change from extraordinary (X) mode to ordinary (O) mode dominance is observed in the auroral kilometric radiation (AKR) detected on ISIS 1 topside sounder ionograms as the source region plasma-to-gyrofrequency ratio fN/fH varies from 0.1 to 1.3. The X and O mode AKR, Z mode (the slow branch of the X mode) and whistler (W) mode are also observed. The Z mode is typically slightly less intense than the O mode. The W mode is confined to frequencies less than fH/2, suggesting that it is the result of field-aligned ducted signals reaching the satellite from a source at lower altitudes. Harmonic AKR bands are commonly observed, and the 2nd harmonic appears to be due to propagating signals. The deduced fN/fH at the bottom of the AKR source region is always less than 0.4, and is typically less than 0.2 during the generation of X-mode AKR, but approaches 0.9 for O-mode AKR. No large density enhancements were observed within AKR source region density cavities. It is suggested that the observed intense AKR is cyclotron X-mode radiation rather than plasma-frequency O-mode radiation.

  7. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
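    As a point of comparison for the thin-plate spline baseline mentioned above, a minimal landmark-interpolating TPS warp can be sketched with SciPy's `RBFInterpolator`. The landmark coordinates below are hypothetical, and this is not the authors' registration method; it only illustrates interpolation with landmarks as hard constraints.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmark pairs on a reference and a target surface
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
dst = src + np.array([0.1, 0.0, 0.0])  # target displaced slightly in x

# Thin-plate spline warp: with zero smoothing the landmarks are interpolated
# exactly (hard constraints), unlike approximating schemes
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=0.0)

moved = warp(src)                          # landmarks reproduced exactly
interior = warp(np.array([[0.5, 0.5, 0.5]]))  # other vertices deform smoothly
```

    A flip-avoiding registration, as the paper proposes, would additionally screen such hard constraints and drop those that make nearby surface patches fold over.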

  8. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    PubMed Central

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-01-01

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886

  9. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.

    PubMed

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-05-16

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.
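    The difference coarray and its holes can be illustrated for a small co-prime array. The (M, N) = (3, 5) geometry below is a standard textbook configuration chosen for illustration, not one taken from the paper; the "holes" it exposes are exactly what the coarray interpolation discussed above fills in.

```python
import numpy as np

def coprime_positions(M, N):
    """Sensor positions (in half-wavelength units) of a co-prime array:
    one subarray at multiples of M, one at multiples of N."""
    sub1 = {M * n for n in range(N)}
    sub2 = {N * m for m in range(2 * M)}
    return sorted(sub1 | sub2)

def difference_coarray(pos):
    """All pairwise differences (virtual lags) of the physical positions."""
    return sorted({a - b for a in pos for b in pos})

M, N = 3, 5
pos = coprime_positions(M, N)
lags = difference_coarray(pos)

# Holes: integer lags missing inside the coarray span
lag_set = set(lags)
holes = [l for l in range(min(lags), max(lags) + 1) if l not in lag_set]
```

    For (3, 5) the physical array has 10 sensors, the coarray is contiguous over the central lags, and holes such as ±18 appear farther out, which is why subspace methods restricted to the contiguous segment lose degrees of freedom.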

  10. Analysis of rainfall distribution in Kelantan river basin, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Ros, Faizah; Tosaka, Hiroyuki

    2018-03-01

    Using rain gauges alone as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to the rainfall-runoff processes of distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the areas along the Kelantan river, so a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using the inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as an average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of the interpolated datasets was examined using cross-validation assessment.
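    The IDW scheme named above weights each gauge by an inverse power of its distance to the interpolation point. A minimal sketch with hypothetical gauge locations and rainfall values follows (the IDEW variant additionally weights by elevation difference, which is omitted here):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse-distance weighting: weights proportional to 1 / d**power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power
    return (w @ z_obs) / w.sum(axis=1)

# Four hypothetical rain gauges (x, y in km) and their daily rainfall (mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([12.0, 20.0, 8.0, 16.0])

centre = idw(gauges, rain, np.array([[5.0, 5.0]]))  # equidistant -> mean
at_gauge = idw(gauges, rain, gauges[:1])            # ~exact at a gauge
```

    Cross-validation of such a surface consists of leaving one gauge out, interpolating its value from the others, and comparing against the withheld observation.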

  11. On the need of mode interpolation for data-driven Galerkin models of a transient flow around a sphere

    NASA Astrophysics Data System (ADS)

    Stankiewicz, Witold; Morzyński, Marek; Kotecki, Krzysztof; Noack, Bernd R.

    2017-04-01

    We present a low-dimensional Galerkin model with state-dependent modes capturing linear and nonlinear dynamics. The departure point is a direct numerical simulation of the three-dimensional incompressible flow around a sphere at a Reynolds number of 400. This solution starts near the unstable steady Navier-Stokes solution and converges to a periodic limit cycle. The investigated Galerkin models are based on the dynamic mode decomposition (DMD) and derive the dynamical system from first principles, the Navier-Stokes equations. A DMD model with training data from the initial linear transient fails to predict the limit cycle. Conversely, a model from limit-cycle data underpredicts the initial growth rate by roughly a factor of 5. Key enablers for uniform accuracy throughout the transient are a continuous mode interpolation between both oscillatory fluctuations and the addition of a shift mode. This interpolated model is shown to capture both the transient growth of the oscillation and the limit cycle.
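    The DMD step underlying the models above can be sketched as an exact DMD of snapshot pairs. The toy two-dimensional linear system below has a known decaying-oscillation eigenvalue and is purely illustrative; it is not the sphere-wake data of the study.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD of snapshot pairs Y ~= A X, truncated to rank r.
    Returns the eigenvalues and modes of the reduced linear operator."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (Y @ Vh.conj().T / s) @ W           # exact DMD modes
    return eigvals, modes

# Toy linear system: decaying oscillation, eigenvalues 0.95 * exp(+-0.3i)
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
data = np.zeros((2, 31))
data[:, 0] = [1.0, 0.0]
for k in range(30):
    data[:, k + 1] = A @ data[:, k]

eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=2)
```

    Mode interpolation, as used in the paper, then blends DMD modes obtained from different operating conditions (here: transient versus limit cycle) into state-dependent modes.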

  12. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation for monitoring weather hazards such as flash floods is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results at the different phases evaluated (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The finding suggests that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.
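    The three evaluation indexes used above (MAE, RMSE, and R²) can be computed directly from observed and interpolated values; the numbers below are hypothetical PWV values for illustration only.

```python
import numpy as np

def evaluate(obs, pred):
    """MAE, RMSE and coefficient of determination R^2 of predictions."""
    err = pred - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

obs = np.array([30.1, 28.4, 31.0, 29.5, 27.8])   # hypothetical observed PWV (mm)
pred = np.array([29.8, 28.9, 30.6, 29.9, 28.0])  # hypothetical interpolated PWV
mae, rmse, r2 = evaluate(obs, pred)
```

    Lower MAE/RMSE and an R² closer to 1 indicate the more accurate and reliable interpolation technique.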

  13. Harmonic balance optimization of terahertz Schottky diode multipliers using an advanced device model

    NASA Technical Reports Server (NTRS)

    Schlecht, E. T.; Chattopadhyay, G.; Maestrini, A.; Pukala, D.; Gill, J.; Mehdi, I.

    2002-01-01

    Substantial progress has been made recently in the advancement of solid state terahertz sources using chains of Schottky diode frequency multipliers. We have developed a harmonic balance simulator and a corresponding diode model that incorporates many of the additional factors that influence diode behavior.

  14. Helicity-Selective Enhancement and Polarization Control of Attosecond High Harmonic Waveforms Driven by Bichromatic Circularly Polarized Laser Fields.

    PubMed

    Dorney, Kevin M; Ellis, Jennifer L; Hernández-García, Carlos; Hickstein, Daniel D; Mancuso, Christopher A; Brooks, Nathan; Fan, Tingting; Fan, Guangyu; Zusin, Dmitriy; Gentry, Christian; Grychtol, Patrik; Kapteyn, Henry C; Murnane, Margaret M

    2017-08-11

    High harmonics driven by two-color counterrotating circularly polarized laser fields are a unique source of bright, circularly polarized, extreme ultraviolet, and soft x-ray beams, where the individual harmonics themselves are completely circularly polarized. Here, we demonstrate the ability to preferentially select either the right or left circularly polarized harmonics simply by adjusting the relative intensity ratio of the bichromatic circularly polarized driving laser field. In the frequency domain, this significantly enhances the harmonic orders that rotate in the same direction as the higher-intensity driving laser. In the time domain, this helicity-dependent enhancement corresponds to control over the polarization of the resulting attosecond waveforms. This helicity control enables the generation of circularly polarized high harmonics with a user-defined polarization of the underlying attosecond bursts. In the future, this technique should allow for the production of bright highly elliptical harmonic supercontinua as well as the generation of isolated elliptically polarized attosecond pulses.

  15. KOSMOS: a universal morph server for nucleic acids, proteins and their complexes.

    PubMed

    Seo, Sangjae; Kim, Moon Ki

    2012-07-01

    KOSMOS is the first online morph server to be able to address the structural dynamics of DNA/RNA, proteins and even their complexes, such as ribosomes. The key functions of KOSMOS are the harmonic and anharmonic analyses of macromolecules. In the harmonic analysis, normal mode analysis (NMA) based on an elastic network model (ENM) is performed, yielding vibrational modes and B-factor calculations, which provide insight into the potential biological functions of macromolecules based on their structural features. Anharmonic analysis involving elastic network interpolation (ENI) is used to generate plausible transition pathways between two given conformations by optimizing a topology-oriented cost function that guarantees a smooth transition without steric clashes. The quality of the computed pathways is evaluated based on their various facets, including topology, energy cost and compatibility with the NMA results. There are also two unique features of KOSMOS that distinguish it from other morph servers: (i) the versatility in the coarse-graining methods and (ii) the various connection rules in the ENM. The models enable us to analyze macromolecular dynamics with the maximum degrees of freedom by combining a variety of ENMs from full-atom to coarse-grained, backbone and hybrid models with one connection rule, such as distance-cutoff, number-cutoff or chemical-cutoff. KOSMOS is available at http://bioengineering.skku.ac.kr/kosmos.
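    The ENM-based normal mode analysis that KOSMOS performs can be sketched, under simplifying assumptions, as a Gaussian-network calculation with a distance-cutoff connection rule. The toy chain of pseudo-atoms below is illustrative only; this is not KOSMOS's actual implementation.

```python
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """Gaussian-network Kirchhoff (connectivity) matrix: nodes within
    the distance cutoff are connected by identical springs."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    K = -(d < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))
    return K

# Toy chain of 6 pseudo-atoms spaced 3.8 angstroms apart (Calpha-like)
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(6)])
K = kirchhoff(coords)

# Normal modes: eigenvectors of K; the first eigenvalue is ~0 (rigid motion)
evals, evecs = np.linalg.eigh(K)

# Relative B-factors: sum over non-zero modes of u_ik^2 / lambda_k;
# chain ends are expected to be the most mobile
b = np.sum(evecs[:, 1:] ** 2 / evals[1:], axis=1)
```

    Swapping the connection rule (distance-cutoff, number-cutoff, chemical-cutoff) or the coarse-graining level changes only how `K` is built, which is exactly the versatility the server advertises.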

  16. Dose calculation for photon-emitting brachytherapy sources with average energy higher than 50 keV: report of the AAPM and ESTRO.

    PubMed

    Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F

    2012-05-01

    Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific (192)Ir, (137)Cs, and (60)Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
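    The grid-size-dependent interpolation of tabulated TG-43 parameters, such as the radial dose function g(r), can be sketched with linear interpolation on the tabulated grid. The table values below are hypothetical illustrations, and the report's recommended interpolation and extrapolation techniques differ in detail (e.g., outside the tabulated range).

```python
import numpy as np

# Hypothetical tabulated radial dose function g(r); values for illustration only
r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 10.0])   # radius (cm)
g_tab = np.array([1.04, 1.00, 0.97, 0.93, 0.84, 0.68, 0.57])

# Linear interpolation at dose-calculation grid points within the table range
g = np.interp([1.5, 4.0], r_tab, g_tab)
```

    A finer dose-calculation grid reduces the interpolation error between tabulated radii, which is one of the accuracy dependencies the report examines.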

  17. Polarization control of high order harmonics in the EUV photon energy range.

    PubMed

    Vodungbo, Boris; Barszczak Sardinha, Anna; Gautier, Julien; Lambert, Guillaume; Valentin, Constance; Lozano, Magali; Iaquaniello, Grégory; Delmotte, Franck; Sebban, Stéphane; Lüning, Jan; Zeitoun, Philippe

    2011-02-28

    We report the generation of circularly polarized high order harmonics in the extreme ultraviolet range (18-27 nm) from a linearly polarized infrared laser (40 fs, 0.25 TW) focused into a neon filled gas cell. To circularly polarize the initially linearly polarized harmonics we have implemented a four-reflector phase-shifter. Fully circularly polarized radiation has been obtained with an efficiency of a few percent, significantly higher than that of the currently demonstrated direct generation of elliptically polarized harmonics. This demonstration opens up new experimental capabilities based on high order harmonics, for example, in biology and materials science. The inherent femtosecond time resolution of high order harmonic generating table top laser sources renders them an ideal tool for the investigation of ultrafast magnetization dynamics now that the magnetic circular dichroism at the absorption M-edges of transition metals can be exploited.

  18. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
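    The coordinate-reading task the scale cards support amounts to mapping a fractional position along a grid interval to a value: linearly for linear grids, exponentially for single-cycle logarithmic grids. A small sketch (function names are illustrative, not from the report):

```python
import math

def linear_axis_value(frac, lo, hi):
    """Value at fractional position `frac` between two linear gridlines."""
    return lo + frac * (hi - lo)

def log_axis_value(frac, decade_lo):
    """Value at fractional position `frac` within one decade of a log axis."""
    return decade_lo * 10 ** frac

# A point a quarter of the way between the 0 and 8 gridlines on a linear axis
v_lin = linear_axis_value(0.25, 0.0, 8.0)

# A point 30.1% of the way between the 10 and 100 gridlines on a log axis
v_log = log_axis_value(0.301, 10.0)   # close to 20, since log10(2) ~ 0.301
```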

  19. Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with a 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM, while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Tomographic Emission (GATE). We defined LSO detectors as a scanner ring and 350 μm pixelated CZT detectors as an insert ring. GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not factor in positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, the MC-simulated data, and the theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for charge-sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved resolution compared with the image reconstructed without it.

  20. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

    PubMed

    Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex

    2017-01-01

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
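    The effect described above, temporal interpolation shrinking motion estimates, can be demonstrated on a toy head-position trace; the numbers are illustrative only, not fMRI data.

```python
import numpy as np

# Hypothetical one-axis head-position trace (mm) with a brief motion spike
pos = np.array([0.0, 0.0, 0.1, 1.2, 0.1, 0.0, 0.0])

def framewise_displacement(p):
    """Frame-to-frame absolute displacement along one axis."""
    return np.abs(np.diff(p))

fd_before = framewise_displacement(pos).max()

# Outlier replacement: linearly interpolate the spiky sample from neighbours,
# as a temporal-interpolation step applied before motion estimation might
clean = pos.copy()
clean[3] = 0.5 * (pos[2] + pos[4])
fd_after = framewise_displacement(clean).max()
```

    Estimating motion after the replacement step makes the scan look far stiller than it was, which is precisely why the authors recommend obtaining motion estimates before any image processing.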

Top